A collection of links on academic email etiquette

Don’t email Brian and call him ‘Mr Cox’. It gives him a bad hair day.

When I was an undergraduate student, email was still not widely used, and the idea of emailing a lecturer or professor would have been quite daunting. Times have changed, however, and nowadays most academics deal with a steady stream of emails from students throughout the year. This is a good thing in many ways; it helps to break down barriers between staff and students and can be a very efficient way to communicate. Unfortunately, many students don’t follow some basic rules of politeness when contacting staff, which leads to faculty members getting irritated, and to students receiving witheringly sarcastic responses or links to Let Me Google That For You.

Here are a few pieces I’ve collected that set out precisely how best to communicate with your advisor, lecturer or professor. First of all, we have a guide from Wellesley College titled How to Email Your Professor, shared on Twitter by Tom Hartley. Tom also went to the bother of conducting a survey about this kind of thing, and presented the results on his blog. He highlights some interesting cultural differences, particularly between the UK and the US – well worth reading through.

Next is another set of guidelines from Akira O’Connor, also with some interesting contributions from others in the comments.

Last is a really terrific set of slides by Cedar Riener, which you can find here. He also provides a sample ‘How to miss a class’ e-mail, with added annotations here. Brilliant.

I won’t bother repeating much of what these excellent sources suggest, except to say that the common threads through them all seem to be:

1) Be polite, and relatively formal (at least at first).
2) Don’t ask stupid questions.
3) Don’t make stupid (i.e. any) spelling and grammar mistakes.
4) For the love of all that is good and holy, get the name and title of the person you’re emailing correct.

How hard can that be, eh?

Some notes on the use of voice keys in reaction time experiments

Cedrus SV-1 voice key device

Somebody asked me about using a voice key device the other day, and I realised it’s not something I’d ever addressed on here. A voice key is often used in experiments where you need to obtain a vocal response time, for instance in a vocal Stroop experiment, or a picture-naming task.

There are broadly two ways of doing this. The first is easy, but expensive, and not very good. The second is time-consuming, but cheap and very reliable.

The first method involves using a bit of dedicated hardware, essentially a microphone pre-amp, which detects the onset of a vocal response and sends out a signal when it occurs. The Cedrus SV-1 device pictured above is a good example. This is the easy option, because all your vocal reaction times are logged for you, but it’s not totally reliable: you have to pre-set a loudness threshold on the box, and it might miss responses if the person talks quietly or there’s some unexpected background noise. It should be relatively simple to get whatever stimulus software you’re running to recognise the input from the device and log it as a response.
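
As a rough illustration of that last point: many hardware voice keys show up on the host machine as a (USB-)serial device. Here’s a minimal Python sketch of logging onsets from such a device, assuming pyserial; the port name and the one-byte-per-onset protocol are assumptions for illustration, not the documented Cedrus protocol (Cedrus boxes speak their own XID protocol, so check the manual):

```python
# Minimal sketch: log voice-key onset signals arriving over a serial port.
# Assumes the device sends a byte whenever it detects a vocal onset --
# an assumption; check your device's actual protocol before relying on this.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # assumption: adjust for your system (e.g. "COM3" on Windows)

with serial.Serial(PORT, baudrate=115200, timeout=0.001) as ser:
    trial_start = time.perf_counter()
    while True:
        data = ser.read(1)  # returns b"" if nothing arrived within the timeout
        if data:
            rt = time.perf_counter() - trial_start
            print(f"Vocal onset detected, RT = {rt * 1000:.1f} ms")
            break
```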

The other way is very simple to set up, in that you just plug a microphone into the sound card of your stimulus computer and record the vocal responses on each trial as .wav files. Stimulus software like PsychoPy can do this very easily. The downside is that you then have to examine those sound files in some way to get the reaction time data out – this could mean literally inspecting the waveform for each trial in a sound editor (such as Audacity), manually placing markers at the start of the speech, and calculating vocal RTs relative to the start of the file/trial. This is very reliable and precise, but obviously time-consuming. Manually marking sound files is still the ‘gold standard’ for voice-onset reaction times. Ideally, you should get someone else to do this for you, so they’ll be ‘blind’ to which trials are which, and unbiased in calculating the reaction times. You can also potentially automate the process using a bit of software called SayWhen (paper here).
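
To give a flavour of how that kind of automation works, here’s a minimal sketch of threshold-based onset detection on a recorded trial, assuming NumPy and SciPy. The window size and threshold are arbitrary assumptions that would need tuning (and spot-checking against the actual waveforms) for any real setup – SayWhen does something considerably more sophisticated:

```python
# Minimal sketch: estimate voice onset in a per-trial .wav recording by
# finding the first window whose RMS amplitude exceeds a noise threshold.
import numpy as np
from scipy.io import wavfile

def voice_onset_ms(path, window_ms=10, threshold=0.1):
    rate, data = wavfile.read(path)
    if data.ndim > 1:                        # stereo -> mono
        data = data.mean(axis=1)
    data = data / np.max(np.abs(data))       # normalise to [-1, 1]
    win = int(rate * window_ms / 1000)
    n_windows = len(data) // win
    rms = np.sqrt(np.mean(
        data[:n_windows * win].reshape(n_windows, win) ** 2, axis=1))
    above = np.nonzero(rms > threshold)[0]
    if len(above) == 0:
        return None                          # no response detected on this trial
    return above[0] * window_ms              # onset relative to start of file/trial

print(voice_onset_ms("trial_001.wav"))       # hypothetical filename
```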

Example of a speech waveform, viewed in Audacity

Which method is best depends largely on the number of trials in your experiment. The second method is definitely superior (and cheaper, and easier to set up), but if you have eleventy-billion trials, manually examining them all post hoc may not be very practical, and a more automatic solution might be worthwhile. If you were really clever you could try to do both at once – have two computers set up, the first running the stimulus program and the second recording the voice responses, but also running a bit of code that signals the first computer when it detects a voice onset. It might be tricky to set up and get working, but once it was, you’d have all your RTs logged automatically on the first computer, plus the .wav files recorded on the second for post hoc analysis/data-cleaning/error-checking etc. if necessary.
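
For what it’s worth, here’s a minimal sketch of the second computer’s side of that idea. The `sounddevice` library, the UDP message, the address/port and the threshold are all assumptions for illustration, and a real version would debounce so only the first crossing per trial gets sent:

```python
# Minimal sketch of the 'second computer' idea: monitor the microphone and
# fire a UDP packet at the stimulus machine when the input level crosses a
# threshold.
import socket

import numpy as np
import sounddevice as sd

STIM_PC = ("192.168.0.10", 9999)  # assumption: address/port of the stimulus PC
THRESHOLD = 0.1                    # normalised amplitude; tune for your setup

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def callback(indata, frames, time_info, status):
    # indata is a float32 array in [-1, 1]; check each audio block
    if np.max(np.abs(indata)) > THRESHOLD:
        sock.sendto(b"onset", STIM_PC)

with sd.InputStream(channels=1, callback=callback):
    input("Monitoring microphone; press Enter to stop.\n")
```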

Happy vocalising!

***UPDATE***

Two researchers have pointed out in the comments that a system for automatically generating response times from sound files already exists, called CheckVocal. It seems to be designed to work with the DMDX experimental programming system (free software that uses Microsoft’s DirectX system to present stimuli). I’m not sure whether it’ll work with other systems, but it’s worth looking at… I’ve also added the information to my Links page.

Back to school special

Unimatrix-0 High School has really excellent attendance and discipline statistics

So, another academic year is about to heave into view over the horizon, and what better time to take stock of your situation, make sure your gear is fit for purpose, and think about levelling-up your geek skills to cope with the rigours of the next year of academic life. If you need any hardware, Engadget’s Back to School review guides are a great place to start, with reviews of all kinds of things from smartphones to gaming systems, all arranged helpfully into several price categories.

If you really want to be ahead of the game this year though, you’ll need to put in a bit of extra time and effort, and learn some new skills. Here are my recommendations for what computing skills psychology students should be learning, for each year of a standard UK BSc in Psychology.*

If you’re starting your 1st year…

A big part of the first year is about learning basic skills like academic writing, synthesising information, referencing etc. Take a look at my computer skills checklist for psychology students and see how you measure up. Then, the first thing you need to do, on day one, is start using a reference manager. This is an application that will help you organise journal articles and other important sources for your whole degree, and will even do your essay referencing for you. I like Mendeley, but Zotero is really good as well. Both are totally free. Download one of them right now. This is honestly the best bit of advice I can possibly give to any student. Do it. I just can’t emphasise this enough. Really. OK. Moving on.

Next you need to register for a Google account, if you don’t have one already. Here’s why. Then use your new Google username to sign up for Feedly and start following some psychology and neuroscience blogs. Here and here are some good lists to get you started. If you’re a real social-media fiend, sign up for Twitter and start following some of these people.

You may want to use the 5GB of free storage you get with Google Drive as a cloud back-up space for important documents, or you may want to sign up for a Dropbox account as well. Use one or the other, or preferably both, because none of your data is safe. Ever.

You’ll want to start getting to know how to use online literature databases. Google Scholar is a good start, but you’ll need to get familiar with PubMed, PsycInfo and Web of Knowledge too.

If you’re really keen and want to learn some general skills that will likely help you out in the future, learn how to create a website with WordPress or GitHub Pages. Or maybe download GIMP and get busy learning some picture editing.

If you’re starting your 2nd year…

This is when things get more serious and you probably can’t expect to turn up to tutorials with an epically massive hangover and still understand everything that’s going on. Similarly, you need to step it up a level with the geekery as well.

You probably learned some SPSS in your statistics course in the first year. That’s fine, but you probably don’t have a licence that allows you to play with it on your own computer. PSPP is the answer – it’s a free application that’s made to look and work just like SPSS – it even runs SPSS syntax code. Awesomes. Speaking of which, if you’re not using the syntax capabilities of SPSS and doing it all through the GUI, you’re doing it wrong. 

If you really want to impress, you’ll start using R for your lab reports. The seriously hardcore will just use the base R package, but don’t feel bad if you want to use R-Commander or Deducer to make life a bit easier. Start with the tutorials here.

If you’re starting your 3rd year…

This is the year when you’ll probably have to do either a dissertation, a research project, or maybe both. If you’re not using a reference manager already, trying to do a dissertation without one is utter lunacy – start now.

For your research project, try to do as much of it as you can yourself. If you’re doing some kind of survey project, think about doing it online using Google Forms or LimeSurvey. If you’re doing a computer-based task, then try to program it yourself using PsychoPy. Nothing will impress your project supervisor more than volunteering to do the task/survey set-up yourself. Then of course you can analyse the data using the mad statz skillz you learned in your second year. Make some pretty-looking figures for your final report using the free, open-source Veusz.
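
To show just how approachable PsychoPy code is, here’s a minimal sketch of a single reaction-time trial; the window size, stimulus text and key list are placeholder choices, not anything prescribed:

```python
# Minimal sketch of a single reaction-time trial in PsychoPy:
# show a stimulus, wait for a keypress, and record the RT.
from psychopy import core, event, visual

win = visual.Window(size=(800, 600), color="black")
stim = visual.TextStim(win, text="PRESS SPACE", color="white")

clock = core.Clock()
stim.draw()
win.flip()          # stimulus appears on this screen refresh
clock.reset()       # time RTs from stimulus onset

keys = event.waitKeys(keyList=["space"], timeStamped=clock)
key, rt = keys[0]
print(f"Response: {key}, RT = {rt * 1000:.0f} ms")

win.close()
core.quit()
```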

Learning this stuff might sound like a lot to ask when you also have essays to write, tutorials to prepare for, and parties to attend. However, these are all CV-boosting skills that could prove invaluable after you graduate. If you want to continue studying at Masters or PhD level, potential supervisors will be looking for applicants with these kinds of skills, and solid computer knowledge can also help to distinguish you from all the other psychology graduates when applying for ‘normal’ jobs too. It really is the best thing you can learn, aside from your course material, naturally.

Have I missed anything important? Let me know in the comments!

Good luck!

* I realise US colleges and other countries have a different structure, but I think these recommendations will still broadly apply.

Some new bits of stats software and some miscellaneous links

Hi kids. Two new pieces of stats/plotting software for you, plus some other stuff.

First up is a new (to me, anyway) scientific plotting package called Veusz. It’s written in Python, is completely free and open-source, works with any OS, and basically looks pretty useful. I’ve been using Prism for a while now, but I’ll definitely try out Veusz next time I need to do some plotting – I’d prefer to use something open-source.

The new statistics software is called Wizard, which is unfortunately a paid, Mac-only application. If you’re dissatisfied with SPSS (and let’s be honest, who isn’t?) it might be worth the $79 price though. I haven’t tried it out personally yet, but the interface looks really, really nice, and it seems fairly comprehensive in terms of features as well. Definitely one to think about for Mac users.

Next up is a new reference manager called Paperpile. I’m a big fan of Mendeley, but some of Paperpile’s features are pretty attractive – it lives as a Chrome extension and uses Google Drive for online storage of the PDFs. Pretty nice. Unfortunately it’s still in a private beta phase, and will cost $29 per year when it’s released.

I was thinking about a new web page recently, and solicited opinions on the current best build-me-a-free-website service. The extremely helpful @Nonmonotonix suggested using GitHub Pages to both design and host sites – it looks like an excellent system. He even wrote a set of instructions on his blog here, for how to get started with GitHub Pages. Another good suggestion was something called Bootstrap, which has the promising tagline “By nerds, for nerds.”

Lastly, a couple of packages for neuroimagers. I’ve just been made aware of a really good collaborative, open-source software project for the analysis of EEG/MEG data – called BrainStorm. Looks like a very capable suite of tools. I’ve also just come across the PyMVPA project, which does exactly what it says on the tin – Multivariate Pattern Analysis in Python. Nice.

All of these links, and many, many more can of course be found on my newly-updated Links page.

Toodle-oo.

MakeHuman – free, open-source software for 3D-modelling of humans

It’s always exciting when you find a new piece of cool software to play with, and even more so when what you’ve found is totally free, open-source, and available on all platforms. So it is with MakeHuman – an utterly awesome bit of kit. I wrote a piece before about FaceGen, which is also pretty cool, but MakeHuman takes it to the next level, by modelling all kinds of body characteristics as well as faces, and, of course, doing it all for free.

I’ve just downloaded it and played with it for a few minutes, but I’m already impressed by the range of options available. Through a very simple slider- and radio-button-based interface you have very fine control over all kinds of variables, including gender, weight, age, height, and many more, with endless fine tweaking of body and face possible if you dig through the options. There are also basic libraries of clothes and poses included. Here’s a, well, a human I made in just a couple of minutes:

A full-body figure created in MakeHuman

And here’s a close-up of the face, after I added some hair and gave him a nasty expression:

A close-up of the face, with hair and a nasty expression added

Pretty cool indeed. This could potentially be a massively useful tool for people interested in face/body perception – using this, one could generate a large number of highly-controlled experimental stimuli that just differ in one aspect (say, weight, or race… whatever) very easily and quickly. Download it and have a play around!

Dr MacLove, or How I Learned to Stop Worrying and Love Apple

Back in the 90s it was easy; if you were a graphic designer, or some kind of proto-hipster with a trust fund, you used a Mac. Everyone else used a PC. Then in the 2000s Apple started making iThings, everyone started going absolutely batshit crazy over them, and suddenly Macs were everywhere as well.

I’ve used both in parallel since about 2003 – I started off with a G5 Power Mac as a desktop, complemented by a Windows laptop, but that’s now reversed, with a Windows 7 PC on my desk at work and a MacBook Air. This shift was significant – the desktop is what’s provided to me by my job; the laptop is my personal computer, what I choose to buy for myself. Despite using OS X since 2003, I only really started liking it when I got my first Apple laptop – a 2009 MacBook Pro. This was also around the time that I got an iPhone 3G, which seemed like some incredibly advanced artefact from the future compared to the chunky ‘smartphone’ I was using before, which ran Windows Mobile 6.5 – an unbelievably awful OS that I could never get to work as it should. I’ve since swapped the Pro for a 2012 MacBook Air, bought an iPad mini, and am on my third iPhone, so my conversion is pretty much complete. I’ve looked at Android phones and tablets, honestly, I have. Some of them are very nice, but the OS just always seems too… busy. Maybe it’s my age, but I just want something I can pick up and use without a massive learning curve. I’m happy to stand up and say I’m an Apple guy, and it took a while, but I’m finally actually OK with that.

Over time I’ve found Mac versions, or fairly close equivalents, for all the software I used on my PC. At first I sometimes used to boot into Windows using Boot Camp to run a couple of applications, but I deleted the partition a while ago – I just wasn’t using it anymore. I probably won’t be spending money on any Windows machines for the foreseeable future. I know that Mac vs. Windows is one of the most hackneyed, pointless and bitter debates on the entire internet, but I just couldn’t resist setting out my own bit of troll-bait. Here, then, are the major reasons I became a Mac convert – your mileage may vary, personal opinions only, blah blah.

The MacBook Air
The Air is the machine that kicked off the ultrabook trend and, to my mind, PC manufacturers have still yet to equal the Air’s amazing combination of power, usability and portability. My 2012 model is greased-lightning-off-a-shovel fast – it chews through a set of fMRI pre-processing twice as fast as my old MacBook Pro, and that was no slouch either. The 2013 models are even faster, with better graphics and a frankly ridiculous 12-hour battery life. If you can live with a relatively small (128/256GB) amount of storage, it’s a peach of a machine. Plus, I can carry it around all day and barely even notice the weight. For my money, the Air is the best-value computer out there – I don’t think the step up in performance you get with the Pro is worth the price, personally.

The Apple Trackpad
Using the trackpad on a Windows laptop feels like going back to the stone age once you’re used to the fantastic set of multi-touch gestures on an Apple laptop. I’ve never found one on a PC that even comes close.

Migration Assistant
Remember the excitement of getting a new computer, and then the agony of re-installing all your applications and tweaking the system to get it the way you like it? That pain doesn’t exist for me anymore. Apple’s Migration Assistant lets you make a Time Machine backup of your old computer onto an external drive, plug that into the new one, and everything is reproduced: your applications, desktop, OS settings, bookmarks, everything. It’s awesome.

Exposé/Spaces
OS X’s system of virtual desktops is brilliant, and essential for me, now that I’ve got used to it; flipping between desktops with ctrl+left/right arrow keys is fast and smooth, and means you can really extend the limits of what can be done on a 13″ laptop screen. I have no idea why Windows doesn’t implement virtual desktops.

Unix
In the last couple of years I’ve switched to using FSL as my main fMRI-analysis platform. FSL is developed on Macs and runs well on other Unix systems, but needs some kind of Unix emulation to run on Windows. Urgh – forget it. I also like being able to open up a terminal and make little tweaks to the OS and applications. Of course, Matlab/SPM and BrainVoyager also run beautifully on OS X.

Installing/Uninstalling
To install an application on OS X you drag it to a folder. To uninstall it you drag it to the recycle bin. That’s it.

Mac-only software
OsiriX is, without any shadow of a doubt, the best free DICOM image viewer available, and it’s Mac-only. I’d really miss other things like Automator too, plus of course Apple’s super-fast and comprehensive Spotlight search is awesome.

No crapware
You know all that shit you have to uninstall as soon as you get a new PC? Free trials of anti-virus software, media players, desktop icons that link to shitty Yahoo services you have no intention of ever using? Doesn’t exist in OS X.

 

Having said all that, of course there are annoying things that drive me crazy about OS X too. No system is perfect after all…

No Cut/Paste
You can copy and paste files between two locations, but you can’t cut and then paste. Seriously, Apple, would this really be so hard?

Annoying behaviour of the green button
The green button at the top of the window – the one I still think of as the ‘maximise’ button – is annoying. It seems to re-size the window pretty much randomly. I hate it.

iTunes
For the love of all that is holy Apple, will you please do something about the benighted clusterfuck that is iTunes? It’s utterly heinous.

Feel free to disagree with me in the comments. If you think Windows 8 is the greatest OS ever devised, please say so. Personally I think it’s a botched compromise that tries to bring touch functionality to laptops and laptop functionality to tablets, and does neither well, but hey, that’s just my opinion. Windows is like Star Trek movies – every other one in the series is decent, which means Windows 9 should actually be pretty usable.

Anyway – flame on!

Psychology experiments enter the post-PC era: OpenSesame now runs on Android

I’ve mentioned OpenSesame briefly on here before, but for those of you who weren’t keeping up, it’s a pretty awesome, free psychology experiment-development application, built using the Python programming language, and it has a lot in common with PsychoPy (which is also awesome).

The recently-released new version of OpenSesame has just taken an important step, in that it now supports the Android mobile operating system, meaning that it can run natively on Android tablets and smartphones. As far as I’m aware, this is the first time that a psychology-experimental application has been compiled (and released to the masses) for a mobile OS.

This is cool for lots of reasons. It’s an interesting technical achievement; Android is a very different environment from a desktop OS, being focused heavily on touch interfaces. Such interfaces are now ubiquitous, and are much more accessible, in the sense that people who may struggle with a traditional mouse/keyboard can use them relatively easily. Running psychology experiments on touch-tablets may enable the study of populations (e.g., the very young, the very old, or various patient groups) that would be very difficult with a more ‘traditional’ system. Similarly, conducting ‘field’ studies might be much more effective; I can imagine handing a participant a tablet to complete some kind of task in the street, or in a shopping mall, for instance. Also, it may open up the possibility of using the variety of sensors in modern mobile devices (light, proximity, accelerometers, magnetometers) in interesting and creative ways. Finally, the hardware is relatively cheap, and (of course) portable.

I’m itching to try this out, but unfortunately don’t have an Android tablet. I love my iPad mini for lots of reasons, but the more restricted nature of Apple’s OS means that it’s unlikely we’ll see a similar system on iOS anytime soon.

So, very exciting times. Here’s a brief demo video of OpenSesame running on a Google Nexus 7 tablet (in the demo the tablet is actually running a version of Ubuntu Linux, but with the new version of OpenSesame it shouldn’t be necessary to replace the Android OS). Let me know in the comments if you have any experience with tablet-experiments, or if you can think of any other creative ways they could be used.

TTFN.

 

Open-Source software for psychology and neuroscience

Researchers typically use a lot of different pieces of software in the course of their work; it’s part of what makes the job so varied. Separate packages might be used for creating experimental stimuli, programming an experiment, logging data, statistical analysis, and preparing work for publication or conferences. Until fairly recently there was little option but to use commercial software in at least some of these roles. For example, SPSS has long been the de facto statistics tool in many departments, and the viable alternatives were also commercial – there was little choice but to fork over the money. Fortunately, there are now pretty viable alternatives for cash-strapped departments and individual researchers. There’s a lot of politics around the open-source movement, but for most people the important aspects are that the software is provided for free, and that (generally) it’s cross-platform (or can be compiled to be so). All that’s required is to throw off the shackles of the evil capitalist oppressors, or something.

So, there’s a lot of software listed on my Links page, but I thought I’d pick out my favourite bits of open-source software that are most useful for researchers and students in psychology.

First up – general office-type software; there are a couple of good options here. The OpenOffice suite has been around for 20 years, and contains all the usual tools (word processor, presentation-maker, spreadsheet tool, and more). It’s a solid, well-designed system that can pretty seamlessly read and write the Microsoft Office XML-based (.docx, .pptx) file formats. The other option is LibreOffice, which has the same roots as OpenOffice, and similar features. Plans are apparently underway to port LibreOffice to iOS and Android – nice. The other popular free option for presentations is, of course, Prezi.

There are lots of options for graphics programs, however the two best in terms of features are without a doubt GIMP (designed to be a free alternative to Adobe Photoshop) and Inkscape (vector graphics editor – good replacement for Adobe Illustrator). There’s a bit of a steep learning curve for these, but that’s true of their commercial counterparts too.

Programming experiments – if you’re still using a paid system like E-Prime or Presentation, you should consider switching to PsychoPy – it’s user-friendly, genuinely cross-platform, and absolutely free. I briefly reviewed it before, here. Another excellent option is OpenSesame.

For statistical analysis there are a couple of options. Firstly, if you’re an SPSS user who’s pretty comfortable with it (but fed up with the constant hassles of the licensing system), you should check out PSPP: a free stats program designed to look and feel like SPSS, and to replicate many of its functions. You can even use your SPSS syntax – awesome. The only serious issue is that it doesn’t include the SPSS options for complex GLM models (repeated-measures ANOVA, etc.). Hopefully these will be added at some future point. The other popular option is the R language for statistical computing. R is really gaining traction at the moment. The command-line interface is a bit of a hurdle for beginners, but that can be mitigated somewhat by front-ends like R Commander or RStudio.

For neuroscience there’s the NeuroDebian project – not just a software package, but an entire operating system, bundled with a comprehensive suite of neuroscience tools, including FSL, AFNI and PyMVPA, plus lots of others. There really are too many bits of open-source neuro-software to list here, but a good place to find some is NITRC.org.

So, there you are people; go open-source. You have nothing to lose but your over-priced software subscriptions.

TTFN.

BPS Hackathon – 21st June; LaTeX, R, Python goodness

Very exciting news here: I’ve just been invited to the first British Psychological Society (Maths, Statistics and Computing Section) Psychology open textbook hackathon!

Inspired by this event (where people got together and wrote an open-source maths textbook in a weekend), the day aims to raise awareness and skills, as well as perhaps produce some usable output.

The organisers are Thom Baguley of Nottingham Trent University (and the Serious Stats blog and book) and Sol Nte of Manchester University. They’ve very kindly invited me as a guest, so I’ll be hanging out and learning some new tricks myself, I’m sure.

Here’s the flyer for the event, with sign-up details etc. It’s free, but strictly limited to 20 places – if you’re keen, best be quick… (click the pic below for a bigger version):

 

BPS hackathon flyer

Comment on the Button et al. (2013) neuroscience ‘power-failure’ article in NRN

Statistical Spidey knows the score.

An article was published in Nature Reviews Neuroscience yesterday which caused a bit of a stir among neuroscientists (or at least among neuroscientists on Twitter, anyway). The authors cleverly used meta-analytic papers to estimate the ‘true’ size of an effect, and then (using the G*Power software) calculated the power of each individual study that made up the meta-analysis, based on its sample size. Their conclusions are pretty damning for the field as a whole: an overall median power of 21%, dropping to 8% in some sub-fields. This means that out of 100 studies conducted on a genuine effect, only around 21 will actually demonstrate it.
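
To make that logic concrete, here’s a minimal sketch of this kind of calculation in Python using statsmodels (the effect size and sample size below are made-up illustrative values, not figures from the paper):

```python
# Minimal sketch of the Button et al. logic: given a meta-analytic estimate
# of the 'true' effect size, compute the power of an individual study from
# its sample size. Values are illustrative assumptions only.
from statsmodels.stats.power import TTestIndPower

true_effect = 0.5   # assumed 'true' Cohen's d from a meta-analysis
n_per_group = 15    # sample size of one individual study

power = TTestIndPower().power(effect_size=true_effect,
                              nobs1=n_per_group, alpha=0.05)
print(f"Power to detect d={true_effect} with n={n_per_group}/group: {power:.2f}")

# The inverse question: the n per group needed for 80% power
n_needed = TTestIndPower().solve_power(effect_size=true_effect,
                                       power=0.8, alpha=0.05)
print(f"n per group needed for 80% power: {n_needed:.0f}")
```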

The article has been discussed and summarised at length by Ed Yong, Christian Jarrett, and by Kate Button (the study’s first author) on Suzy Gage’s Guardian blog, so I’m not going to re-hash it any more here. The original paper is actually very accessible and well-written, and I encourage interested readers to start there. It’s definitely an important contribution to the debate; however, as always, there are alternative perspectives. I generally have a problem with over-reliance on power analyses (they’re often required for grant applications and other project proposals). Prospective power analyses (i.e. those performed before a piece of research is conducted, in order to tell you how many subjects you need) use an estimate of the effect size you expect to achieve – usually derived from previous work that has examined a (broadly) similar problem using (broadly) similar methods. This estimate is essentially a wild shot in the dark (especially because of some of the issues and biases discussed by Button et al. that are likely to operate in the literature), and the resulting power analysis therefore tells you (in my opinion) nothing very useful. Button et al. get around this issue by using the effect size from meta-analyses to estimate the ‘true’ effect size in a given literature area – a neat trick.

The remainder of this post deals with power-issues in fMRI, since it’s my area of expertise, and necessarily gets a bit technical. Readers who don’t have a somewhat nerdy interest in fMRI-methods are advised to check out some of the more accessible summaries linked to above. Braver readers – press on!

An alternative approach used in the fMRI field, and one that I’ve been following when planning projects for years, is more empirical. Murphy and Garavan (2004) took a large sample of 58 subjects who had completed a Go/No-Go task and analysed subsets of different sizes to look at how the reproducibility of the results varies with sample size. They showed that reproducibility (assessed by correlation of the statistical maps with the ‘gold standard’ of the entire dataset; Fig. 4) reaches 80% at about 24 or 25 subjects. By this criterion, many fMRI studies are underpowered.
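
Here’s a minimal sketch of that style of subsampling analysis, assuming one statistical map per subject as a NumPy array. Everything here – the simulated data, the simple one-sample t-map, and Pearson correlation as the reproducibility metric – is an illustrative assumption, not Murphy and Garavan’s exact procedure:

```python
# Minimal sketch of a Murphy & Garavan-style reproducibility analysis:
# correlate group t-maps from random subsets of subjects with the
# 'gold standard' t-map from the full sample. Data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_voxels = 58, 5000
signal = 0.3 * rng.standard_normal(n_voxels)   # weak effect common to all subjects
maps = signal + rng.standard_normal((n_subjects, n_voxels))

def group_tmap(subject_maps):
    return stats.ttest_1samp(subject_maps, 0, axis=0).statistic

gold = group_tmap(maps)   # 'gold standard': t-map from the whole sample

for n in (8, 16, 24, 32):
    rs = []
    for _ in range(100):  # 100 random subsets per sample size
        subset = rng.choice(n_subjects, size=n, replace=False)
        rs.append(np.corrcoef(group_tmap(maps[subset]), gold)[0, 1])
    print(f"n={n:2d}: mean map correlation with full sample = {np.mean(rs):.2f}")
```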

While I like this empirical approach to the issue, there are of course caveats and other things to consider. fMRI is a complex, highly technical research area, heavily influenced by the advance of technology. MRI scanners have improved significantly in the last ten years, with 32- or even 64-channel head coils becoming common, faster gradient switching, shorter TRs, higher field strengths, and better field/data stability all meaning that the signal-to-noise ratio has improved considerably. This serves to cut down one source of noise in fMRI data – intra-subject variance. The inter-subject variance of course remains the same as it always was, but that’s something that can’t really be mitigated, and may even be of interest in some (between-group) studies. On the analysis side, new multivariate methods are much more sensitive to detecting differences than the standard mass-univariate approach. This improvement in effective SNR means that the Murphy and Garavan (2004) estimate of 25 subjects for 80% reproducibility may be somewhat inflated, and with modern techniques one could perhaps get away with fewer.

The other issue with the Murphy and Garavan (2004) approach is that it’s not very generalisable. The Go/No-Go task is widely used and is a ‘standard’ cognitive/attentional task that activates a well-described brain network, but other tasks may produce more or less activation, in different brain regions. Signal-to-noise varies widely across the brain and across task paradigms, with simple visual or motor experiments producing very large signal changes and complex cognitive tasks smaller ones. Yet other factors are the experimental design (blocked or event-related), the overall number of trials/stimuli presented, and the total scanning time for each subject, all of which can vary widely.

The upshot is that there are no easy answers, and this is something I try to impress upon people at every opportunity; particularly the statisticians who read my project proposals and object to me not including power analyses. I think prospective power analyses are not only uninformative, but give a false sense of security, and for that reason should be treated with caution. Ultimately the decision about how many subjects to test is generally highly influenced by other factors anyway (most notably, time, and money). You should test as many subjects as you reasonably can, and regard power analysis results as, at best, a rough guide.