Blog Archives

How to pilot an experiment

I got a serious question for you: What the fuck are you doing? This is not shit for you to be messin’ with. Are you ready to hear something? I want you to see if this sounds familiar: any time you try a decent crime, you got fifty ways you’re gonna fuck up. If you think of twenty-five of them, then you’re a genius… and you ain’t no genius.
Body Heat (1981, Lawrence Kasdan)

To consult the statistician after an experiment is finished is often merely to ask him to conduct a post-mortem examination. He can perhaps say what the experiment died of.
R.A. Fisher (1938)

Don’t crash and burn your experiment.

Doing a pilot run of a new psychology experiment is vital. No matter how well you think you’ve designed and programmed your task, there are (almost) always things that you didn’t think of. Going ahead and spending a lot of time and effort collecting a set of data without running a proper pilot is (potentially) a recipe for disaster. Several times I’ve seen data-sets where there was some subtle issue with the data logging, or the counter-balancing, or something else, which meant that the results were, at best, compromised, and at worst completely useless.

All of the resultant suffering, agony, and sobbing could have been avoided by running a pilot study in the right way. It’s not sufficient to run through the experimental program a couple of times; a comprehensive test of an experiment has to include a test of the analysis as well. This is particularly true of any experiment involving methods like fMRI/MEG/EEG, where a poor design can lead to a data-set that’s essentially uninterpretable, or perhaps even un-analysable. You may think you’ve logged all the variables you’ll need for the analysis, and that your design is a work of art, but you can’t be absolutely sure unless you actually do a test of the analysis.

This might seem like over-kill, or a waste of effort; however, you’re going to have to design your analysis at some point anyway, so why not do it at the beginning? Analyse your pilot data in exactly the way you’re planning on analysing your main data, save the details (using SPSS syntax, R code, SPM batch jobs, or whatever you’re using), and when you have your ‘proper’ data set, all you’ll (in theory) have to do is plug it into your existing analysis setup.
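
To make that concrete, here’s a minimal sketch of the idea in Python with pandas and SciPy (the same principle applies whether you use SPSS syntax, R or SPM batches). The file name and the column names (subject, condition, rt, and the two condition labels) are hypothetical stand-ins for whatever your own task actually logs.

```python
# Minimal sketch of a reusable analysis script (hypothetical file/column names).
# Point it at the pilot data now, and at the real data later.
import pandas as pd
from scipy import stats

def run_analysis(data_file):
    """Load a trial-by-trial log and run the planned stats on it."""
    df = pd.read_csv(data_file)

    # Mean RT per subject and condition (long format -> one column per condition).
    means = (df.groupby(["subject", "condition"])["rt"]
               .mean()
               .unstack("condition"))
    print(means)

    # The planned comparison: a paired t-test between the two conditions.
    result = stats.ttest_rel(means["conditionA"], means["conditionB"])
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
    return means

if __name__ == "__main__":
    run_analysis("pilot_data.csv")    # later: run_analysis("main_data.csv")
```

The point isn’t the particular test, it’s that the same script runs unchanged on the pilot and the real data set.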

These are the steps I normally go through when getting a new experiment up and running. Not all of them will be appropriate for every experiment; your mileage may vary, etc.

1. Test the stimulus program. Run through it a couple of times yourself, get a friend or colleague to do it once too, and ask them for feedback. Make sure it looks like it’s doing what you think it should be doing.

2. Check the timing of the stimulus program. This is almost always essential for an fMRI experiment, but may not be desperately important for some kinds of behavioural studies. Run through it with a stopwatch (the stopwatch function on your phone is probably accurate enough). If you’re doing any kind of experiment involving rapid presentation of stimuli (visual masking, RSVP paradigms) you’ll want to do some more extensive testing to make sure your stimuli are being presented in the way that you think – this might involve plugging a light-sensitive diode into an oscilloscope, sticking it to your monitor with a bit of Blu-Tack and measuring the waveforms produced by your stimuli. For fMRI experiments the timing is critical. Even though the Haemodynamic Response Function (HRF) is slow (and somewhat variable), you’re almost always fighting to pull enough signal out of the noise, so why introduce more? A cumulative error of only a few tens of milliseconds per trial can mean that your experiment is a few seconds out by the end of a 10-minute scan – this means that your model regressors will be way off, and your results will likely suck.* (A rough drift check using your log files is sketched after this list.)

3. Look at the behavioural data files. I don’t mean do the analysis (yet), I mean just look at the data. First make sure all the variables you want logged are actually there, then dump it into Excel and get busy with the sort function. For instance, if you have 40 trials and 20 stimuli (each presented twice), make sure that each one really is being presented twice, and not some of them once and some of them three times; sorting by the stimulus ID should make it instantly clear what’s going on. Make sure correct responses and any errors are being logged properly, and check that the counter-balancing is working by sorting on the appropriate variables. (A scripted version of these checks is sketched after this list.)

4. Do the analysis. Really do it. You’re obviously not looking for any significant results from the data; you’re just trying to validate your analysis pipeline and make sure you have all the information you need to do the stats. For fMRI experiments, look at your design matrix to check that it makes sense and that you’re not getting warnings from the software about non-orthogonality of the regressors (a quick way of eyeballing this is sketched after this list). For fMRI data using visual stimuli, you could look at some basic effects (e.g. all stimuli > baseline) to make sure you get activity in the visual cortex. Button-pushing responses should be visible as activity in the motor cortex, even in a single subject; these kinds of sanity checks can be a good indicator of data quality. If you really want to be punctilious, bang it through a quick ICA routine and see if you get a) component(s) that look stimulus-related, b) something that looks like the default-mode network, and c) any suspiciously nasty-looking noise components (a and b = good, c = bad, obviously).

5. After all that, the rest is easy. Collect your proper set of data, analyse it using the routines you developed in step 4 above, write it up, and then send it to Nature.
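
On the timing checks in step 2: here’s a rough sketch of the kind of cumulative-drift check I mean, in Python. It assumes your stimulus program logs the actual onset time of each trial and that you know what the onsets were supposed to be; the file name, column name and the 4-second trial spacing are all made up for the example.

```python
# Rough check for cumulative timing drift (hypothetical file/column names).
import numpy as np
import pandas as pd

log = pd.read_csv("pilot_timing_log.csv")      # logged onsets, in seconds
actual = log["onset_s"].to_numpy()

# What the onsets *should* be: e.g. one trial every 4 seconds.
planned = np.arange(len(actual)) * 4.0

drift = actual - actual[0] - planned           # drift relative to the first trial
print(f"Max absolute drift: {np.abs(drift).max() * 1000:.1f} ms")
print(f"Drift at the final trial: {drift[-1] * 1000:.1f} ms")

# For fMRI, drift of a second or more by the end of a run is a red flag.
```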
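
For the data-file checks in step 3, Excel and the sort function are fine, but the same checks are easy to script and re-run every time you tweak the task. A sketch with pandas, again with invented column names:

```python
# Quick sanity checks on a behavioural log file (hypothetical column names).
import pandas as pd

df = pd.read_csv("pilot_behav_log.csv")

# Are all the variables you want logged actually there?
print(df.columns.tolist())

# 20 stimuli, each meant to appear exactly twice?
counts = df["stimulus_id"].value_counts()
print(counts[counts != 2])        # anything printed here is a problem

# Is the counter-balancing doing what it should?
print(pd.crosstab(df["condition"], df["response_hand"]))

# Are responses and errors being coded sensibly?
print(df["accuracy"].value_counts(dropna=False))
```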
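
And for the design-matrix check in step 4, something along these lines works. This sketch uses nilearn, but the idea (build the design matrix, then look at the correlations between regressors) is the same whether you’re in SPM, FSL or anything else; the TR, run length and event timings here are toy values, not a recommendation.

```python
# Sketch: build a design matrix and eyeball regressor correlations (toy values).
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

tr = 2.0
n_scans = 300
frame_times = np.arange(n_scans) * tr

# Toy events: two conditions alternating every 20 seconds.
events = pd.DataFrame({
    "onset":      np.arange(0, n_scans * tr, 20.0),
    "duration":   2.0,
    "trial_type": ["faces", "houses"] * int(n_scans * tr / 40),
})

dm = make_first_level_design_matrix(frame_times, events, hrf_model="glover")

# High off-diagonal correlations between task regressors = trouble.
print(dm.drop(columns="constant").corr().round(2))
```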

And that, ladeez and gennulmen, is how to do it. Doing a proper pilot can only save you time and stress in the long run, and you can go ahead with your experiment in the certain knowledge that you’ve done everything in your power to make sure your data is as good as it can possibly be. Of course, it still might be total and utter crap, but that’ll probably be your participants’ fault, not yours.

Happy piloting! TTFN.

*Making sure your responses are being logged with a reasonable level of accuracy is also pretty important for many experiments, although this is a little harder to verify objectively. Hopefully, if you’re using a reasonably well-validated piece of software and a decent response device, you shouldn’t have too many problems.


Data-mining in neuroscience – the next great frontier?

[Image: The connectome - some people think this is what represents 'you' in the brain. Yes, 'you'.]

The really-very-excellent Ben Thomas (of The Connectome) recently posted something on Facebook which got me thinking; it was a link to a project called NeuroSynth, an ongoing collaboration between several high-profile brain researchers and groups (details here) to provide an easy method for performing automated large-scale analyses (or meta-analyses) across a large portion of the neuroimaging literature. Briefly, the builders of this system have developed a way of automatically parsing the full text of published articles and extracting 1) the parts of the brain which are active (as reported in the paper using a commonly-used 3-axis coordinate system) and 2) the topic of the paper (by looking at which terms are used with high frequency in the text). Using these two bits of information, a huge meta-analysis is then conducted, and brain maps showing areas which are reliably associated with particular terms in the literature can be produced. Wonderfully, they’ve made the brain maps available on the web, and you can even download them in the standard NIFTI (*.nii) format.

Give it a try with some common terms, e.g.:

http://neurosynth.org/terms/pain

http://neurosynth.org/terms/disgust

http://neurosynth.org/terms/memory

Fun, huh? One of the best applications that immediately springs to mind is that these brain maps could be used to constrain the search-space in new brain-imaging experiments – for instance, by using them to define ROIs for hypothesis-driven analyses (something which I’m very keen on), or for defining regions for multi-voxel pattern analysis.
Read the rest of this entry…
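
If you do pull down one of those maps, turning it into a binary ROI mask takes only a few lines. Here’s a rough sketch with nibabel; the file name and the z-threshold are placeholders I’ve picked for illustration, not anything NeuroSynth itself prescribes.

```python
# Sketch: threshold a downloaded NeuroSynth map into a binary ROI mask.
# (File name and threshold are placeholders.)
import nibabel as nib
import numpy as np

img = nib.load("neurosynth_pain_map.nii.gz")   # a map saved from neurosynth.org
data = img.get_fdata()

mask = (data > 3.0).astype(np.int16)           # e.g. keep voxels with z > 3
print(f"ROI contains {mask.sum()} voxels")

roi_img = nib.Nifti1Image(mask, img.affine)
nib.save(roi_img, "pain_roi_mask.nii.gz")
```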

None of your data is safe. Ever.

It occurred to me recently that I had never addressed one of the most important and fundamental issues involved in computer use – the implementation of a sensible and secure backup policy.

[Image: A lovely 2TB 'My Book World Edition' NAS server, by Western Digital. If you live in a shared house, you need one of these.]

Yeah, yeah, I know what you’re thinking – “yawn”. However, when (that’s when, not if) your laptop hard-drive fries itself and you lose your 10,000-word thesis because you haven’t backed it up, don’t come whingeing and crying to me. A quick Google search for “I lost my essay” turns up 1.4 million results, and most of them are tales of abject woe and desperation. Any form of data-recording media is vulnerable to catastrophic failure, and the chances of getting your data back once that happens are slim-to-nothing.* In this world of laptops and portable storage it’s not just mechanical failure that’s the problem either – laptops, hard-drives and USB keys can very easily get lost, dropped, or stolen.

Nowadays data storage is so ridiculously cheap that you really have no excuse for not making adequate backups. Plus, the availability of cloud-based storage services like Dropbox and Google Docs can also make life easier. A truly ultra-secure backup system usually involves three copies of all your important data – one ‘working’ copy (say, on your laptop hard drive), a primary backup (say, an external USB hard drive), and a secondary backup in a separate location (another external hard drive which you keep at your friend’s house). This way, if one drive fails you always have two backups, and even in the worst possible scenario of your house burning down (destroying your laptop and primary backup) you still have your secondary backup.
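
Just to illustrate the ‘working copy plus backup copy’ idea (and emphatically not the student setup described below, which you’ll have to click through for), here’s a bare-bones sketch in Python that drops a dated copy of a folder onto an external drive; the paths are made up.

```python
# A bare-bones dated copy of a working folder onto an external drive.
# (Paths are made up; point them at whatever you actually use.)
import shutil
from datetime import date
from pathlib import Path

src = Path.home() / "Documents" / "thesis"
dst = Path("/Volumes/BACKUP_DRIVE") / "thesis_backups" / date.today().isoformat()

shutil.copytree(src, dst, dirs_exist_ok=True)   # Python 3.8+ for dirs_exist_ok
print(f"Copied {src} -> {dst}")
```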

This might be over-kill for the purposes of most students, though. If I were still an impecunious student living in shared accommodation, the system I would implement would be this:
Read the rest of this entry…

Dorothy Bishop on Making up Data

A quickie just to point you to an outstanding post by the never-less-than-excellent Dorothy Bishop (or @deevybee, and yes, you should be following her on Twitter) on basic data simulation using Excel. Some brilliant and quite creative ways to use Excel for demos of simple statistical concepts – I’ll definitely be incorporating this into some of my presentations/teaching on stats.
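
Her post does it all in Excel, but the same sort of demo is only a few lines in any stats language. As a rough Python analogue of the basic idea (and not a reproduction of her actual spreadsheets): simulate two groups drawn from the same population, test them, and count how often you get a ‘significant’ result.

```python
# Simulate two groups drawn from the SAME population and count false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_per_group, alpha = 1000, 20, 0.05

false_positives = 0
for _ in range(n_experiments):
    group_a = rng.normal(loc=100, scale=15, size=n_per_group)
    group_b = rng.normal(loc=100, scale=15, size=n_per_group)   # no real difference
    if stats.ttest_ind(group_a, group_b).pvalue < alpha:
        false_positives += 1

print(f"'Significant' results in {false_positives} of {n_experiments} null experiments")
# With alpha = .05 you expect roughly 50, despite there being no real effect at all.
```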