Category Archives: Neuroimaging

How to hack conditional branching in the PsychoPy builder

Regular readers will know that I’m a big fan of PsychoPy, which (for non-regular readers; *tsk*) is a piece of free, open-source software for designing and programming experiments, built on the Python language. I’ve been using it a lot recently, and I’m happy to report my initial ardour for it is still lambently undimmed.

The PsychoPy ‘builder’ interface (a generally brilliant, friendly, GUI front-end) does have one pretty substantial drawback though; it doesn’t support conditional branching. In programming logic, a ‘branch’ is a point in a program which causes the computer to start executing a different set of instructions. A ‘conditional branch’ is where the computer decides what to do out of two or more alternatives (i.e. which branch to follow) based on some value or ‘condition’. Essentially the program says ‘if A is true, do X; otherwise (or ‘else’ in programming jargon), if B is true, do Y’. One common use of conditional branching in psychology experiments is to repeat trials that the subject got incorrect; for instance, one might want one’s subjects to achieve 90% correct on a block of trials before they continue to the next one, so the program would have something in it which said ‘if (correct trials >= 90%) then continue to the next block, else repeat the incorrect trials’.
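To make the jargon concrete, here’s a minimal Python sketch of that repeat-if-below-criterion logic. The variable names and the little list of results are made up purely for illustration – this isn’t code from the demo program described below:

# 'results' is a hypothetical list of (trial, correct) pairs from one block
results = [('face_01', True), ('face_02', False), ('face_03', True)]

n_correct = sum(1 for trial, correct in results if correct)
accuracy = float(n_correct) / len(results)

if accuracy >= 0.9:
    print('Criterion reached - continue to the next block')
else:
    # repeat only the trials the subject got wrong
    trials_to_repeat = [trial for trial, correct in results if not correct]
    print('Repeating incorrect trials: %s' % trials_to_repeat)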

At the bottom of the PsychoPy builder is a time-line graphic (the ‘flow panel’) which shows the parts of the experiment:

[Screenshot: the PsychoPy builder window, with the flow panel at the bottom]

The experiment proceeds from left to right, and each part of the flow panel is executed in turn. The loops around parts of the flow panel indicate that the bits inside the loops are run multiple times (i.e. they’re the trial blocks). This is an extremely powerful interface, but there’s no option to ‘skip’ part of the flow diagram – everything is run in the order in which it appears from left-to-right.

This is a slight issue for programming fMRI experiments that use block-designs. In block-design experiments, typically two (or more) blocks of about 15-20 seconds are alternated. They might be a ‘rest’ block (no stimuli) alternated with a visual stimulus, or two different kinds of stimuli, say, household objects vs. faces. For a simple two-condition-alternating experiment one could just produce routines for two conditions, and throw a loop around them for as many repeats as needed. The problem arises when there are more than two block-types in your experiment, and you want to randomise them (i.e. have a sequence which goes ABCBCACAB… etc.). There’s no easy way of doing this in the builder. In an experiment lasting 10 minutes one might have 40 15-second blocks, and the only way to produce the (pseudo-random) sequence you want is with 40 separate elements in the flow panel that all execute one-by-one (with no loops). Building such a task would be very tedious, and more importantly, crashingly inelegant. Furthermore, you probably wouldn’t want to use the same sequence for every participant, so you’d have to laboriously build different versions with different sequences of the blocks. There’s a good reason why this kind of functionality hasn’t been implemented; it would make the builder interface much more complicated, and the PsychoPy developers are (rightly) concerned with keeping the builder as clean and simple as possible.

Fortunately, there’s an easy little hack which was actually suggested by Jon Peirce (and others) on the PsychoPy users forum. You can in fact get PsychoPy to ‘skip’ routines in the flow panel, by the use of loops and a tiny bit of coding magic. I thought it was worth elaborating on the solution here, and I’ve even created a simple little demo program which you can download and peruse/modify.

So, this is how it works. I’ve set up my flow panel like this:

[Screenshot: the flow panel of the demo experiment]

So, there are two blocks, each of which has its own loop, a ‘blockSelect’ routine, and a ‘blockSelectLoop’ enclosing the whole thing. The two blocks can contain any kind of (different) stimulus element; one could have pictures, and one could have sounds, for instance – I’ve just put some simple text in each one for demo purposes. The two block-level loops have no conditions files associated with them, but in the ‘nReps’ field of their properties box I’ve put a variable: ‘nRepsblock1’ for block1 and ‘nRepsblock2’ for block2. This tells the program how many times to go around that loop. The values of these variables are set by the blockSelect routine, which contains a code element that looks like this:

[Screenshot: the code element in the blockSelect routine]

The full code in the ‘Begin Routine’ box above is this:

if selectBlock == 1:
    nRepsblock1 = 1    # run block1's loop once
    nRepsblock2 = 0    # skip block2's loop entirely
elif selectBlock == 2:
    nRepsblock1 = 0    # skip block1's loop entirely
    nRepsblock2 = 1    # run block2's loop once

This is a conditional branching statement which says ‘if selectBlock is 1, do X, else if selectBlock is 2, do Y’. The variable ‘selectBlock’ is derived from the conditions file (an Excel workbook) for the blockSelectLoop, which is very simple and looks like this:

[Screenshot: the conditions file for the blockSelectLoop, with a single ‘selectBlock’ column]

So, at the beginning of the experiment I define the two variables for the number of repetitions for the two blocks, then on every go around the big blockSelectLoop, the code in the blockSelect routine sets the number of repetitions of one of the small block-level loops to 0, and the other to 1. Setting the number of repetitions for a loop to 0 basically means ‘skip that loop’, so one is always skipped, and the other is always executed. The blockSelectLoop sequentially executes the conditions in the Excel file, so the upshot is that this program runs block1, then block2, then block1 again, then block2 again. Now all I have to do if I want to create a different sequence of blocks is edit the column in the conditions Excel file, to produce any kind of sequence I want.
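As an aside, if hand-editing that column gets tedious (say, because you want a freshly randomised sequence for each participant), you could generate the conditions file with a few lines of Python instead. This is just an illustrative sketch, not part of the demo program – the filename is made up, and I’m writing a .csv here since PsychoPy loops accept .csv conditions files as well as Excel ones:

# generate a shuffled sequence of 1s and 2s and save it as a conditions file
import csv
import random

n_repeats = 20                     # 20 of each block type = 40 blocks in total
sequence = [1, 2] * n_repeats      # the values that will fill the 'selectBlock' column
random.shuffle(sequence)

with open('blockConditions.csv', 'w', newline='') as f:   # point the blockSelectLoop at this file
    writer = csv.writer(f)
    writer.writerow(['selectBlock'])                      # header row = variable name used in the code
    for value in sequence:
        writer.writerow([value])

Extending this to three or more block types is just a matter of putting more values in the sequence. Note that a completely random shuffle like this can give you several of the same block back-to-back, so for a properly constrained pseudo-random order you’d want to add a check or two.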

Hopefully it should be clear how to extend this very simple example to use three (or more) block/trial types. I’ve actually used this technique to program a rapid event-related experiment based on this paper, that includes about 10 different trial types, randomly presented, and it works well. I also hope that this little program is a good example of what can be achieved by using code-snippets in the builder interface; this is a tremendously powerful feature, and really extends the capabilities of the builder well beyond what’s achievable through the GUI alone. It’s also a really good halfway step between relying completely on the builder GUI and the scariness of working with ‘raw’ Python code in the coder interface.

If you want to download this code, run it yourself and poke it with a stick a little bit, I’ve made it available to download as a zip file with everything you need here. Annoyingly, WordPress doesn’t allow the upload of zip files to its blogs, so I had to change the file extension to .pdf; just download (right-click the link and ‘Save link as…’) and then rename the .pdf bit to .zip and it should work fine. Of course, you’ll also need to have PsychoPy installed as well. Your mileage may vary, any disasters that occur as a result of you using this program are your own fault, etc. etc. blah blah.

Happy coding! TTFN.

3D printed response box. The future. It’s here.

[Photo: the 3D-printed five-button response box]

 

This may not look like much, but it’s actually pretty cool. It’s a new five-button response-box being built for our MRI scanner by one of the technicians where I work. The cool thing is that the main chassis has been custom-designed, and then fabricated using a polyethylene-extruding 3D printer. The micro-switches for the four fingers and thumb and the wired connections for each have obviously been added afterwards.

3D printing in plastic is a great way of creating hardware for use in the MRI environment, as, well… it’s plastic, and it can create almost any kind of structure you can think of. We sometimes need to build custom bits of hardware for use in some of our experiments, and previously we’d usually build things by cutting plastic sheets or blocks to the required shape, and holding them together with plastic screws. Using a 3D-printer means we can produce solid objects which are much stronger and more robust, and they can be produced much more quickly and easily too. I love living in the future.

High-tech version of the hollow-face illusion

I just accidentally made a kind-of version of the classic hollow-face illusion, using an anatomical MRI scan of my own head and Osirix. I exported a movie in Osirix, saved the movie as an image sequence using Quicktime, and then assembled it into an animated GIF using GIMP.

[Animated GIF: rotating maximum intensity projection of my head]

Click the image for a much bigger (11mb!) version.

This is a maximum intensity projection, and because of the way MIPs work, it appears that the head is rotating 180 degrees to the left, and then the direction switches and it rotates back 180 degrees to the right. In actual fact, the image is rotating constantly in one direction, as can be seen by looking at the cube on the top left which cycles through L (Left), P (Posterior), R (Right), and A (Anterior). It’s not really a hollow-face illusion as the effect is pretty much an artefact of the MIP, but still, I thought it was cool.

Some Python resources


Since I’ve switched to using PsychoPy for programming my behavioural and fMRI experiments (and if you spend time coding experiments, I strongly suggest you check it out too, it’s brilliant) I’ve been slowly getting up to speed with the Python programming language and syntax. Even though the PsychoPy GUI ‘Builder’ interface is very powerful and user-friendly, one inevitably needs to start learning to use a bit of code in order to get the best out of the system.

Before, when people occasionally asked me questions like “What programming language should I learn?” I used to give a somewhat vague answer, and say that it depended largely on what exactly they wanted to achieve. Nowadays, I’m happy to recommend that people learn Python, for practically any purpose. There are an incredible number of libraries available that enable you to do almost anything with it, and it’s flexible and powerful enough to fit a wide variety of use cases. Many people are now using it as a free alternative to Matlab, and even using it for ‘standard’ statistical analyses too. The syntax is incredibly straight-forward and sensible; even if an individual then goes on to use a different language, I think Python is a great place to start with programming for a novice. Python seems to have been rapidly adopted by scientists, and there are some terrific resources out there for learning Python in general, and its scientific applications in particular.

For those getting started there are a number of good introductory resources. This ‘Crash Course in Python for Scientists’ is a great and fairly brief introduction which starts from first principles and doesn’t assume any prior knowledge. ‘A Non-Programmer’s Tutorial for Python 2.6’ is similarly introductory, but covers a bit more material. ‘Learn Python the Hard Way’ is also a well-regarded introductory course which is free to view online, but has a paid option ($29.95) which gives you access to additional PDFs and video material. The ‘official’ Python documentation is also pretty useful, and very comprehensive, and starts off at a basic level. Yet another good option is Google’s Python Class.

For those who prefer a more interactive experience, Codecademy has a fantastic set of interactive tutorials which guide you through from the complete beginning, up to fairly advanced topics. PythonMonk and TryPython.org also have similar systems, and all three are completely free to access – well worth checking out.

For neuroimagers, there are some interesting Python tools out there, or currently under development. The NIPY (Neuroimaging in Python) community site is well worth a browse. Most interesting (to me, anyway) is the nipype package, which is a tool that provides a standard interface and workflow for several fMRI analysis packages (FSL, SPM and FreeSurfer) and facilitates interaction between them – very cool. fMRI people might also be very interested in the PyMVPA project, which has implemented various Multivariate Pattern Analysis algorithms.

People who want to do some 3D programming for game-like interfaces or experimental tasks will also want to check out VPython (“3D Programming for ordinary mortals”!).

Finally, those readers who are invested in the Apple ecosystem and own an iPhone/iPad will definitely want to check out Pythonista – a full featured development environment for iOS, with a lot of cool features, including exporting directly to XCode (thanks to @aechase for pointing this one out on Twitter). There looks to be a similar app called QPython for Android, though it’s probably not as full-featured; if you’re an Android user, you’re probably fairly used to dealing with that kind of disappointment though. ;o)

Anything I’ve missed? Let me know in the comments and I’ll update the post.

Toodles.

Comment on the Button et al. (2013) neuroscience ‘power-failure’ article in NRN

Statistical Spidey knows the score.

An article was published in Nature Reviews Neuroscience yesterday which caused a bit of a stir among neuroscientists (or at least among neuroscientists on Twitter, anyway). The authors cleverly used meta-analytic papers to estimate the ‘true’ size of an effect, and then (using the G*Power software) calculated the power of each individual study that made up the meta-analysis, based on its sample size. Their conclusions are pretty damning for the field as a whole: an overall (median) power of 21%, dropping to 8% in some sub-fields. This means that out of 100 studies that are conducted into a genuine effect, only about 21 will actually demonstrate it.

The article has been discussed and summarised at length by Ed Yong, Christian Jarrett, and by Kate Button (the study’s first author) on Suzy Gage’s Guardian blog, so I’m not going to re-hash it any more here. The original paper is actually very accessible and well-written, and I encourage interested readers to start there. It’s definitely an important contribution to the debate, however (as always) there are alternative perspectives. I generally have a problem with over-reliance on power analyses (they’re often required for grant applications, and other project proposals). Prospective power analyses (i.e. those conducted before a piece of research is conducted, in order to tell you how many subjects you need) use an estimate of the effect size you expect to achieve – usually derived from previous work that has examined a (broadly) similar problem using (broadly) similar methods. This estimate is essentially a wild shot in the dark (especially because of some of the issues and biases discussed by Button et al., that are likely to operate in the literature), and the resulting power analysis therefore tells you (in my opinion) nothing very useful. Button et al. get around this issue by using the effect size from meta-analyses to estimate the ‘true’ effect size in a given literature area – a neat trick.

The remainder of this post deals with power-issues in fMRI, since it’s my area of expertise, and necessarily gets a bit technical. Readers who don’t have a somewhat nerdy interest in fMRI-methods are advised to check out some of the more accessible summaries linked to above. Braver readers – press on!

An alternative approach used in the fMRI field, and one that I’ve been following when planning projects for years, is a more empirical method. Murphy and Garavan (2004) took a large sample of 58 subjects who had completed a Go/No-Go task and analysed sub-sets of different sizes, to look at how reproducibility of the results varies with sample size. They showed that reproducibility (assessed by correlation of the statistical maps with the ‘gold standard’ of the entire dataset; Fig. 4) reaches 80% at about 24 or 25 subjects. By this criterion, many fMRI studies are underpowered.
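For readers who like to see an idea as code, here’s a toy sketch of that subset-resampling logic using fake data. The numbers, and the use of a simple mean as the ‘group map’, are mine purely for illustration – it’s not a reproduction of Murphy and Garavan’s actual analysis, which worked on thresholded statistical maps:

import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 58, 10000

# fake single-subject effect maps: a weak common signal plus lots of subject noise
true_signal = rng.normal(0, 1, n_voxels)
subject_maps = true_signal + rng.normal(0, 5, (n_subjects, n_voxels))

gold_standard = subject_maps.mean(axis=0)     # 'truth' = group map from the full sample

for n in (8, 16, 24, 32):
    subset = rng.choice(n_subjects, size=n, replace=False)
    subset_map = subject_maps[subset].mean(axis=0)
    r = np.corrcoef(subset_map, gold_standard)[0, 1]
    print('n = %d: correlation with full-sample map = %.2f' % (n, r))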

While I like this empirical approach to the issue, there are of course caveats and other things to consider. fMRI is a complex, highly technical research area, and heavily influenced by the advance of technology. MRI scanners have significantly improved in the last ten years, with 32 or even 64-channel head-coils becoming common, faster gradient switching, shorter TRs, higher field strength, and better field/data stability all meaning that the signal-to-noise has improved considerably. This serves to cut down one source of noise in fMRI data – intra-subject variance. The inter-subject variance of course remains the same as it always was, but that’s something that can’t really be mitigated against, and may even be of interest in some (between-group) studies. On the analysis side, new multivariate methods are much more sensitive to detecting differences than the standard mass-univariate approach. This improvement in effective SNR means that the Murphy and Garavan (2004) estimate of 25 subjects for 80% reproducibility may be somewhat inflated, and with modern techniques one could perhaps get away with less.

The other issue with the Murphy and Garavan (2004) approach is that it’s not very generalisable. The Go/No-Go task is widely used and is a ‘standard’ cognitive/attentional task that activates a well-described brain network, but other tasks may produce more or less activation, in different brain regions. Signal-to-noise varies widely across the brain, and across task-paradigms, with simple visual or motor experiments producing very large signal changes and complex cognitive tasks smaller ones. Yet other factors are the experimental design (blocked stimuli, or event-related), the overall number of trials/stimuli presented, and the total scanning time for each subject, all of which can vary widely.

The upshot is that there are no easy answers, and this is something I try to impress upon people at every opportunity; particularly the statisticians who read my project proposals and object to me not including power analyses. I think prospective power analyses are not only uninformative, but give a false sense of security, and for that reason should be treated with caution. Ultimately the decision about how many subjects to test is generally highly influenced by other factors anyway (most notably, time, and money). You should test as many subjects as you reasonably can, and regard power analysis results as, at best, a rough guide.

Warhol brains.

Here is a pretty Warhol-esque picture I made using a) my own head, b) a Siemens Verio MRI scanner, c) Osirix and d) GIMP.

(Clicky for bigness)

Free, interactive MRI courses from Imaios.com (plus lots of other medical/anatomy material too)

A very quick post to point you towards a really fantastic set of online, interactive courses on MRI from a website called Imaios.com – a very nice, very slick set of material. The MRI courses are all free, but you’ll need to register to see the animations. Lots of other medical/anatomy-related courses on the site too – some free, some ‘premium’, and some nice looking mobile apps too.

Whither forensic psychology software?

Good nutrition’s given you some length of bone.

Forensic and criminal psychology are somewhat odd disciplines; they sit at the cross-roads between abnormal psychology, law, criminology, and sociology.  Students seem to love forensic psychology courses, and the number of books, movies, and TV shows which feature psychologists cooperating with police (usually in some kind of offender-profiling manner) attests to the fascination that  the general public have for it too. Within hours of the Newtown, CT shooting spree last December, ‘expert’ psychologists were being recruited by the news media to deliver soundbites attesting to the probable mental state of the perpetrator. Whether this kind of armchair diagnosis is appropriate or useful (hint: it’s really not), it’s a testament to the acceptance of such ideas within society at large.

Back in the late 80s and early 90s there were two opposing approaches to offender profiling, rather neatly personified by American and British practitioners. A ‘top-down’ (or deductive) approach was developed by the FBI Behavioral Sciences Unit, and involved interviewing convicted offenders, attempting to derive (somewhat subjective) general principles in order to ‘think like a criminal’. By contrast, the British approach (developed principally by David Canter and colleagues) took a much more ‘bottom-up’ (or inductive) approach focused on empirical research, and more precisely quantifiable aspects of criminal behaviour.

Interestingly, the latter approach was ideally suited to standardised analysis methods, and duly spawned a number of computer-based tools. The most prominent among them was a spatial/geographical profiling tool, developed by Canter’s Centre for Investigative Psychology, and named ‘Dragnet’. The idea behind it was relatively simple – that the most likely location of the residence of a perpetrator of a number of similar crimes could be deduced from the locations of the crimes themselves. For example, a burglar doesn’t tend to rob his next-door neighbours, nor does he tend to travel too far from familiar locations to ply his trade – he commits burglaries at a medium distance from home, and generally at roughly the same distance each time. Also, general caution might prevent him from returning to the same exact location twice, so an idealised pattern of burglary might include a central point (the perpetrator’s home) with a number of crime locations forming the points of a circle around it. For an investigator, of course, the location of the central point isn’t known a priori; however, it can easily be deduced simply by looking at the size and shape of the circle.

[Diagram: an idealised geographic profile – crime locations forming a circle around the offender’s home base]
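To show just how simple the core idea is, here’s a naive little Python sketch of the ‘circle’ heuristic: take the two most widely separated crime locations, and treat the midpoint between them as a first guess at the offender’s base. The coordinates are invented, and this is emphatically not what Dragnet or modern geographic profiling systems actually do – it’s just the bare-bones version of the reasoning:

from itertools import combinations
import math

# hypothetical (x, y) map coordinates of a series of linked offences
crimes = [(1.0, 2.0), (4.0, 6.0), (5.0, 1.5), (2.5, 5.5), (6.0, 4.0)]

# find the two offences furthest apart - they define the 'circle'
a, b = max(combinations(crimes, 2), key=lambda pair: math.dist(pair[0], pair[1]))

centre = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)   # crude estimate of the home base
radius = math.dist(a, b) / 2.0                        # crude estimate of the search area

print('Estimated base: %s, search radius: %.2f' % (str(centre), radius))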

In practice, of course, it’s never this neat, but modern techniques incorporate various other features (terrain, social geography, etc.) to build statistical models, and have met with some success. Ex-police officer Kim Rossmo has been the leading figure in geographic profiling in recent years, and founded the Center for Geospatial Intelligence and Investigation at Texas State University.

Software like this seems like it should be useful, but by and large it has failed to deliver on its promises in a major way. At one point it was thought that the future police service would incorporate these tools (and others) routinely in order to solve, and perhaps even predict, crimes. With the sheer amount and richness of data now available on the general populace (through online search histories, social networking sites, insurance company/credit card databases, CCTV images, mobile-phone histories, licence-plate-reading traffic cameras, etc. etc.) and on urban environments (e.g. Google Maps), you would think that crime-solving software would by now be highly developed, and make use of all these sources of information. However, it seems to have largely stalled in recent years; the Centre for Investigative Psychology’s website has clearly not been updated in several years, and it seems no-one has even bothered producing versions of their software for modern operating systems.

Some others seem to be pursuing similar ideas with more modern methods (e.g. this company), yet still we’re nowhere near any kind of system like the (fictional) one portrayed in the TV series ‘Person of Interest‘, which can predict crimes by analysis of CCTV footage and behaviour patterns derived therefrom. Whether or not this will ever be possible, there is certainly relevant data out there, freely accessible to law-enforcement agencies; the issue is building the right kind of data-mining algorithms to make sense of it all – clearly, not a trivial endeavour.

Something that will undoubtedly help, is the fairly recent development of pretty sophisticated facial recognition technology. Crude face-recognition technology is now embedded in most modern digital cameras, can be used as ID-verification (i.e. instead of a passcode) to unlock smartphones, and  is used for ‘tagging’ pictures on websites like Facebook and Flickr. Researchers have been rapidly refining the techniques, including some very impressive methods of generating interpolated high-resolution images from low-quality sources (this paper describes an impressive ‘face hallucination’ method; PDF here). These advancements, while impressive, are essentially a somewhat dry problem in computer vision; there’s no real ‘psychology’ involved here.

'Face hallucination' -  Creating high quality face images from low-resolution inputs, by using algorithms with prior information about typical facial features.

‘Face hallucination’ – Creating high quality face images from low-resolution inputs, by using algorithms with prior information about typical facial features.

One other ‘growth area’ in criminal/legal psychology over the last few years has been in fMRI lie-detection. Two companies (the stupidly-or-maybe-ingeniously-named No Lie MRI, and Cephos) have been aggressively pushing for their lie-detection procedures to be introduced as admissible evidence in US courts. So far they’ve only had minor success, but frankly, it’s only a matter of time. Most serious commentators (e.g. this bunch of imaging heavy-hitters) still strike an extremely cautious tone on such technologies, but they may be fighting a losing battle.

Despite these two very technical areas then, in general, the early promise of a systematic scientific approach to forensic psychology that could be instantiated in formal systems has not been fulfilled. I’m not sure if this is because of a lack of investment, expertise, interest, or just because the problem turned out to be substantively harder to address than people originally supposed. There is an alternative explanation of course – that governments and law enforcement agencies have indeed developed sophisticated software that ties together all the major databases of personal information, integrates it with CCTV and traffic-camera footage, and produces robust models of the behaviour of the general public, both as a whole, and at an individual level. A conspiracy theorist might suppose that if such a system existed, information about it would have to be suppressed, and that’s the likely reason for the apparent lack of development in this area in recent years. Far-fetched? Maybe.

TTFN, and remember – they’re probably (not?) watching you…

 

New ‘Links’ page

Just a quick notification to say that I’ve just put up a ‘Links’ page, accessible from the top-level menu on this site, or by clicking here. There are a couple of hundred categorised and (more or less) colour-coded links there, all broadly relevant to psychology and/or computing. Hope it’s useful to someone, because it took me bloody ages… ;o)

More to come on the links page as I find more stuff/get around to it.

TTFN.

How to pilot an experiment

I got a serious question for you: What the fuck are you doing? This is not shit for you to be messin’ with. Are you ready to hear something? I want you to see if this sounds familiar: any time you try a decent crime, you got fifty ways you’re gonna fuck up. If you think of twenty-five of them, then you’re a genius… and you ain’t no genius.
Body Heat (1981, Lawrence Kasdan)

To consult the statistician after an experiment is finished is often merely to ask him to conduct a post-mortem examination. He can perhaps say what the experiment died of.
R.A. Fisher (1938)

Don’t crash and burn your experiment.

Doing a pilot run of a new psychology experiment is vital. No matter how well you think you’ve designed and programmed your task, there are (almost) always things that you didn’t think of. Going ahead and spending a lot of time and effort collecting a set of data without running a proper pilot is (potentially) a recipe for disaster. Several times I’ve seen data-sets where there was some subtle issue with the data logging, or the counter-balancing, or something else, which meant that the results were, at best,  compromised, and at worst completely useless.

All of the resultant suffering, agony, and sobbing could have been avoided by running a pilot study in the right way. It’s not sufficient to run through the experimental program a couple of times; a comprehensive test of an experiment has to include a test of the analysis as well. This is particularly true of any experiment involving methods like fMRI/MEG/EEG where a poor design can lead to a data-set that’s essentially uninterpretable, or perhaps even un-analysable. You may think you’ve logged all the data variables you think you’ll need for the analysis, and your design is a work of art, but you can’t be absolutely sure unless you actually do a test of the analysis.

This might seem like overkill, or a waste of effort; however, you’re going to have to design your analysis at some point anyway, so why not do it at the beginning? Analyse your pilot data in exactly the way you’re planning on analysing your main data, save the details (using SPSS syntax, R code, SPM batch jobs – or whatever you’re using) and when you have your ‘proper’ data set, all you’ll (in theory) have to do is plug it into your existing analysis setup.

These are the steps I normally go through when getting a new experiment up and running. Not all will be appropriate for all experiments, your mileage may vary etc. etc.

1. Test the stimulus program. Run through it a couple of times yourself, and get a friend/colleague to do it once too, and ask for feedback. Make sure it looks like it’s doing what you think it should be doing.

2. Check the timing of the stimulus program. This is almost always essential for an fMRI experiment, but may not be desperately important for some kinds of behavioural studies. Run through it with a stopwatch (the stopwatch function on your ‘phone is probably accurate enough). If you’re doing any kind of experiment involving rapid presentation of stimuli (visual masking, RSVP paradigms) you’ll want to do some more extensive testing to make sure your stimuli are being presented in the way that you think – this might involve plugging a light-sensitive diode into an oscilloscope, sticking it to your monitor with a bit of blu-tack and measuring the waveforms produced by your stimuli. For fMRI experiments the timing is critical. Even though the Haemodynamic Response Function (HRF) is slow (and somewhat variable) you’re almost always fighting to pull enough signal out of the noise, so why introduce more? A cumulative error of only a few tens of milliseconds per trial can mean that your experiment is a few seconds out by the end of a 10-minute scan – this means that your model regressors will be way off – and your results will likely suck.* (There’s a rough sketch of one software-based timing check after this list.)

3. Look at the behavioural data files. I don’t mean do the analysis (yet), I mean just look at the data. First make sure all the variables you want logged are actually there, then dump it into Excel and get busy with the sort function. For instance, if you have 40 trials and 20 stimuli (each presented twice) make sure that each one really is being presented twice, and not some of them once, and some of them three times; sorting by the stimulus ID should make it instantly clear what’s going on. Make sure the correct responses and any errors are being logged correctly. Make sure the counter-balancing is working correctly by sorting on appropriate variables. (The second sketch after this list shows one way of automating these checks.)

4. Do the analysis. Really do it. You’re obviously not looking for any significant results from the data, you’re just trying to validate your analysis pipeline and make sure you have all the information you need to do the stats. For fMRI experiments – look at your design matrix to see that it makes sense and that you’re not getting warnings about non-orthogonality of the regressors from the software. For fMRI data using visual stimuli, you could look at some basic effects (i.e. all stimuli > baseline) to make sure you get activity in the visual cortex. Button-pushing responses should also be visible as activity in the motor cortex in a single subject too – these kinds of sanity checks can be a good indicator of data quality. If you really want to be punctilious, bang it through a quick ICA routine and see if you get a) component(s) that look stimulus-related, b) something that looks like the default-mode network, and c) any suspiciously nasty-looking noise components (a and b = good, c = bad, obviously).

5. After all that, the rest is easy. Collect your proper set of data, analyse it using the routines you developed in point 4. above, write it up, and then send it to Nature.
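To make step 2 a bit more concrete, here’s a rough sketch of one way to check stimulus durations and dropped frames from within PsychoPy itself. The window settings, the text stimulus and the 500 ms duration are all placeholders I’ve made up – adapt it to your own task:

from psychopy import visual, core

win = visual.Window(fullscr=True)
win.recordFrameIntervals = True        # log the time between successive screen refreshes

stim = visual.TextStim(win, text='X')
clock = core.Clock()
intended_duration = 0.5                # each stimulus should be on screen for 500 ms

for trial in range(20):
    clock.reset()
    while clock.getTime() < intended_duration:
        stim.draw()
        win.flip()
    print('Trial %d actually lasted %.4f s' % (trial, clock.getTime()))

# dropped frames show up as unusually long intervals
print('Longest frame interval: %.4f s' % max(win.frameIntervals))
win.close()

Comparing the printed durations with what your design assumes will show up any cumulative drift of the kind described above.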
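And for step 3, a minimal sketch of the ‘just look at the data’ checks, assuming your log file is a CSV and using some hypothetical column names (‘stimulus_id’, ‘condition’, ‘response’) – substitute your own:

import pandas as pd

log = pd.read_csv('pilot_subject01.csv')

# each stimulus should appear exactly twice (e.g. 40 trials, 20 stimuli)
print(log['stimulus_id'].value_counts())

# check the counter-balancing: how many trials ended up in each condition?
print(log.groupby('condition').size())

# any trials with missing responses?
print(log[log['response'].isna()])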

And that, ladeez and gennulmen, is how to do it. Doing a proper pilot can only save you time and stress in the long run, and you can go ahead with your experiment in the certain knowledge that you’ve done everything in your power to make sure your data is as good as it can possibly be. Of course, it still might be total and utter crap, but that’ll probably be your participants’ fault, not yours.

Happy piloting! TTFN.

*Making sure your responses are being logged with a reasonable level of accuracy is also pretty important for many experiments, although this is a little harder to objectively verify. Hopefully if you’re using some reasonably well-validated piece of software and decent response device you shouldn’t have too many problems.