Monthly Archives: November 2012

New ‘Links’ page

Just a quick notification to say that I’ve just put up a ‘Links’ page, accessible from the top-level menu on this site, or by clicking here. There are a couple of hundred categorised and (more or less) colour-coded links there, all more or less relevant to psychology and/or computing. Hope it’s useful to someone, because it took me bloody ages… ;o)

More to come on the links page as I find more stuff/get around to it.

TTFN.

The Adventures of Ned the Neuron – an interactive, educational ebook.

Y’know… for kids!

A quick post to point you to something that looks like a serious case of the funsies. It’s an interactive ebook that’s just been released detailing the adventures of Ned the Neuron – a proper story-book, but with three interactive games built in, all with the aim of teaching kids about basic neuroscience. It’s produced by Kizoom Labs, which was co-founded by Jessica Voytek (one of the developers, along with her husband Brad, of the excellent brainSCANr site).

You can read more about the book on the Kizoom site here, or download it (iPad only) from the iOS App Store here.

TTFN.

Programming experiments using PsychoPy – first impressions

I wrote a tiny post about PsychoPy a little while ago, and it’s something I’ve been meaning to come back to since then. I’ve recently been tasked with an interesting problem: I need an experimental task for a bunch of undergrads to use in a ‘field study’ – something that they can run on their personal laptops to test people in naturalistic environments (i.e. the participants’ homes). The task is based on a recent paper (Rezlescu et al., 2012) in PLoS One, and involves presenting face stimuli that vary in facial characteristics associated with trustworthiness, in a ‘game’ where the participant plays the role of an investor and has to decide how much money they would invest in each person’s business. I was actually given a version of the experiment programmed (by someone else) in Matlab using the Psychtoolbox system. However, using this version seemed impractical for a number of reasons. Firstly, Matlab licences are expensive, and getting a licensed copy of Matlab onto every student’s computer would have blown the available budget. Secondly, in my (admittedly, limited) experience with Matlab and Psychtoolbox, I’ve always found it to be a little… sensitive. What I mean is that whenever I’ve tried to transfer a (working) program onto another computer, I’ve generally run into trouble: either the timing goes to hell, or a different version of Matlab/Psychtoolbox is needed, or (in the worst cases) the program just crashes and needs debugging all over again. I could foresee that getting this Matlab code working well on every single student’s laptop would be fraught with issues – some of them might be using OS X, and some might be using various versions of Windows – and that was definitely going to cause problems.*

Somewhat counterintuitively therefore, I decided that the easiest thing to do was start from scratch and re-create the experiment using something else entirely. Since PsychoPy is a) entirely free, b) cross-platform (meaning it should work on any OS), and c) something I’d been meaning to look at seriously for a while anyway, it seemed like a good idea to try it out.

I’m happy to report it’s generally worked out pretty well. Despite being a complete novice with PsychoPy, and indeed the Python programming language, I managed to knock something reasonably decent together within a few hours. At times it was frustrating, but that’s always the case when programming experiments (at least, it’s always the case for a pretty rubbish programmer like me, anyway).

So, there are two separate modules to PsychoPy – the ‘Builder’ and the ‘Coder’. Since I’m a complete novice with Python, I steered clear of the Coder view and pretty much used the Builder, which is a really nice graphical interface where experiments can be built up from modules (or ‘routines’) and flow parameters (e.g. ‘loop through X number of trials’) can be added. Here’s a view of the Builder with the main components labelled (clicky for bigness):


At the bottom is the Flow panel, where you add new routines or loops into your program. The large main Routine panel shows a set of tabs (one for each of your routines), where the events that occur in each routine can be defined on a timeline-style layout. At the right is a panel containing a list of stimuli (pictures, videos, random-dot kinematograms, gratings, etc.) and response types (keyboard, mouse, rating scales) that can be added to the routines. Once a stimulus or response is added to a routine, a properties box pops up which allows you to modify basic characteristics (e.g. the position, size, and colour of text) and some advanced ones (through the ‘modify everything’ field in some of the dialog boxes).

It seems like it would be perfectly possible to build some basic kinds of experiments (e.g. a Stroop task) through the Builder without ever having to look at any Python code. However, one of the really powerful features of the Builder interface is the ability to insert custom code snippets (using the ‘code’ component). These can be set to execute at the beginning or end of the experiment or routine, or on every frame. This aspect of the Builder really extends its capabilities and makes it a much more flexible, general-purpose tool. Even though I’m not that familiar with Python syntax, I was fairly easily able to put together some if/else logic, incorporating random number generation, to calculate the amount returned to the investor on each trial, and to use those variables to display post-trial feedback. Clearly, a bit of familiarity with the basics of programming logic is important for using these functions, though.
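For the curious, here’s roughly what that kind of code component looks like. To be clear, this is my own loose sketch – the component and variable names (investSlider, feedbackText) and the 50% ‘share’ rule are made up for illustration, not lifted from the Rezlescu et al. task or from my actual program:

    # A rough sketch of a Builder 'code' component for trust-game feedback.
    # Component/variable names and the pay-off rule are illustrative only.

    # -- Begin Experiment tab --
    import random

    # -- End Routine tab (after the participant has made their investment) --
    invested = investSlider.getRating()        # amount chosen on a rating-scale component
    if random.random() < 0.5:                  # partner 'shares' on half of trials
        returned = invested * 1.5
        feedbackText = "Your partner shared! You get back %.2f" % returned
    else:
        returned = invested * 0.5
        feedbackText = "Your partner kept the money. You get back %.2f" % returned

    # log the trial-by-trial values so they end up in the data file
    thisExp.addData('invested', invested)
    thisExp.addData('returned', returned)

A text component in the following feedback routine can then simply display $feedbackText (with its text field set to update every repeat).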

This brings me to the Coder view – at any point, the ‘Compile Script’ button in the toolbar can be pushed, which opens up the Coder view and displays a Python script generated from the current Builder view. The experiment can then be run either from the Builder or from the Coder. I have to admit, I didn’t quite understand the relationship between the two at first – I was under the impression that these were two views of the same set of underlying data, and that changes in either one would be reflected in the other (a bit like the dual-view mode of HTML editors like Dreamweaver), but it turns out that’s not the case; once I thought about it, mapping arbitrary hand-written Python back onto the Builder’s graphical components would be very difficult to implement. So, a script can be generated from the Builder, and the experiment can then be run from that script; however, changes made to the script cannot be propagated back to the Builder view. This means that unless you’re a serious Python ninja, you’re probably going to be doing most of the work in the Builder view. The Coder view is really good for debugging and working out how things fit together, though – Python is (rightly) regarded as one of the most easily human-readable languages, and if you’ve got a bit of experience with almost any other language, you shouldn’t find it too much of a problem to work out what’s going on.

Another nice feature is the ability of the ‘loop’ component to read in the data it needs for each repeat of the loop (e.g. condition codes, text to be presented, picture filenames, etc.) from a plain-text (comma-separated) file or an Excel sheet. Column headers in the input file become variables in the program, and can then be referenced from other components. Data is also saved by default in the same two file formats – .csv and .xls. Finally, the PsychoPy installation comes with a set of nine pre-built demo experiments, ranging from the basic (Stroop) to more advanced ones (the BART) which involve a few custom code elements.
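As an example of how this works (the column names here are made up for illustration, not taken from my experiment), a conditions file and the rough hand-written equivalent of a Builder loop might look something like this:

    # conditions.csv -- column headers become variables inside the loop:
    #
    #   faceImage,trustLevel,corrAns
    #   face01.png,high,left
    #   face02.png,low,right
    #
    # In the Builder, an Image component's 'Image' field can then be set to
    # $faceImage, and a Keyboard component's correct answer to $corrAns.
    # A hand-written script would do roughly the same thing like this:

    from psychopy import data

    trials = data.TrialHandler(
        trialList=data.importConditions('conditions.csv'),  # one dict per row
        nReps=2,                                             # each face shown twice
        method='random')

    for trial in trials:
        print(trial['faceImage'], trial['trustLevel'], trial['corrAns'])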

There are a couple of features that it doesn’t have which I think would be really useful – in particular, in the Builder view it would be great if individual components could be copied and pasted between different routines. I found myself adding in a number of text elements, and it was a bit laborious to go through them all and change the font, size, position, etc. on each one so they were all the same. Of course, ‘proper’ programmers working in the Coder view would be able to copy/paste these things very easily…

So, I like PsychoPy; I really do. I liked it even more when I transferred my program (written on a MacBook Air running OS X 10.8) onto a creaky old Windows XP desktop and it ran absolutely perfectly, first time. Amazing! I’m having a little bit of trouble getting it running well on a Windows Vista laptop (the program runs slowly and there are some odd-looking artefacts on some of the pictures), but I’m pretty sure that’s an issue with the drivers for the graphics card and can be fixed relatively easily. Of course, Vista sucks; that could be the reason too.

So, I’d recommend PsychoPy to pretty much anybody – the Builder view makes it easy for novices to get started, and the code components and Coder view mean it should keep seasoned code-warriors happy too. Plus, the holy trinity of being totally free, open-source, and cross-platform is a huge advantage. I will definitely be using it again in future projects, and recommending it to students who want to learn this kind of thing.

Happy experimenting! TTFN.

*I don’t mean to unduly knock Matlab and/or Psychtoolbox – they’re both fantastically powerful and useful for some applications.

How to pilot an experiment

I got a serious question for you: What the fuck are you doing? This is not shit for you to be messin’ with. Are you ready to hear something? I want you to see if this sounds familiar: any time you try a decent crime, you got fifty ways you’re gonna fuck up. If you think of twenty-five of them, then you’re a genius… and you ain’t no genius.
Body Heat (1981, Lawrence Kasdan)

To consult the statistician after an experiment is finished is often merely to ask him to conduct a post-mortem examination. He can perhaps say what the experiment died of.
R.A. Fisher (1938)

Don’t crash and burn your experiment.

Doing a pilot run of a new psychology experiment is vital. No matter how well you think you’ve designed and programmed your task, there are (almost) always things that you didn’t think of. Going ahead and spending a lot of time and effort collecting a set of data without running a proper pilot is (potentially) a recipe for disaster. Several times I’ve seen data-sets where there was some subtle issue with the data logging, or the counter-balancing, or something else, which meant that the results were at best compromised, and at worst completely useless.

All of the resultant suffering, agony, and sobbing could have been avoided by running a pilot study in the right way. It’s not sufficient to run through the experimental program a couple of times; a comprehensive test of an experiment has to include a test of the analysis as well. This is particularly true of any experiment involving methods like fMRI/MEG/EEG, where a poor design can lead to a data-set that’s essentially uninterpretable, or perhaps even un-analysable. You may think you’ve logged all the data variables you’ll need for the analysis, and that your design is a work of art, but you can’t be absolutely sure unless you actually do a test of the analysis.

This might seem like overkill, or a waste of effort; however, you’re going to have to design your analysis at some point anyway, so why not do it at the beginning? Analyse your pilot data in exactly the way you’re planning on analysing your main data, save the details (using SPSS syntax, R code, SPM batch jobs – or whatever you’re using), and when you have your ‘proper’ data set, all you’ll (in theory) have to do is plug it into your existing analysis setup.
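For a simple behavioural study, ‘saving the details’ can be as basic as keeping the whole pipeline in one script that takes a data file and spits out the stats, so the real data can just be dropped in later. Here’s a minimal sketch of the idea in Python – the file name, column names, and the paired t-test are all placeholder assumptions, not a prescription:

    # analyse.py -- keep the whole analysis in one re-runnable script.
    # File and column names ('pilot_data.csv', 'participant', 'condition',
    # 'invested') are placeholders for whatever your own study logs.
    import pandas as pd
    from scipy import stats

    def analyse(csv_path):
        df = pd.read_csv(csv_path)
        # mean investment per participant, per trustworthiness condition
        means = df.groupby(['participant', 'condition'])['invested'].mean().unstack()
        # paired comparison between the two conditions
        t, p = stats.ttest_rel(means['high'], means['low'])
        return means, t, p

    if __name__ == '__main__':
        means, t, p = analyse('pilot_data.csv')
        print(means)
        print('t = %.2f, p = %.3f' % (t, p))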

These are the steps I normally go through when getting a new experiment up and running. Not all of them will be appropriate for all experiments; your mileage may vary, etc.

1. Test the stimulus program. Run through it a couple of times yourself, get a friend/colleague to do it once too, and ask for feedback. Make sure it looks like it’s doing what you think it should be doing.

2. Check the timing of the stimulus program. This is almost always essential for an fMRI experiment, but may not be desperately important for some kinds of behavioural studies. Run through it with a stopwatch (the stopwatch function on your phone is probably accurate enough). If you’re doing any kind of experiment involving rapid presentation of stimuli (visual masking, RSVP paradigms), you’ll want to do some more extensive testing to make sure your stimuli are being presented in the way that you think – this might involve plugging a light-sensitive diode into an oscilloscope, sticking it to your monitor with a bit of blu-tack, and measuring the waveforms produced by your stimuli. For fMRI experiments the timing is critical. Even though the Haemodynamic Response Function (HRF) is slow (and somewhat variable), you’re almost always fighting to pull enough signal out of the noise, so why introduce more? A cumulative error of only a few tens of milliseconds per trial can mean that your experiment is a few seconds out by the end of a 10-minute scan – this means that your model regressors will be way off, and your results will likely suck.*
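One easy way to catch this kind of cumulative drift is to compare the onset times your program actually logs against the onsets it was supposed to produce. A rough sketch, assuming your data file has an ‘onset’ column in seconds and a fixed planned trial-to-trial interval (both of which are assumptions about your particular design):

    # Check for cumulative timing drift in the logged trial onsets.
    # Assumes a column called 'onset' (in seconds) and a fixed planned interval.
    import numpy as np
    import pandas as pd

    df = pd.read_csv('pilot_data.csv')
    planned_interval = 4.0                    # intended trial-to-trial interval (s)

    onsets = df['onset'].to_numpy()
    expected = onsets[0] + np.arange(len(onsets)) * planned_interval
    drift = onsets - expected

    print('max absolute drift: %.3f s' % np.abs(drift).max())
    print('drift at end of run: %.3f s' % drift[-1])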

3. Look at the behavioural data files. I don’t mean do the analysis (yet), I mean just look at the data. First make sure all the variables you want logged are actually there, then dump the data into Excel and get busy with the sort function. For instance, if you have 40 trials and 20 stimuli (each presented twice), make sure that each one really is being presented twice, and not some of them once and some of them three times; sorting by the stimulus ID should make it instantly clear what’s going on. Make sure the correct responses and any errors are being logged correctly. Make sure the counter-balancing is working correctly by sorting on the appropriate variables.
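If you’d rather not do all the sorting by hand, the same checks take a couple of lines of Python. The column names here (‘stimID’, ‘condition’, ‘correct’) are placeholders for whatever your program actually logs:

    # Quick sanity checks on a logged data file before any real analysis.
    import pandas as pd

    df = pd.read_csv('pilot_data.csv')

    print(df.columns.tolist())              # are all the variables actually there?
    print(df['stimID'].value_counts())      # is each stimulus really shown twice?
    print(df.groupby('condition').size())   # are the conditions balanced?
    print(df['correct'].value_counts())     # are responses/errors being logged sensibly?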

4. Do the analysis. Really do it. You’re obviously not looking for any significant results from the data; you’re just trying to validate your analysis pipeline and make sure you have all the information you need to do the stats. For fMRI experiments, look at your design matrix to see that it makes sense and that you’re not getting warnings about non-orthogonality of the regressors from the software. For fMRI data using visual stimuli, you could look at some basic effects (e.g. all stimuli > baseline) to make sure you get activity in the visual cortex. Button-pushing responses should also be visible as activity in the motor cortex, even in a single subject – these kinds of sanity checks can be a good indicator of data quality. If you really want to be punctilious, bang it through a quick ICA routine and see if you get a) component(s) that look stimulus-related, b) something that looks like the default-mode network, and c) any suspiciously nasty-looking noise components (a and b = good, c = bad, obviously).
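If your analysis package doesn’t flag collinearity for you, a crude check on the design matrix is easy enough to do by hand. This sketch assumes you can export the matrix as a plain text file with one column per regressor – how you get it out of SPM/FSL/whatever is up to you:

    # Rough check on how correlated the regressors in a design matrix are.
    import numpy as np

    X = np.loadtxt('design_matrix.txt')      # placeholder: columns = regressors
    corr = np.corrcoef(X, rowvar=False)      # pairwise correlations between regressors

    np.fill_diagonal(corr, 0)                # ignore each regressor's correlation with itself
    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    print('most correlated pair: regressors %d and %d, r = %.2f' % (i, j, corr[i, j]))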

5. After all that, the rest is easy. Collect your proper set of data, analyse it using the routines you developed in step 4 above, write it up, and then send it to Nature.

And that, ladeez and gennulmen, is how to do it. Doing a proper pilot can only save you time and stress in the long run, and you can go ahead with your experiment in the certain knowledge that you’ve done everything in your power to make sure your data is as good as it can possibly be. Of course, it still might be total and utter crap, but that’ll probably be your participants’ fault, not yours.

Happy piloting! TTFN.

*Making sure your responses are being logged with a reasonable level of accuracy is also pretty important for many experiments, although this is a little harder to verify objectively. Hopefully, if you’re using a reasonably well-validated piece of software and a decent response device, you shouldn’t have too many problems.