Blog Archives

How to hack conditional branching in the PsychoPy builder

Regular readers will know that I’m a big fan of PsychoPy, which (for non-regular readers; *tsk*) is a piece of free, open-source software for designing and programming experiments, built on the Python language. I’ve been using it a lot recently, and I’m happy to report my initial ardour for it is still lambently undimmed.

The PsychoPy ‘builder’ interface (a generally brilliant, friendly, GUI front-end) does have one pretty substantial drawback though; it doesn’t support conditional branching. In programming logic, a ‘branch’ is a point in a program which causes the computer to start executing a different set of instructions. A ‘conditional branch’ is where the computer decides what to do out of two or more alternatives (i.e. which branch to follow) based on some value or ‘condition’. Essentially the program says ‘if A is true, do X; otherwise (or ‘else’ in programming jargon) if B is true, do Y’. One common use of conditional branching in psychology experiments is to repeat trials that the subject got incorrect; for instance, one might want one’s subjects to achieve 90% correct on a block of trials before they continue to the next one, so the program would have something in it which said ‘if (correct trials >= 90%), continue to the next block; else, repeat the incorrect trials’.
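In Python (the language PsychoPy is built on) that logic might look something like the following. This is just a sketch – ‘run_next_block’ and ‘repeat_incorrect_trials’ are hypothetical placeholder functions, not anything PsychoPy actually provides:

# Sketch of the branching logic described above; run_next_block and
# repeat_incorrect_trials are hypothetical placeholders.
accuracy = n_correct / float(n_trials)   # proportion of trials correct
if accuracy >= 0.9:
    run_next_block()                     # 90% or better: move on
else:
    repeat_incorrect_trials()            # otherwise, go around again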

At the bottom of the PsychoPy builder is a time-line graphic (the ‘flow panel’) which shows the parts of the experiment:

PsychoPy_builder

The experiment proceeds from left to right, and each part of the flow panel is executed in turn. The loops around parts of the flow panel indicate that the bits inside them are run multiple times (i.e. they’re the trial blocks). This is an extremely powerful interface, but there’s no option to ‘skip’ part of the flow diagram – everything is run in the order in which it appears, from left to right.

This is a slight issue for programming fMRI experiments that use block designs. In block-design experiments, typically two (or more) block types of about 15-20 seconds each are alternated. They might be a ‘rest’ block (no stimuli) alternated with a visual stimulus, or two different kinds of stimuli – say, household objects vs. faces. For a simple two-condition-alternating experiment one could just produce routines for the two conditions, and throw a loop around them for as many repeats as needed. The problem arises when there are more than two block types in your experiment and you want to randomise them (i.e. have a sequence which goes ABCBCACAB… etc.). There’s no easy way of doing this in the builder. In an experiment lasting 10 minutes one might have 40 15-second blocks, and the only way to produce the (pseudo-random) sequence you want would be with 40 separate elements in the flow panel, all executed one-by-one (with no loops). Building such a task would be very tedious and, more importantly, crashingly inelegant. Furthermore, you probably wouldn’t want to use the same sequence for every participant, so you’d have to laboriously build different versions with different sequences of blocks. There’s a good reason why this kind of functionality hasn’t been implemented: it would make the builder interface much more complicated, and the PsychoPy developers are (rightly) concerned with keeping the builder as clean and simple as possible.

Fortunately, there’s an easy little hack, which was actually suggested by Jon Peirce (and others) on the PsychoPy users forum. You can in fact get PsychoPy to ‘skip’ routines in the flow panel through the use of loops and a tiny bit of coding magic. I thought it was worth elaborating on the solution here, and I’ve even created a simple little demo program which you can download and peruse/modify.

So, this is how it works. I’ve set up my flow panel like this:

flow

So, there are two blocks, each of which has its own loop, a ‘blockSelect’ routine, and a ‘blockSelectLoop’ enclosing the whole thing. The two blocks can contain any kind of (different) stimulus element; one could have pictures and one could have sounds, for instance – I’ve just put some simple text in each one for demo purposes. The two block-level loops have no conditions files associated with them, but in the ‘nReps’ field of their properties boxes I’ve put a variable: ‘nRepsblock1’ for block1 and ‘nRepsblock2’ for block2. This tells the program how many times to go around that loop. The values of these variables are set by the blockSelect routine, which contains a code element that looks like this:

Screen Shot 2013-11-12 at 09.58.02

The full code in the ‘Begin Routine’ box above is this:

# selectBlock comes from the conditions file for blockSelectLoop;
# a loop whose nReps value is 0 is simply skipped on that pass.
if selectBlock == 1:
    nRepsblock1 = 1    # run block1 once...
    nRepsblock2 = 0    # ...and skip block2
elif selectBlock == 2:
    nRepsblock1 = 0    # skip block1...
    nRepsblock2 = 1    # ...and run block2 once

This is a conditional branching statement which says ‘if selectBlock == 1, do X; else if selectBlock == 2, do Y’. The variable ‘selectBlock’ is derived from the conditions file (an Excel workbook) for the blockSelectLoop, which is very simple and looks like this:

Screen Shot 2013-11-12 at 10.06.36
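In plain text, the whole file is just a single ‘selectBlock’ column, with one row per pass around the loop, in presentation order:

selectBlock
1
2
1
2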

So, at the beginning of the experiment I define the two variables for the number of repetitions of the two blocks; then, on every go around the big blockSelectLoop, the code in the blockSelect routine sets the number of repetitions of one of the small block-level loops to 0, and the other to 1. Setting the number of repetitions for a loop to 0 basically means ‘skip that loop’, so one is always skipped and the other is always executed. The blockSelectLoop sequentially executes the conditions in the Excel file, so the upshot is that this program runs block1, then block2, then block1 again, then block2 again. Now, all I have to do if I want to create a different sequence of blocks is edit the column in the Excel conditions file, to produce any kind of sequence I want.
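Better still, you don’t even have to edit the sequence by hand; a few lines of Python will generate a fresh pseudo-random sequence for each participant. Here’s a minimal sketch, assuming a single ‘selectBlock’ column saved as a .csv file (which PsychoPy’s loops read just as happily as Excel workbooks):

import random

blocks = [1, 2] * 20      # 40 blocks: 20 of each type
random.shuffle(blocks)    # a different pseudo-random order every run

# Write a one-column conditions file for blockSelectLoop to read
with open('blockConditions.csv', 'w') as f:
    f.write('selectBlock\n')        # the header becomes the variable name
    for block in blocks:
        f.write('%d\n' % block)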

Hopefully it should be clear how to extend this very simple example to use three (or more) block/trial types. I’ve actually used this technique to program a rapid event-related experiment based on this paper, which includes about 10 different trial types, randomly presented, and it works well. I also hope that this little program is a good example of what can be achieved by using code snippets in the builder interface; this is a tremendously powerful feature, and really extends the capabilities of the builder well beyond what’s achievable through the GUI alone. It’s also a really good halfway step between relying completely on the builder GUI and the scariness of working with ‘raw’ Python code in the coder interface.

If you want to download this code, run it yourself and poke it with a stick a little bit, I’ve made it available to download as a zip file with everything you need here. Annoyingly, WordPress doesn’t allow the upload of zip files to its blogs, so I had to change the file extension to .pdf; just download (right-click the link and ‘Save link as…’) and then rename the .pdf bit to .zip and it should work fine. Of course, you’ll also need to have PsychoPy installed as well. Your mileage may vary, any disasters that occur as a result of you using this program are your own fault, etc. etc. blah blah.

Happy coding! TTFN.

Some notes on the use of voice keys in reaction time experiments

prod_MLE1312web
Cedrus SV-1 voice key device

Somebody asked me about using a voice key device the other day, and I realised it’s not something I’d ever addressed on here. A voice key is often used in experiments where you need to obtain a vocal response time, for instance in a vocal Stroop experiment, or a picture-naming task.

There are broadly two ways of doing this. The first is easy, but expensive, and not very good. The second is time-consuming, but cheap and very reliable.

The first method involves using a bit of dedicated hardware – essentially a microphone pre-amp which detects the onset of a vocal response and sends out a signal when it occurs. The Cedrus SV-1 device pictured above is a good example. This is easy, because you have all your vocal reaction times logged for you, but not totally reliable, because you have to pre-set a loudness threshold on the box, and it might miss some responses if the person just talks quietly or there’s some unexpected background noise. It should be relatively simple to get whatever stimulus software you’re running to recognise the input from the device and log it as a response.

The other way is very simple to set up, in that you just plug a microphone into the sound card of your stimulus computer and record the vocal responses on each trial as .wav files. Stimulus software like PsychoPy can do this very easily. The downside to this is that you then have to take those sound files and examine them in some way in order to get the reaction time data out – this could mean literally examining the waveforms for each trial in a sound editor (such as Audacity), putting markers on the start of the speech manually, and calculating vocal RTs relative to the start of the file/trial. This is very reliable and precise, but obviously reasonably time-consuming. Manually putting markers on sound files is still the ‘gold standard’ for voice-onset reaction times. Ideally, you should get someone else to do this for you, so they’ll be ‘blind’ to which trials are which, and unbiased in calculating the reaction times. You can also possibly automate the process using a bit of software called SayWhen (paper here).

Example of a speech waveform, viewed in Audacity

Which method is best depends largely on the number of trials you have in your experiment. The second method is definitely superior (and cheaper, easier to set up) but if you have eleventy-billion trials in your experiment, manually examining them all post hoc may not be very practical, and a more automatic solution might be worthwhile. If you were really clever you could try and do both at once – have two computers set up, the first running the stimulus program, and the second recording the voice responses, but also running a bit of code that signals the first computer when it detects a voice onset. Might be tricky to set up and get working, but once it was, you’d have all your RTs logged automatically on the first computer, plus the .wav files recorded on the second for post hoc analysis/data-cleaning/error-checking etc. if necessary.
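If you fancy rolling your own automatic solution, the core of a crude onset-detector is only a few lines of Python. Here’s a minimal sketch – the normalisation scheme and the 0.1 threshold are my own assumptions, the threshold would need tuning to your recordings, and it will cheerfully mis-fire on coughs and lip-smacks:

import numpy as np
from scipy.io import wavfile

def voice_onset_ms(wav_path, threshold=0.1):
    """Return the time (in ms) at which the signal first exceeds threshold."""
    rate, data = wavfile.read(wav_path)
    if data.ndim > 1:                 # stereo recording: average down to mono
        data = data.mean(axis=1)
    data = np.abs(data.astype(float))
    data /= data.max()                # normalise so the loudest sample is 1.0
    above = data > threshold
    if not above.any():               # nothing crossed the threshold at all
        return None
    return 1000.0 * above.argmax() / rate   # argmax finds the first True sample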

Happy vocalising!

***UPDATE***

Two researchers have pointed out in the comments that a system for automatically generating response times from sound files already exists, called CheckVocal. It seems to be designed to work with the DMDX experimental programming system (free software that uses Microsoft’s DirectX system to present stimuli). I’m not sure whether it’ll work with other systems or not, but it’s worth looking at… I’ve also added the information to my Links page.

MakeHuman – free, open-source software for 3D-modelling of humans

It’s always exciting when you find a new piece of cool software to play with, and even more so when what you’ve found is totally free, open-source, and available on all platforms. So it is with MakeHuman – an utterly awesome bit of kit. I wrote a piece before about FaceGen, which is also pretty cool, but MakeHuman takes it to the next level, by modelling all kinds of body characteristics as well as faces, and, of course, doing it all for free.

I’ve just downloaded it and played with it for a few minutes, but I’m already impressed by the range of options available. Through a very simple slider- and radio-button-based interface you have very fine control over all kinds of variables, including gender, weight, age, height, and many more, with endless fine tweakability of the body and face possible if you dig through the options. There are also basic libraries of clothes and poses included. Here’s a, well, a human I made in just a couple of minutes:

Screen Shot 2013-08-08 at 09.45.18

And here’s a close-up of the face, after I added some hair and gave him a nasty expression:

Screen Shot 2013-08-08 at 09.47.05

Pretty cool indeed. This could potentially be a massively useful tool for people interested in face/body perception – using this, one could generate a large number of highly-controlled experimental stimuli that just differ in one aspect (say, weight, or race… whatever) very easily and quickly. Download it and have a play around!

Links page update

Just posted a fairly major update to my links page, including new sections on Neuropsychological/Cognitive testing, Neuromarketing/research businesses, and Academic conferences and organisations, plus lots of other links added to the existing sections, and occasional sprinkles of extra-bonus-added sarcasm throughout. Yay! Have fun people.

Website of the week: Cogsci.nl. OpenSesame, illusions, online experiments, and more.

A quick post to point you towards a great website with a lot of really cool content (if you’re into that kind of thing, which if you’re reading this blog, then I assume you probably are… anyway, I digress; I apologise, it was my lab’s Christmas party last night and I’m in a somewhat rambling mood. Anyway, back to the point).

So, the website is called cogsci.nl, and is run by a post-doc at the University of Aix-Marseille called Sebastiaan Mathôt. It’s notable in that it’s the homepage of OpenSesame – a very nice-looking, Python-based graphical experiment builder that I’ve mentioned before on these very pages. There’s a lot of other cool stuff on the site though, including more software (featuring a really cool online tool for instantly creating Gabor patch stimuli), a list of links to stimulus sets, and a selection of really cool optical illusions. It’s really worth spending 20 minutes of your time poking around a little and seeing what’s there.

I’ll leave you with a video of Sebastiaan demonstrating an experimental program, written in his OpenSesame system, running on a Google Nexus 7 Tablet (using Ubuntu linux as an OS). The future! It’s here!

Programming experiments using PsychoPy – first impressions

I wrote a tiny post about PsychoPy a little while ago and it’s something I’ve been meaning to come back to since then. I’ve recently been tasked with an interesting problem; I need an experimental task for a bunch of undergrads to use in a ‘field study’ – something that they can run on their personal laptops, and use to test people in naturalistic environments (i.e. the participants’ homes). The task is based on a recent paper (Rezlescu et al., 2012) in PLoS One, and involves presenting face stimuli that vary in facial characteristics associated with trustworthiness, in a ‘game’ where the participant plays the role of an investor and has to decide how much money they would invest in each person’s business. I was actually given a version of the experiment programmed (by someone else) in Matlab using the Psychtoolbox system. However, using this version seemed impractical for a number of reasons. Firstly, Matlab licences are expensive, and getting a licensed version of Matlab on every student’s computer would have blown the budget available. Secondly, in my (admittedly, limited) experience with Matlab and Psychtoolbox, I’ve always found it to be a little… sensitive. What I mean is that whenever I’ve tried to transfer a (working) program onto another computer, I’ve generally run into trouble. Either the timing goes to hell, or a different version of Matlab/Psychtoolbox is needed, or (in the worst cases) the program just crashes and needs debugging all over again. I could foresee that getting this Matlab code working well on every single student’s laptop would be fraught with issues – some of them might be using OS X, and some might be using various versions of Windows – this was definitely going to cause problems.*

Somewhat counterintuitively therefore, I decided that the easiest thing to do was start from scratch and re-create the experiment using something else entirely. Since PsychoPy is a) entirely free, b) cross-platform (meaning it should work on any OS), and c) something I’d been meaning to look at seriously for a while anyway, it seemed like a good idea to try it out.

I’m happy to report it’s generally worked out pretty well. Despite being a complete novice with PsychoPy, and indeed the Python programming language, I managed to knock something reasonably decent together within a few hours. At times it was frustrating, but that’s always the case when programming experiments (at least, it’s always the case for a pretty rubbish programmer like me, anyway).

So, there are two separate modules to PsychoPy – the ‘Builder’ and the ‘Coder’. Since I’m a complete novice with Python, I steered clear of the Coder view, and pretty much used the Builder, which is a really nice graphical interface where experiments can be built up from modules (or ‘routines’) and flow parameters (i.e. ‘loop through X number of trials’) can be added. Here’s a view of the Builder with the main components labelled (clicky for bigness):


At the bottom is the Flow panel, where you add new routines or loops into your program. The large main Routine panel shows a set of tabs (one for each of your routines) where the events that occur in each routine can be defined on a timeline-style layout. At the right is a panel containing a list of stimuli (pictures, videos, random-dot kinematograms, gratings, etc.) and response types (keyboard, mouse, rating scales) that can be added to the routines. Once a stimulus or response is added to a routine, a properties box pops up which allows you to modify basic characteristics (e.g. position, size, and colour of text) and some advanced ones (through the ‘modify everything’ field in some of the dialog boxes).

It seems like it would be perfectly possible to build some basic kinds of experiments (e.g. a Stroop task) through the builder without ever having to look at any Python code. However, one of the really powerful features of the Builder interface is the ability to insert custom code snippets (using the ‘code’ component). These can be set to execute at the beginning or end of the experiment or routine, or on every frame. This aspect of the Builder really extends its capabilities and makes it a much more flexible, general-purpose tool. Even though I’m not that familiar with Python syntax, I was fairly easily able to get some if/else functions incorporating random number generation that calculated the amount returned to the investor on a trial, and to use those variables to display post-trial feedback. Clearly a bit of familiarity with the basics of programming logic is important to use these functions though.
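As a flavour of the kind of thing I mean, the feedback logic on each trial was something along these lines. This is a reconstruction rather than the experiment’s actual code, and the names (‘investScale’, ‘feedbackMsg’) and the 50% repayment rule are all hypothetical:

import random

# 'End Routine' tab of a code component
invested = investScale.getRating()   # amount chosen on the rating scale
if random.random() < 0.5:            # the 'trustee' repays on half of trials
    returned = invested * 3          # a handsome return...
else:
    returned = 0                     # ...or nothing at all
feedbackMsg = 'You received %s back' % returned   # shown by a text component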

This brings me to the Coder view – at any point the ‘Compile Script’ button in the toolbar can be pushed, which opens up the Coder view and displays a script derived from the current Builder view. The experiment can then be run either from the Builder or the Coder. I have to admit, I didn’t quite understand the relationship between the two at first – I was under the impression that these were two views of the same set of underlying data, and changes in either one would be reflected in the other (a bit like the dual-view mode of HTML editors like Dreamweaver), but it turns out that’s not the case – and once I thought about it, I realised that propagating arbitrary edits from a full Python script back into the Builder’s graphical components would be very difficult to implement. So, a script can be generated from the Builder, and the experiment can then be run from that script; however, changes made to it cannot be propagated back to the Builder view. This means that unless you’re a serious Python ninja, you’re probably going to be doing most of the work in the Builder view. The Coder view is really good for debugging and working out how things fit together though – Python is (rightly) regarded as one of the most easily human-readable languages, and if you’ve got a bit of experience with almost any other language, you shouldn’t find it too much of a problem to work out what’s going on.

Another nice feature is the ability of the ‘loop’ functions to read in the data they need for each repeat of the loop (e.g. condition codes, text to be presented, picture filenames, etc.) from a plain-text (comma-separated) file or Excel sheet. Column headers in the input file become variables in the program and can then be referenced from other components. Data is also saved by default in the same two file formats – .csv and .xls. Finally, the PsychoPy installation comes with a set of nine pre-built demo experiments which range from the basic (Stroop) to more advanced ones (BART) that involve a few custom code elements.
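For instance, a conditions file for a simple Stroop-type task needn’t be any more than this (an illustrative file of my own invention, not one of the bundled demos):

word,colour,corrAns
RED,red,left
RED,green,right
GREEN,green,left
GREEN,red,right

A text component can then put $word in its text field and $colour in its colour field, and the loop fills in the values from a new row on every trial.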

There’s a couple of features that it doesn’t have which I think would be really useful – in particular in the Builder view it would be great if individual components could be copied and pasted between different routines. I found myself adding in a number of text elements and it was a bit laborious to go through them all and change the font, size, position etc. on each one so they were all the same. Of course ‘proper’ programmers working in the Coder view would be able to copy/paste these things very easily…

So, I like PsychoPy; I really do. I liked it even more when I transferred my program (written on a MacBook Air running OS X 10.8) onto a creaky old Windows XP desktop and it ran absolutely perfectly, first time. Amazing! I’m having a little bit of trouble getting it running well on a Windows Vista laptop (the program runs slowly and has some odd-looking artefacts on some of the pictures) but I’m pretty sure that’s an issue with the drivers for the graphics card and can be relatively easily fixed. Of course, Vista sucks, that could be the reason too.

So, I’d recommend PsychoPy to pretty much anybody – the Builder view makes it easy for novices to get started, and the code components and Coder view mean it should keep seasoned code-warriors happy too. Plus, the holy trinity of being totally free, open-source, and cross-platform is a huge advantage. I will definitely be using it again in future projects, and recommending it to students who want to learn this kind of thing.

Happy experimenting! TTFN.

*I don’t mean to unduly knock Matlab and/or Psychtoolbox – they’re both fantastically powerful and useful for some applications.

Video tutorial on designing and running psychology experiments using PsychoPy

PsychoPy is something which I’ve been meaning to write something substantive on for a while. Briefly though, it’s a system for designing and running experiments, programmed in the Python language, with a nice GUI front-end to it. I’ve only flirted with it briefly, but the open-source and cross-platform nature of it makes it a very attractive package for programming experiments, in my opinion. If I was learning this stuff for the first time, it’s definitely the system I’d use.

The purpose of this post was just to publicise a YouTube video, put up by the creator of PsychoPy – Jon Peirce, of Nottingham University. The video is a great little starter-tutorial for PsychoPy and gently walks the viewer through creating a simple experiment – great stuff.

Happy experimenting! Here’s the vid:

Image Morphing and Psychology Research – A Case Study

As an example of the ways in which technology and psychology have developed together recently, I thought it would be fun to do a little case-study of a particular area of research which has benefitted from advances in computer software over recent years. Rather than talk about the very technical disciplines like brain imaging (which have of course advanced enormously recently) I thought it would be more fun to concentrate on an area of relatively ‘pure’ psychology, and one of the most important and fundamental cognitive processes which is present pretty much from birth; face perception.

In November 1991 Michael Jackson released the single ‘Black or White’; the first to be released from his eighth album ‘Dangerous’. The single is of some significance as it marked the beginning of Jackson’s descent from the firmament of music stardom into the spiral of musical mediocrity and personal weirdness which only ended with his death in 2009, but for the purposes of the present discussion it was interesting because of part of its accompanying video. Towards the end of the video, a series of people of both sexes and of various ethnic groups are shown singing along with the song, and the images of their faces morph into each other in series.


How to Program Experiments 1: Cheating with PowerPoint

And here we finally are; it’s something I’ve been avoiding getting around to for a while, because it’s such a big and complicated topic, but to a large extent it’s the raison d’être behind this entire blog, so I knew I’d have to roll up my sleeves and get down to it eventually. The topic I’m referring to is, of course: how do you make a computer perform those nice cognitive-type psychology experiments? How do you get it to put pictures, words, or videos up on the screen, collect responses, store the data, and do it all with accurate timing? How, in a nutshell, do you make a computer your all-singin’, all-dancin’, research-data-collectin’ bitch?

As I said, this is a massive topic, so before getting into specialised software and ‘proper’ experimental programming we’re going to start slowly, and we’re going to start with some techniques for using a piece of software that’s on practically every PC – Microsoft PowerPoint. PowerPoint is essentially just a program for presenting multimedia (words, pictures, video, sounds) on a screen in a nice professional way, so we can use it for presenting some simple experimental stimuli. The one thing that it won’t really do is collect input from a participant (except the standard ‘advance to the next slide’ input), which is a pretty big limitation for experimental purposes, but one that can be worked around.

The best way of explaining how to use PowerPoint is by example, so I’ve created a couple of illustrative slideshows.

The first one is a simple rating task, where pictures of faces are shown, with a Visual Analogue Scale (VAS) presented on-screen underneath each picture.
