Category Archives: Commentary

Website of the week: Cogsci.nl. OpenSesame, illusions, online experiments, and more.

A quick post to point you towards a great website with a lot of really cool content (if you’re into that kind of thing, which, if you’re reading this blog, I assume you probably are… anyway, I digress; I apologise, it was my lab’s Christmas party last night and I’m in a somewhat rambling mood. Anyway, back to the point).

So, the website is called cogsci.nl, and is run by a post-doc at the University of Aix-Marseille called Sebastiaan Mathôt. It’s notable in that it’s the homepage of OpenSesame – a very nice-looking, Python-based graphical experiment builder that I’ve mentioned before on these very pages. There’s a lot of other cool stuff on the site though, including more software (featuring a really cool online tool for instantly creating Gabor patch stimuli), a list of links to stimulus sets, and a selection of really cool optical illusions. It’s well worth spending 20 minutes of your time poking around a little and seeing what’s there.

I’ll leave you with a video of Sebastiaan demonstrating an experimental program, written in his OpenSesame system, running on a Google Nexus 7 tablet (with Ubuntu Linux as the OS). The future! It’s here!


New ‘Links’ page

A quick notification to say that I’ve just put up a ‘Links’ page, accessible from the top-level menu on this site, or by clicking here. There are a couple of hundred categorised and (more or less) colour-coded links there, all broadly relevant to psychology and/or computing. Hope it’s useful to someone, because it took me bloody ages… ;o)

More to come on the links page as I find more stuff/get around to it.

TTFN.

The Adventures of Ned the Neuron – an interactive, educational ebook.

Y’know… for kids!

A quick post to point you to something that looks like a serious case of the funsies. It’s a newly released interactive ebook detailing the adventures of Ned the Neuron – a proper story-book, but with three interactive games built in, all with the aim of teaching kids about basic neuroscience. It’s produced by Kizoom Labs, which was co-founded by Jessica Voytek (one of the developers, along with her husband Brad, of the excellent brainSCANr site).

You can read more about the book on the Kizoom site here, or download it (iPad only) from the iOS App Store here.

TTFN.

Programming experiments using PsychoPy – first impressions

I wrote a tiny post about PsychoPy a little while ago and it’s something I’ve been meaning to come back to since then. I’ve recently been tasked with an interesting problem: I need an experimental task for a bunch of undergrads to use in a ‘field study’ – something that they can run on their personal laptops to test people in naturalistic environments (i.e. the participants’ homes). The task is based on a recent paper (Rezlescu et al., 2012) in PLoS One, and involves presenting face stimuli that vary in facial characteristics associated with trustworthiness, in a ‘game’ where the participant plays the role of an investor and has to decide how much money they would invest in each person’s business. I was actually given a version of the experiment programmed (by someone else) in Matlab using the Psychtoolbox system. However, using this version seemed impractical for a number of reasons. Firstly, Matlab licences are expensive, and getting a licensed copy of Matlab onto every student’s computer would have blown the available budget. Secondly, in my (admittedly limited) experience with Matlab and Psychtoolbox, I’ve always found it to be a little… sensitive. What I mean is that whenever I’ve tried to transfer a (working) program onto another computer, I’ve generally run into trouble. Either the timing goes to hell, or a different version of Matlab/Psychtoolbox is needed, or (in the worst cases) the program just crashes and needs debugging all over again. I could foresee that getting this Matlab code working well on every single student’s laptop – some running OS X, some running various versions of Windows – would be fraught with issues.*

Somewhat counterintuitively therefore, I decided that the easiest thing to do was start from scratch and re-create the experiment using something else entirely. Since PsychoPy is a) entirely free, b) cross-platform (meaning it should work on any OS), and c) something I’d been meaning to look at seriously for a while anyway, it seemed like a good idea to try it out.

I’m happy to report it’s generally worked out pretty well. Despite being a complete novice with PsychoPy, and indeed the Python programming language, I managed to knock something reasonably decent together within a few hours. At times it was frustrating, but that’s always the case when programming experiments (at least, it’s always the case for a pretty rubbish programmer like me, anyway).

So, there are two separate modules to PsychoPy – the ‘Builder’ and the ‘Coder’. Since I’m a complete novice with Python, I steered clear of the Coder view and pretty much stuck to the Builder, which is a really nice graphical interface where experiments can be built up from modules (or ‘routines’) and flow parameters (e.g. ‘loop through X number of trials’) can be added. Here’s a view of the Builder with the main components labelled (clicky for bigness):


At the bottom is the Flow panel, where you add new routines or loops into your program. The large main Routine panel shows a set of tabs (one for each of your routines) where the events that occur in each routine can be defined on a timeline-style layout. At the right is a panel containing a list of stimulus types (pictures, videos, random-dot kinematograms, gratings, etc.) and response types (keyboard, mouse, rating scales) that can be added to the routines. Once a stimulus or response is added to a routine, a properties box pops up which lets you modify its basic characteristics (e.g. the position, size, and colour of text) and, through the ‘modify everything’ field in some of the dialog boxes, some more advanced ones.

It seems like it would be perfectly possible to build some basic kinds of experiments (e.g. a Stroop task) through the Builder without ever having to look at any Python code. However, one of the really powerful features of the Builder interface is the ability to insert custom code snippets (using the ‘code’ component). These can be set to execute at the beginning or end of the experiment or routine, or on every frame. This aspect of the Builder really extends its capabilities and makes it a much more flexible, general-purpose tool. Even though I’m not that familiar with Python syntax, I was fairly easily able to put together some if/else logic, incorporating random number generation, that calculated the amount returned to the investor on each trial, and to use those variables to display post-trial feedback. Clearly, a bit of familiarity with the basics of programming logic is important for using these functions, though.
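
For the curious, here’s a rough sketch of the kind of logic I mean – plain Python of the sort that could sit in a code component’s ‘End Routine’ tab. The variable names and pay-off rules here are made up for illustration; they’re not the actual parameters from the Rezlescu et al. task.

```python
import random

investment = 5           # amount the participant chose to invest this trial
trustworthy_face = True  # condition flag, read in from the conditions file

# The 'trustee' repays triple the investment with some probability that
# depends on the face condition; otherwise the stake is lost.
repay_prob = 0.7 if trustworthy_face else 0.4
if random.random() < repay_prob:
    returned = investment * 3
    feedback_text = 'Your investment was repaid: you receive %d back!' % returned
else:
    returned = 0
    feedback_text = 'Your investment was not repaid. You receive nothing back.'

print(feedback_text)  # in the real task this string is fed to a text component
```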

This brings me to the Coder view – at any point the ‘Compile Script’ button in the toolbar can be pushed, which opens up the Coder view and displays a Python script generated from the current Builder view. The experiment can then be run either from the Builder or the Coder. I have to admit, I didn’t quite understand the relationship between the two at first – I was under the impression that these were two views of the same set of underlying data, and that changes in either one would be reflected in the other (a bit like the dual-view mode of HTML editors like Dreamweaver). It turns out that’s not the case and, once I thought about it, that makes sense: translating arbitrary hand-edits to a general-purpose Python script back into the Builder’s graphical components would be very difficult to do reliably. So, a script can be generated from the Builder, and the experiment can then be run from that script; however, changes made to the script cannot be propagated back to the Builder view. This means that unless you’re a serious Python ninja, you’re probably going to be doing most of the work in the Builder view. The Coder view is really good for debugging and working out how things fit together, though – Python is (rightly) regarded as one of the most easily human-readable languages, and if you’ve got a bit of experience with almost any other language, you shouldn’t find it too much of a problem to work out what’s going on.
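
To give a flavour of what the Coder view deals in, here’s a minimal hand-written PsychoPy script – just standard calls from the psychopy library, with arbitrary parameters. The scripts the Builder generates are much longer than this, but they’re built from the same ingredients.

```python
from psychopy import visual, core, event

# open a window and prepare a simple text stimulus
win = visual.Window(size=(800, 600), color='grey', units='pix')
msg = visual.TextStim(win, text='Press any key to finish', color='white')

msg.draw()
win.flip()        # put the text on the screen
event.waitKeys()  # wait for any keypress
win.close()
core.quit()
```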

Another nice feature is the ability of the ‘loop’ functions to read in the data they need for each repeat of the loop (e.g. condition codes, text to be presented, picture filenames, etc.) from a plain text (comma-separated) file or Excel sheet. Column headers in the input file become variables in the program and can then be referenced from other components. Data is also saved by default in the same two file formats – .csv and .xls. Finally, the PsychoPy installation comes with a set of nine pre-built demo experiments, which range from the basic (Stroop) to more advanced ones (BART) that involve a few custom code elements.
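
Here’s a little sketch of the same idea done by hand, using PsychoPy’s TrialHandler. The column names and values are invented, and in a real experiment the list of dictionaries would normally come from data.importConditions() pointed at your .csv or .xls conditions file.

```python
from psychopy import data

conditions = [
    {'faceImage': 'face01.png', 'trustLevel': 'high'},
    {'faceImage': 'face02.png', 'trustLevel': 'low'},
]

trials = data.TrialHandler(trialList=conditions, nReps=2, method='random')
for trial in trials:
    # each column header is now a per-trial variable
    print(trial['faceImage'], trial['trustLevel'])
    trials.addData('investment', 5)  # anything you log ends up in the output file
```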

There are a couple of features that it doesn’t have which I think would be really useful – in particular, in the Builder view it would be great if individual components could be copied and pasted between different routines. I found myself adding a number of text elements, and it was a bit laborious to go through them all and change the font, size, position, etc. on each one so they were all the same. Of course, ‘proper’ programmers working in the Coder view would be able to copy/paste these things very easily…

So, I like PsychoPy; I really do. I liked it even more when I transferred my program (written on a MacBook Air running OS X 10.8) onto a creaky old Windows XP desktop and it ran absolutely perfectly, first time. Amazing! I’m having a little bit of trouble getting it running well on a Windows Vista laptop (the program runs slowly and has some odd-looking artefacts on some of the pictures) but I’m pretty sure that’s an issue with the drivers for the graphics card and can be relatively easily fixed. Of course, Vista sucks, that could be the reason too.

So, I’d recommend PsychoPy to pretty much anybody – the Builder view makes it easy for novices to get started, and the code components and Coder view mean it should keep seasoned code-warriors happy too. Plus, the holy trinity of being totally free, open-source, and cross-platform is a huge advantage. I will definitely be using it again in future projects, and recommending it to students who want to learn this kind of thing.

Happy experimenting! TTFN.

*I don’t mean to unduly knock Matlab and/or Psychtoolbox – they’re both fantastically powerful and useful for some applications.

How to pilot an experiment

I got a serious question for you: What the fuck are you doing? This is not shit for you to be messin’ with. Are you ready to hear something? I want you to see if this sounds familiar: any time you try a decent crime, you got fifty ways you’re gonna fuck up. If you think of twenty-five of them, then you’re a genius… and you ain’t no genius.
Body Heat (1981, Lawrence Kasdan)

To consult the statistician after an experiment is finished is often merely to ask him to conduct a post-mortem examination. He can perhaps say what the experiment died of.
R.A. Fisher (1938)

Don’t crash and burn your experiment.

Doing a pilot run of a new psychology experiment is vital. No matter how well you think you’ve designed and programmed your task, there are (almost) always things that you didn’t think of. Going ahead and spending a lot of time and effort collecting a set of data without running a proper pilot is (potentially) a recipe for disaster. Several times I’ve seen data-sets where there was some subtle issue with the data logging, or the counter-balancing, or something else, which meant that the results were, at best, compromised, and at worst completely useless.

All of the resultant suffering, agony, and sobbing could have been avoided by running a pilot study in the right way. It’s not sufficient to run through the experimental program a couple of times; a comprehensive test of an experiment has to include a test of the analysis as well. This is particularly true of any experiment involving methods like fMRI/MEG/EEG, where a poor design can lead to a data-set that’s essentially uninterpretable, or perhaps even un-analysable. You may think you’ve logged all the variables you’ll need for the analysis, and that your design is a work of art, but you can’t be absolutely sure unless you actually do a test of the analysis.

This might seem like overkill, or a waste of effort; however, you’re going to have to design your analysis at some point anyway, so why not do it at the beginning? Analyse your pilot data in exactly the way you’re planning on analysing your main data, save the details (using SPSS syntax, R code, SPM batch jobs – or whatever you’re using), and when you have your ‘proper’ data set, all you’ll (in theory) have to do is plug it into your existing analysis setup.
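
As an example of what I mean, something even as small as this – pandas here, but R or SPSS syntax would do just as well – written against the pilot data means the real data can later be dropped straight in. The file and column names are just placeholders.

```python
import pandas as pd

def summarise(datafile):
    """Mean investment per participant and trust condition."""
    df = pd.read_csv(datafile)
    return df.groupby(['participant', 'trustLevel'])['investment'].mean()

if __name__ == '__main__':
    print(summarise('pilot_data.csv'))   # later: summarise('real_data.csv')
```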

These are the steps I normally go through when getting a new experiment up and running. Not all of them will be appropriate for every experiment; your mileage may vary, etc.

1. Test the stimulus program. Run through it a couple of times yourself, and get a friend/colleague to do it once too, and ask for feedback. Make sure it looks like it’s doing what you think it should be doing.

2. Check the timing of the stimulus program. This is almost always essential for an fMRI experiment, but may not be desperately important for some kinds of behavioural studies. Run through it with a stopwatch (the stopwatch function on your phone is probably accurate enough). If you’re doing any kind of experiment involving rapid presentation of stimuli (visual masking, RSVP paradigms), you’ll want to do some more extensive testing to make sure your stimuli are being presented in the way that you think – this might involve plugging a light-sensitive diode into an oscilloscope, sticking it to your monitor with a bit of blu-tack, and measuring the waveforms produced by your stimuli. For fMRI experiments the timing is critical. Even though the Haemodynamic Response Function (HRF) is slow (and somewhat variable), you’re almost always fighting to pull enough signal out of the noise, so why introduce more? A cumulative error of only a few tens of milliseconds per trial can mean that your experiment is a few seconds out by the end of a 10-minute scan, which means that your model regressors will be way off and your results will likely suck.*
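
If you’re using PsychoPy, a crude first-pass check on frame timing takes only a few lines – flip a window a few hundred times and look at the inter-flip intervals, which should sit tightly around your monitor’s refresh period. A stopwatch or photodiode is still the gold standard, mind; the window size here is arbitrary.

```python
from psychopy import visual, core

win = visual.Window(size=(800, 600), fullscr=False)
clock = core.Clock()

times = []
for _ in range(300):
    win.flip()                    # one flip per screen refresh
    times.append(clock.getTime())
win.close()

intervals = [b - a for a, b in zip(times, times[1:])]
print('mean interval: %.4f s, worst: %.4f s'
      % (sum(intervals) / len(intervals), max(intervals)))
```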

3. Look at the behavioural data files. I don’t mean do the analysis (yet), I mean just look at the data. First make sure all the variables you want logged are actually there, then dump it into Excel and get busy with the sort function. For instance, if you have 40 trials and 20 stimuli (each presented twice) make sure that each one really is being presented twice, and not some of them once, and some of them three times; sorting by the stimulus ID should make it instantly clear what’s going on. Make sure the correct responses and any errors are being logged correctly. Make sure the counter-balancing is working correctly by sorting on appropriate variables.
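
If Excel isn’t your thing, the same checks take a few lines of pandas – the file and column names below are just examples of what your log might contain.

```python
import pandas as pd

df = pd.read_csv('pilot_log.csv')

print(df.columns.tolist())              # are all the variables actually being logged?
print(df['stimulusID'].value_counts())  # is each stimulus presented the right number of times?
print(df.groupby('condition').size())   # counterbalancing: equal trials per condition?
print(df['response'].isna().sum(), 'trials with a missing response')
```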

4. Do the analysis. Really do it. You’re obviously not looking for any significant results from the data; you’re just trying to validate your analysis pipeline and make sure you have all the information you need to do the stats. For fMRI experiments, look at your design matrix to check that it makes sense and that you’re not getting warnings about non-orthogonality of the regressors from the software. For fMRI data using visual stimuli, you could look at some basic effects (e.g. all stimuli > baseline) to make sure you get activity in the visual cortex. Button-pushing responses should also be visible as activity in the motor cortex, even in a single subject – these kinds of sanity checks can be a good indicator of data quality. If you really want to be punctilious, bang it through a quick ICA routine and see if you get a) component(s) that look stimulus-related, b) something that looks like the default-mode network, and c) any suspiciously nasty-looking noise components (a and b = good, c = bad, obviously).
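
One quick-and-dirty check you can do by hand is to build your regressors, convolve them with a rough canonical HRF, and look at the correlation between them – numpy and scipy are enough for this. The TR, onsets, and durations below are made up; your analysis package will do all this properly anyway, but it’s a handy sanity check before anyone gets near the scanner.

```python
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 300
t = np.arange(n_scans) * TR

def hrf(x):
    # rough canonical double-gamma HRF
    return gamma.pdf(x, 6) - gamma.pdf(x, 16) / 6.0

def regressor(onsets, duration=1.0):
    boxcar = np.zeros(n_scans)
    for onset in onsets:
        boxcar[(t >= onset) & (t < onset + duration)] = 1
    return np.convolve(boxcar, hrf(np.arange(0, 32, TR)))[:n_scans]

cond_a = regressor(np.arange(10, 580, 40))  # invented onset times (seconds)
cond_b = regressor(np.arange(30, 580, 40))
print('correlation between regressors: %.3f' % np.corrcoef(cond_a, cond_b)[0, 1])
```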

5. After all that, the rest is easy. Collect your proper set of data, analyse it using the routines you developed in point 4. above, write it up, and then send it to Nature.

And that, ladeez and gennulmen, is how to do it. Doing a proper pilot can only save you time and stress in the long run, and you can go ahead with your experiment in the certain knowledge that you’ve done everything in your power to make sure your data is as good as it can possibly be. Of course, it still might be total and utter crap, but that’ll probably be your participants’ fault, not yours.

Happy piloting! TTFN.

*Making sure your responses are being logged with a reasonable level of accuracy is also pretty important for many experiments, although this is a little harder to objectively verify. Hopefully if you’re using some reasonably well-validated piece of software and decent response device you shouldn’t have too many problems.

More useful links… Open Sesame, the psychology of email, Inkscape, and others.

Another quickie post (it’s been ages since I’ve written anything substantive, I know; bear with me just a little while longer…) with some links-of-interest for you.

First up is Open Sesame – this is an experiment-builder application with a nice graphical front-end, which also supports scripting in Python – nice. Looks like a possible alternative to PsychoPy with a fair few similar features. Also, it’s cross-platform, open-source and free – my three favourite things!

Next up is Inkscape – this is a free vector graphics editor (or drawing package), with similar features to Adobe Illustrator or Corel Draw. I tend to use Adobe Illustrator for a few specialised tasks, such as making posters for conferences, and this looks like a potentially really good free alternative.

Neuroimaging Made Easy is a blog I found a while ago that I’ve been meaning to share; it’s mostly a collection of tips and downloadable scripts to accomplish fairly specific tasks. They’re all pretty much optimised for Mac users (using AppleScript) and people who use BrainVoyager or FSL for their neuroimaging – SPM users are likely to be disappointed here (but they’re pretty used to that anyway, right?! Heh…). Really worth digging through the previous posts if you fall in the right segments of that Venn diagram though – I’ve been using a couple of their scripts for a while now.

Penultimately, I thought this recent article on Mind Hacks was really terrific – titled: “Psychological self-defence for the age of email”. It covers several relevant psychological principles and shows how they can be used to better cope with the onslaught of e-mail that many of us are often buried under.

Lastly, I hope you’ll pardon a modicum of self-promotion, but I recently did an interview over Skype with the lovely Ben Thomas of http://the-connectome.com/. Unfortunately the Skype connection between London and Los Angeles was less than perfect, which meant he couldn’t put it up as a podcast, but he heroically transcribed it instead – if you are so inclined, you can read it here.

TTFN.

Some mild pimpage about the Channel 4 program on MDMA: Drugs Live

So, there’s been a bit of press recently about an upcoming (UK) Channel 4 program called Drugs Live. The show will be broadcast next week, on Wednesday and Thursday (that’s the 26th and 27th of September) at 10pm. The reason I’m mentioning it here is that for the last nine months or so I’ve been heavily involved in an experiment which has involved MRI-scanning volunteers while they’re under the influence of a dose of MDMA (commonly known as ecstasy), and this is what the program will substantially focus on. I’ve been a collaborator on the project, helping out with bits of task-programming, scanning and analysis of data, but the real stars are the project leaders Prof. David Nutt, Prof. Val Curran and Dr Robin Carhart-Harris. I do have to admit to a little ‘squee!’ of excitement when I saw this article on the Guardian website (that’s me in the picture! On the left! Squeee!).

So… if you’re in the UK, be sure to tune in next Wednesday/Thursday for the program. There’ll be a live panel discussion hosted by the always interestingly be-socked-and-tied Jon Snow of Channel 4 news, presentation of some of the results from the experiments and ooh… all kinds of other interesting things. Also, there was a fascinating edition of the (always excellent) BBC radio program ‘The Life Scientific’, with Jim Al-Khalili interviewing David Nutt, where he talks about the current research at one point; for anyone interested, it’s well worth a listen. Available on the BBC iPlayer here.

For those outside the UK – you may well be out of luck, I’ve no idea if the program will ever be ‘properly’ broadcast anywhere else. Some altruistic soul might record it and put it up on a torrent site I suppose, but I certainly couldn’t endorse anyone downloading it from an illegal source (*cough*).

More UK press:

The BBC

The Mirror

The Metro (Can’t believe something I’m involved in is in the Metro – this is the absolute pinnacle of my scientific career – it’s all downhill from here.)

Wired (This is a cool article with some other fun videos of people taking drugs on camera.)

Mixmag (Yes! Mixmag! Hahahahaha… *dies laughing*)

And for the sake of balance, here’s a fairly negative take from The Evening Standard (Headline ‘Are they raving mad?’ Good one guys. How long did it take you to come up with that?)

And finally, the Channel 4 trailer for the program:

So… Channel 4 are obviously taking it very seriously and not sensationalising it at all. *Sigh* Don’t forget – next Wednesday/Thursday! 10pm! Channel 4! Be there, or be… I dunno… in the pub?

Oh, and if anyone wants to update my IMDB page for me after the program, that’d be great. Ta.

Bye for now, my lovelies *air kiss, flounces off*.

A snarky rant about grammar – guest post on another blog.

This is pretty much way off-topic, but I’ve just written a guest post on a blog for Longridge Editors. It’s a somewhat ill-tempered rant about grammar, and particularly about the rash of ‘Top 10 grammar mistakes’ blog posts which I’ve been seeing lately. You can read it here.

ThangyewthangyewI’mhereallweektrytheveal.

A psychological analysis of problems in powerpoint presentations.

A God-awful powerpoint slide I found on the web. Don’t do this. Ever.

Powerpoint (or I guess Keynote, if you’re super-cool) presentations – love ’em or loathe ’em, they have become an integral part of the academic and business world. I can’t really imagine doing a lecture or talk without using powerpoint in at least some small way these days. However, there’s nothing worse than a bad powerpoint presentation – we’ve all seen them. The colours are garish and clashing, the text is illegible, the organisation is incoherent, and the illustrations are irrelevant or actively misleading. How can we avoid these mistakes in our own presentations, and ensure that we craft a well-structured, pleasant-looking presentation which will add to the impact of what we say, rather than detract from it?

A quick Google of ‘how to make a great powerpoint‘ brings up 144 million pages (including, interestingly, one from Microsoft itself), many of which contain conflicting information (I assume they do, anyway; I haven’t read them all). Fear not though, gentle reader; the inimitable Stephen Kosslyn (and colleagues) of Stanford University has just published a paper with the intriguing title of “PowerPoint® presentation flaws and failures: a psychological analysis”, in which the common flaws in presentations are deconstructed with an eye to the psychological principles of effective communication. This is great, because it not only points out what’s often wrong with slides, but also gives some clues as to why these things are wrong. You can read the paper here (free HTML full text – yay!) or download a PDF from the link on the top right.

Kosslyn et al.’s analysis is based on “Eight cognitive communication principles”:

  • Discriminability
  • Perceptual organisation
  • Salience
  • Limited capacity (of working memory)
  • Informative change
  • Appropriate knowledge
  • Compatibility
  • Relevance

…and it’s proposed that optimising presentations in terms of these cognitive principles will produce greater engagement, understanding, and retention of the material by the audience. The authors then followed up this fairly abstract classification with a series of three studies, rating real-world slideshows from various domains (academic, business, governmental) on sub-units of these eight features. These showed that flaws are noticeable and annoying to the audience, but also that people often have difficulty identifying the exact flaw in a given slide.

The results suggest that adherence to good practice when designing slides is important, but that a lot of people’s intuitions about what makes a good powerpoint are themselves flawed. Some people may have an ‘eye’ for good, clean design, whereas others might not be able to avoid making some obvious mistakes. I won’t repeat any more of the paper’s results here, but I urge anyone who relies on powerpoint to go and read the paper and assimilate its findings into their next presentation.

TTFN.

PS. Another excellent write-up of this paper is here.

A primer on digital audio from Engadget

Another very quick link-out post I’m afraid, ladies and gentlemen – all this damn pushing back of the boundaries of science I’ve been doing lately has left me absolutely no time at all. Plus, have you ever tried pushing back a boundary? It’s bloody exhausting.

Anyway, following on from my recent post about the auditory gorillas experiment, where I talked a little about audio editing, I spotted a fantastic little piece over on Engadget about the basics of digital audio, covering things like sampling rates, bit-rates, file formats and loads of other useful/mildly-nerdy stuff. In fact, their whole ‘Primed’ series (where they dissect common bits of computer technology from first principles) is well worth checking out.

Have a good weekend!