Blog Archives

Towards open-source psychology research

A couple of interesting things have come along recently which have got me thinking about the ways in which research is conducted, and how software is used, in psychology research.

The first is some recent publicity around the Many Labs replication project – a fantastic effort to perform replications of some key psychological effects, with large samples, in labs spread around the world. Ed Yong has written a really great piece on it here for those who are interested. The Many Labs project is hosted on the Open Science Framework – a free service for archiving and sharing research materials (data, experimental designs, papers, whatever).

The second is a recent paper by Tom Stafford and Mike Dewar in Psychological Science. This is a really impressive piece of research, using data from a very large sample of participants (854,064!) who played an online game. Data from the game were analysed to provide metrics of perception, attention and motor skills, and to see how these improved with training (i.e. more time spent playing the game). The original paper is here (paywalled, unfortunately), but Tom has also written about it on the Mind Hacks site and on his academic blog. The latter piece is interesting (for me, anyway) as Tom says he found his normal approach to analysis just wouldn't work with a dataset this large, and he was obliged to learn Python in order to analyse the data. Python FTW!
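(For the curious: here's a rough sketch – emphatically not the authors' actual code, which lives in the GitHub repository mentioned below – of the kind of chunked, aggregate-as-you-go approach in Python/pandas that copes with a dataset of this size. The column names are made up purely for illustration.)

import pandas as pd

# Read the game data in manageable chunks rather than all at once.
# 'plays' and 'score' are hypothetical column names.
totals, counts = {}, {}
for chunk in pd.read_csv("game_data.csv", chunksize=100_000):
    grouped = chunk.groupby("plays")["score"]
    for plays, s in grouped.sum().items():
        totals[plays] = totals.get(plays, 0) + s
    for plays, n in grouped.count().items():
        counts[plays] = counts.get(plays, 0) + n

# Mean score as a function of amount of practice - a crude learning curve.
learning_curve = {p: totals[p] / counts[p] for p in totals}
print(sorted(learning_curve.items())[:10])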

Anyway, the other really nice thing about this piece of work is that the authors have made all the data, and the code used to analyse it, publicly available in a GitHub repository here. This is a great thing to do, particularly for a large, probably very rich dataset like this – there are potentially a lot of other analyses that could be run on these data, and making them available enables other researchers to do exactly that.

These two things crystallised an important realisation for me: it's now possible, and I would even argue preferable, for the majority of not-particularly-technically-minded psychology researchers to perform their research in a completely open manner. Solid, free, user-friendly, cross-platform software now exists to facilitate pretty much every stage of the research process, from conception to analysis.

Some examples: PsychoPy is (in my opinion) one of the best pieces of experiment-building software around at the moment, and it's completely free, cross-platform, and open-source. The R language for statistical computing is getting to be extremely popular, and is likewise free, cross-platform, etc. For analysis of neuroimaging studies there are several open-source options, including FSL and Nipype. It's not hard to envision a scenario where researchers who use these kinds of tools upload all their experimental files (stimulus programs, resulting data files, and analysis code) to GitHub or a similar service. This would enable anyone else in the world with suitable (now utterly ubiquitous) hardware to perform a near-as-dammit exact replication of the experiment, or (more likely) tweak it in an interesting way, with minimal effort, in order to run their own version. This could really help accelerate the pace of research, and the problem of poorly-described and ambiguous methods sections would become a thing of the past, as anyone who was interested could simply download and demo the experiment themselves in order to understand what was done. There are some issues with uploading very large datasets (e.g. fMRI or MEG data), but data-sharing initiatives are springing up, and the problem seems like it should be a very tractable one.
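To give a sense of just how little there is to share, here's a minimal, illustrative PsychoPy-style script for a single reaction-time trial – not any real experiment, just the sort of small, self-contained file that could sit in a public repository next to its data and analysis code:

from psychopy import visual, core, event
import csv, random

# A toy single-trial experiment: fixation, then a lateralised target,
# then a left/right keypress with reaction time, written to a CSV file.
win = visual.Window(size=(800, 600), color="grey", units="pix")
fixation = visual.TextStim(win, text="+")
target = visual.TextStim(win, text="X", pos=(random.choice([-200, 200]), 0))

fixation.draw()
win.flip()
core.wait(0.5)

target.draw()
win.flip()
clock = core.Clock()
keys = event.waitKeys(keyList=["left", "right"], timeStamped=clock)

with open("data.csv", "a", newline="") as f:
    csv.writer(f).writerow([target.pos[0], keys[0][0], keys[0][1]])

win.close()
core.quit()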

The benefit for researchers should hopefully be greater visibility and awareness of their work (however you want to index that: citations, downloads, page-views, etc.). Clearly some researchers (like the authors of the above-mentioned paper) have taken the initiative and are already doing this kind of thing. They should be applauded for taking the lead, but they'll likely remain a minority unless researchers can be persuaded that this is a good idea. One obvious prod would be if journals started encouraging (or requiring) this kind of open sharing of data and code as a condition of publication.

One of the general tenets of the open-source movement – that open software benefits everyone, including the developers – applies doubly to open science. I look forward to a time when the majority of research code, data, and results are made public in this way and the research community as a whole can benefit from it.

MakeHuman – free, open-source software for 3D-modelling of humans

It’s always exciting when you find a new piece of cool software to play with, and even more so when what you’ve found is totally free, open-source, and available on all platforms. So it is with MakeHuman – an utterly awesome bit of kit. I wrote a piece before about FaceGen, which is also pretty cool, but MakeHuman takes it to the next level, by modelling all kinds of body characteristics as well as faces, and, of course, doing it all for free.

I’ve just downloaded it and played with it for a few minutes, but I’m already impressed by the range of options available. Through a very simple slider and radio-button based interface you have very fine control over all kinds of variables, including gender, weight, age, height, and many more, with endless fine tweak-ability possible of body and face if you dig through the options. There are also basic libraries of clothes and poses included. Here’s a well, a human, I made in just a couple of minutes:

[Screenshot: the full-body model]

And here’s a close-up of the face, after I added some hair and gave him a nasty expression:

[Screenshot: close-up of the model's face]

Pretty cool indeed. This could be a massively useful tool for people interested in face/body perception – using it, one could very easily and quickly generate a large number of highly-controlled experimental stimuli that differ in just one aspect (say, weight, or race… whatever). Download it and have a play around!

Website of the week: Cogsci.nl. OpenSesame, illusions, online experiments, and more.

A quick post to point you towards a great website with a lot of really cool content (if you're into that kind of thing, which, if you're reading this blog, I assume you probably are… anyway, I digress; I apologise, it was my lab's Christmas party last night and I'm in a somewhat rambling mood. Anyway, back to the point).

So, the website is called cogsci.nl, and is run by Sebastiaan Mathôt, a post-doc at the University of Aix-Marseille. It's notable in that it's the homepage of OpenSesame – a very nice-looking, Python-based graphical experiment builder that I've mentioned before on these very pages. There's a lot of other cool stuff on the site though, including more software (featuring a really cool online tool for instantly creating Gabor patch stimuli), a list of links to stimulus sets, and a selection of really cool optical illusions. It's well worth spending 20 minutes poking around a little and seeing what's there.

I’ll leave you with a video of Sebastiaan demonstrating an experimental program, written in his OpenSesame system, running on a Google Nexus 7 Tablet (using Ubuntu linux as an OS). The future! It’s here!

AutoHotKey – create custom macros and remap your computer inputs

A brief post about a fantastically useful little utility – AutoHotKey. This is a small, free, very flexible and powerful program for Windows with potentially unlimited usefulness. It allows the user to either a) define a string of mouse/keyboard inputs (i.e. a macro) which can be triggered with a single key-press, or b) re-map a particular keyboard/mouse/whatever input to act as if it were any other kind of input. All this can be achieved either with a fairly simple scripting language, or by using AutoScriptWriter – a tool for 'recording' a sequence of inputs that can then be 'played back' at much faster rates.

I use it for some of my fMRI stimulus programs. The MRI scanner sends out a TTL-like pulse at the beginning of every functional volume acquisition (or ‘TR’) and this can be used to synchronise with external equipment – in fMRI it’s important to know exactly when your stimuli were presented (relative to the volume acquisition sequence) in order to generate an accurate statistical model. The pulse from the scanner goes into a little USB adapter doo-hickey, which is plugged into the computer running the stimulus program – the USB box simulates a joystick button-press every time it receives a scanner pulse. This works great, except that when I’m writing my programs in my office, I don’t have a joystick, and also sometimes I want to start my programs manually (for demo purposes, or whatever). The solution? I write all my programs to start with a left-mouse click, and run an AutoHotKey script on the stimulus-program computer in the scanner room which transforms the game-port input to a left-mouse-click. This is the script in its entirety:

Joy1::Send {LButton}

Simple, huh? In this way, my programs can start either with a left-mouse-click or when they receive the input from the scanner on the game-port, and I don't have to change them at all when I move them from my office machine to the scanner-room computer. It works perfectly. There's loads of good help, advice and sample scripts on the AHK website/forums – a quick search will usually bring up something at least somewhat related to what you want to achieve.
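For anyone wondering what the other end looks like, here's a minimal sketch (in PsychoPy, but the same idea works in any package that can poll the mouse) of a stimulus program that simply waits for a left click before starting – which, thanks to the AHK remap above, is also how it starts in the scanner:

from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color="black", units="pix")
msg = visual.TextStim(win, text="Waiting for scanner (or click to start)...")
msg.draw()
win.flip()

# Block until the left mouse button is pressed. In the scanner room the
# AutoHotKey script converts the scanner's joystick pulse into this click.
mouse = event.Mouse(win=win)
while not mouse.getPressed()[0]:   # index 0 = left button
    core.wait(0.001)

start_time = core.getTime()        # all stimulus timing is measured from here
# ...present stimuli here...
win.close()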

It’s also possible to play a very mean prank on somebody by running a script on their computer re-mapping several (or even all) of their keyboard keys to other random keys, but the less said about that the better…

Mac users are out of luck with AutoHotKey, I'm afraid – it's Windows-only. However, the built-in Automator application in Mac OS X has a lot of the same macro-like functionality, and is pretty easy to use once you get the hang of it. For the input re-mapping side of things, IronAHK looks like a good (and very powerful) free option, and KeyRemap4MacBook also looks good – less powerful, but with a much more user-friendly interface.

Happy key-hacking! If you find a good use for AHK, then let me know in the comments. TTFN.

The effects of hardware, software, and operating system on brain imaging results

A recent paper (Gronenschild et al., 2012) has caused a modicum of concern amongst neuroimaging researchers. The paper documents a set of results based on analysis of anatomical MRI images using a popular free software tool called FreeSurfer, and essentially reports that there are (sometimes quite substantial) differences in the results it produces, depending on the exact version of the software used, and on whether the analyses were carried out on a Mac (running OS X) or a Hewlett-Packard PC (running Linux). In fact, even the exact version of OS X on the Mac systems was shown to matter for replicating results precisely.

Figure 3 of Gronenschild et al. (2012) showing the effect of different versions of FreeSurfer on obtained grey-matter volume results. Percentage scale at the top, p-values on the bottom.

The fact that results differ from one version of FreeSurfer to another is perhaps not so surprising – after all, we expect newer versions of software to be 'improved' in important ways; otherwise, what would be the point in releasing them? However, the fact that results differ between operating systems is a little more worrying – in theory, any operating system capable of running the software should produce the same result. The authors' recommendations are that 1) researchers should not switch from one version/operating system/platform to another in the middle of a research project, and 2) when reporting results, the software version numbers and the workstation/OS used should all be documented. This seems broadly sensible.
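In that spirit, it costs almost nothing to have your analysis scripts record this information for you. Here's a small, illustrative Python snippet (not from the paper – just one way of doing it) that writes the platform and package versions out alongside the results:

import platform, sys, json

# Capture the analysis environment so it can be reported in the methods
# section, rather than reconstructed from memory months later.
env = {
    "python": sys.version,
    "os": platform.platform(),       # OS name and version
    "machine": platform.machine(),   # CPU architecture
    "node": platform.node(),         # workstation name
}

# Record the versions of whichever analysis packages were actually used.
for pkg in ("numpy", "scipy", "nibabel"):
    try:
        env[pkg] = __import__(pkg).__version__
    except ImportError:
        env[pkg] = "not installed"

with open("analysis_environment.json", "w") as f:
    json.dump(env, f, indent=2)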

It got me thinking about neuroimaging software more generally, though. In general, people don't do detailed evaluations of software of the kind reported by Gronenschild et al. (2012). As an enthusiastic user of several fMRI-related packages (I'm currently using SPM, FSL and BrainVoyager, all on different projects) I've often wondered what the real differences are between them, in terms of the results they produce. Given how many people around the world use brain imaging software, you might think that some detailed evaluations would be floating around, but in fact there are very few.

I think there are several reasons for this:

1. It’s (perhaps understandably) regarded as a waste of time. After all, we (meaning researchers who use this software) are generally more interested in how the brain works, than by how software works. Neuroimaging is difficult and time-consuming and we all need to publish papers to survive – it makes more sense to spend our time on ‘real’ brain-related research.

2. Most people have one (or at most two) pieces of software that they like to use for neuroimaging, and they stick with them; I'm somewhat unusual in this respect. The fact that most people use just one package more-or-less exclusively means there's a dearth of people who actually have the skills necessary to do cross-evaluation of packages. Again, this is understandable – why take the time to learn a new system if you're happy with the one you're using?

3. The differences between the packages make precise comparison of end-results difficult. Even though all the packages use the General Linear Model (GLM) for basic analysis, differences elsewhere – particularly in pre-processing – conceivably play a role. For instance, FSL handles the spatial transformation of functional data somewhat differently from the other packages.
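To make concrete what that shared GLM core amounts to – stripped of each package's particular pre-processing, haemodynamic modelling and correction choices, which is precisely where they differ – here's a toy sketch of fitting a single voxel's time series, using made-up data:

import numpy as np

# Toy GLM for one voxel: least-squares fit of a design matrix to a time
# series, plus a t-statistic for the task regressor. Purely illustrative -
# no HRF convolution, no filtering, no autocorrelation correction.
n_scans = 200
rng = np.random.default_rng(0)

task = np.tile([1.0] * 10 + [0.0] * 10, n_scans // 20)   # boxcar task regressor
X = np.column_stack([task, np.ones(n_scans)])             # design matrix

y = 2.0 * task + rng.normal(0, 1, n_scans)                # fake voxel time series

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
dof = n_scans - X.shape[1]
sigma2 = residuals @ residuals / dof

contrast = np.array([1.0, 0.0])                           # test the task beta
se = np.sqrt(sigma2 * contrast @ np.linalg.inv(X.T @ X) @ contrast)
t_stat = (contrast @ beta) / se
print(f"beta = {beta[0]:.2f}, t({dof}) = {t_stat:.2f}")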

Having said that, there have been a few papers which have tried to do this kind of evaluation. Two examples are here (on motion correction) and here (on segmentation). Another somewhat instructive paper is this one, which summarises the results of a functional-imaging analysis contest held as part of the Human Brain Mapping meeting in Toronto in 2005; developers of popular neuroimaging packages were all given the same set of data and asked to analyse it as best they could. Interesting stuff, but as the contestants all used somewhat different methods to get the most out of the data, it's hard to draw direct comparisons.

If there’s a moral to this story, it’s that (as the recent Gronenschild et al. paper demonstrates) we need to pay close attention to this kind of thing. As responsible researchers we cannot simply assume our results will be replicable with different hardware and software, and detailed reporting of not just the analysis procedures, but also the tools used to achieve the results seems a simple and robust way of at least acknowledging the issue and enabling more precise replicability. Actually solving the issues involved is a substantially more difficult problem, and may be a job for future generations of researchers and developers.

See also:
My previous posts on comparisons of different fMRI software: here, here and here.
Neuroskeptic has also written a short piece on the recent paper mentioned above.

TTFN.

Behavioural/Experimental software for psychology… A teaser.

When I started this blog, one of the main reasons for doing so was to talk about how to program and run psychology experiments. I’ve made a couple of low-level forays into those areas in the past, but I’ve always intended to put up some reviews, handy hints, and maybe even some completed programs related to particular pieces of specialised experimental software.

Unfortunately, this post is not going to do that. I started aimlessly browsing a load of websites this morning looking at the options available for this kind of software, and quickly realised that a) I needed to do a lot more reading and work if I was going to write anything that could hope to be even moderately comprehensive, and b) some really rather good sites already exist that can serve as an introduction to this sort of thing.

For instance, as a starting point, you could do a lot worse than this Wikipedia page, which lists a bunch of the better-known behavioural software packages and includes some helpful information about platforms, interfaces, and cost. This little snippet of a page on the Cambridge MRC-CBU website is also of interest, as it shows the results of a survey of researchers and the packages they use (it's quite old, though; 2006).

Lastly, I urge you to check out this heroically comprehensive collection of information and links curated by Hans Strasburger, who works at the universities of München and Göttingen. There is an awful lot to digest on this web-page, but it's packed full of solid-gold nuggets of greatness. It's mostly skewed towards visual psychophysics-type experimentation, but there's plenty of value here for any kind of psychology researcher.

At some point, I’ll do a ‘proper’ post (or more likely, series) on experimental software with reviews, examples etc., but these links should keep you busy enough until then.

TTFN.

I want to be a Robopsychologist

I was a bit of a weird kid. While my school-mates were running around in the playground playing football and mindlessly inflicting minor injuries on each other, I used to sit on a bench reading 1960s science fiction books.* One of my favourite authors at the time was Isaac Asimov, and I was particularly captivated by his Robot stories, and the character of Susan Calvin, with her neologistic job title of 'robopsychologist'. What Asimov astutely recognised, decades before any of it existed, was that if artificial intelligence of even a relatively simple and tightly-bounded nature were ever to be created, its behaviour would be complex, often unpredictable, and even occasionally aberrant. Susan Calvin's job, therefore, is to interpret and understand robot behaviour, and to manage and study robots' interactions with humans.

More than half a century on from the publication of Asimov's first robot stories (and, coincidentally, roughly in the time period in which Asimov set many of them), robopsychology might just be beginning to emerge as a discipline. While the robots we are currently developing are clunky simpletons compared to Asimov's capable, graceful, positronic-brained creations, the behaviour they exhibit is arguably starting to be at a level where some serious questions need to be asked and investigated. A Google Scholar search for the term 'robopsychology' turns up 22 results, most of which are spurious, but three in particular stand out. The first two are reviews by Alexander and Elena Libin, published in 2004 (PDF) and 2005 (PDF), which deal with similar themes. These authors seek to establish a set of principles by which person-robot interaction might be studied, and also present some findings derived from their use of a robotic cat with various cross-cultural and clinical populations (incidentally defining the term 'robotherapy' for apparently the first time). The third source found by Google Scholar is an unpublished MSc thesis by Diego J. Mejias‐Sanabria at UCL (PDF), which details some theoretical and experimental studies of the impact of different physical features on human-robot interaction, and particularly on the strength and type of relationship that is produced.

While clearly of great interest, this preliminary work focuses almost entirely on the human side of the equation, and has little to say about the psychology of the robot itself. This is understandable, as most robots now commonly in use, while sophisticated in many ways, are capable of only a relatively simple set of pre-programmed behaviours. As such, there is relatively little to investigate from a psychological perspective. However, all that may be about to change. The inspiration for this blog post actually came from some stunning work performed by a company named TheCorpora, using the open-source, Linux-powered Qbo robot. In the video below, Qbo learns to recognise itself in a mirror:

And in this second video, Qbo learns to distinguish a view of itself in a mirror from another Qbo robot, using a flashing light pattern on its nose. Once the other robot is recognised as a different entity, a short conversation between the two is carried out:


Needless to say, this is some seriously impressive stuff. Astute students of behaviour will recognise the setup in the videos as an example of the Mirror Test, first devised by Gordon Gallup in the 1970s, and used as one way of gauging the level of self-awareness of various animal species. The nose-flashes that Qbo uses to recognise itself are even analogous to the methods used in an elaboration of the mirror test in which odourless dye-spots are painted on animals in order to get a clearer behavioural indication of whether they respond to the mirror as an image of themselves.

This work with the Qbo robot clearly raises a whole host of questions, principal among them (to my mind at least): just what exactly is happening here? The exact interpretation of success or failure at the mirror test by a particular species or individual is a matter of open debate, and the arguments about what the results might mean quickly spin off into the realms of unfettered philosophising. Some have argued that the mirror test is unsuitable for assessing species which use odour or auditory cues more extensively than humans do; if that's the case, how suitable is it for assessing the self-awareness of a robot? It could be that Qbo is merely performing a relatively mechanical set of pre-programmed responses when confronted with the mirror; however, it could just be possible that something a lot deeper, and a lot more interesting, is going on.

The point I’m hoping to demonstrate here is that an understanding of what’s going on in these videos may require an understanding not just of the mechanics, programming and behaviour of Qbo, but fundamentally of its psychology. We need to know, essentially, what’s going on in its head when it recognises itself in the mirror. Such an understanding would doubtless be hugely valuable in driving further research and development of artificial intelligence, but could conceivably shed light on the development of consciousness and self-reflective abilities in humans and other species. For this, we obviously need robopsychologists, and with the pace of development of robot abilities and their increasing penetration into society it’s not unlikely that such needs may become pressing within the next 10-20 years. The field is currently so nascent as to be practically zygotic, and it may be some time before it produces the real equivalent of Susan Calvin, however, I have little doubt that given time, it will.

And that is why I want to be a robopsychologist. I leave you with a quote from the Master himself:

“Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today — but the core of science fiction, its essence, the concept around which it revolves, has become crucial to our salvation if we are to be saved at all.”

“My Own View”, in The Encyclopedia of Science Fiction (1978), edited by Robert Holdstock; later published in Asimov on Science Fiction (1981).

TTFN.

* The word ‘nerd’ hadn’t been invented yet, or at least wasn’t in common usage where I grew up, so I was a ‘boffin’ to my contemporaries, the heartless little cretins.

Reference Managers and Citing-While-You-Write

Another quickie link-out type post (yes, alright, I'll sit down and write a proper post sometime soon – it's been a busy few weeks, OK?), this time to a… let's say, a spirited discussion over on DrugMonkey's blog on my very favourite topic – reference management. Well, it's actually about writing and citing at the same time, but there are loads of good workflow tips in there related to referencing and reference-management software.

Once again, for those of you at the back – if you’re a student and you’re not using something like Mendeley, then you are a) making your life much harder than it needs to be, and b) a massive idiot.

Image Morphing and Psychology Research – A Case Study

As an example of the ways in which technology and psychology have developed together, I thought it would be fun to do a little case-study of a particular area of research which has benefitted from advances in computer software over recent years. Rather than talk about the very technical disciplines like brain imaging (which have of course advanced enormously recently), I thought it would be more fun to concentrate on an area of relatively 'pure' psychology, and on one of the most important and fundamental cognitive processes, present pretty much from birth: face perception.

In November 1991 Michael Jackson released the single 'Black or White', the first to be released from his eighth album, 'Dangerous'. The single is of some significance in that it marked the beginning of Jackson's descent from the firmament of music stardom into the spiral of musical mediocrity and personal weirdness which only ended with his death in 2009, but for the purposes of the present discussion it is interesting because of part of its accompanying video. Towards the end of the video, a series of people of both sexes and various ethnic groups are shown singing along with the song, and the images of their faces morph into each other in series:

Read the rest of this entry

Reference Management Software – Yes, again. It’s important.

Protoscholar recently, and very kindly, linked to my previous post on computing skills for students and made two very pertinent comments, which you can read here. The first was that I'd missed out any kind of software for doing qualitative analysis. This omission is entirely a product of my own ignorance, I'm afraid – I come from a very experimental background and know very little about qualitative research and the relevant tools available. I'm happy to link to protoscholar's article and the recommendations for qualitative software made there.

The second comment was that 'Reference Management Software' and 'Do regular backups' were too important to be filed away at the end under 'Miscellaneous'. This is absolutely right – in fact, I regard the use of reference management software as the absolute number-one top tip that every student, post-grad or academic needs to know. I notice there are already some good articles on protoscholar's site about various bits of software, so I'm linking to them here.

Just to reiterate – if you’re a student and you’re not using some kind of reference management software, you’re making your life so much more difficult than it needs to be. It doesn’t really matter which one you choose, as long as you use something!

All via protoscholar.com:

A very useful chart on different features of the most popular RM software.
A useful article on organising your research.

Zotero.
My Favourite RM tool – Mendeley.

TTFN.