Category Archives: Hardware

3D printed response box. The future. It’s here.



This may not look like much, but it’s actually pretty cool. It’s a new five-button response box being built for our MRI scanner by one of the technicians where I work. The cool thing is that the main chassis has been custom-designed and then fabricated using a polyethylene-extruding 3D printer. The micro-switches for the four fingers and thumb, and the wired connections for each, have obviously been added afterwards.

3D printing in plastic is a great way of creating hardware for use in the MRI environment, as, well… it’s plastic, and it can create almost any kind of structure you can think of. We sometimes need to build custom bits of hardware for use in some of our experiments, and previously we’d usually build things by cutting plastic sheets or blocks to the required shape, and holding them together with plastic screws. Using a 3D-printer means we can produce solid objects which are much stronger and more robust, and they can be produced much more quickly and easily too. I love living in the future.

Some notes on the use of voice keys in reaction time experiments

Cedrus SV-1 voice key device

Somebody asked me about using a voice key device the other day, and I realised it’s not something I’d ever addressed on here. A voice key is often used in experiments where you need to obtain a vocal response time, for instance in a vocal Stroop experiment, or a picture-naming task.

There are broadly two ways of doing this. The first is easy, but expensive, and not very good. The second is time-consuming, but cheap and very reliable.

The first method involves using a bit of dedicated hardware, essentially a microphone pre-amp, which detects the onset of a vocal response and sends out a signal when it occurs. The Cedrus SV-1 device pictured above is a good example. This is easy, because all your vocal reaction times are logged for you, but not totally reliable: you have to pre-set a loudness threshold for the box, and it might miss responses if the person talks quietly, or if there’s some unexpected background noise. It should be relatively simple to get whatever stimulus software you’re running to recognise the input from the device and log it as a response.

The other way is very simple to set up, in that you just plug a microphone into the sound card of your stimulus computer and record the vocal responses on each trial as .wav files. Stimulus software like PsychoPy can do this very easily. The downside to this is that you then have to take those sound files and examine them in some way in order to get the reaction time data out – this could mean literally examining the waveforms for each trial in a sound editor (such as Audacity), putting markers on the start of the speech manually, and calculating vocal RTs relative to the start of the file/trial. This is very reliable and precise, but obviously reasonably time-consuming. Manually putting markers on sound files is still the ‘gold standard’ for voice-onset reaction times. Ideally, you should get someone else to do this for you, so they’ll be ‘blind’ to which trials are which, and unbiased in calculating the reaction times. You can also possibly automate the process using a bit of software called SayWhen (paper here).
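To illustrate the basic idea behind automating this, here’s a very simplified sketch of an amplitude-based onset finder, operating on the (normalised) samples from one trial’s recording. The function name and threshold values are my own inventions for illustration; real tools like SayWhen use far more robust detection than a simple RMS threshold:

```python
import math

def voice_onset_ms(samples, sample_rate, rms_threshold=0.05, win_ms=10):
    """Return the time (in ms) of the first short window whose RMS
    amplitude exceeds the threshold, or None if the trial is silent.
    Samples are assumed to be floats normalised to -1.0..1.0."""
    win = max(1, int(sample_rate * win_ms / 1000))
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        rms = math.sqrt(sum(s * s for s in chunk) / win)
        if rms >= rms_threshold:
            return start * 1000.0 / sample_rate
    return None

# Synthetic trial: 500 ms of silence, then a 220 Hz 'voice' burst.
sr = 44100
trial = [0.0] * int(0.5 * sr) + \
        [0.5 * math.sin(2 * math.pi * 220 * t / sr) for t in range(int(0.2 * sr))]
print(voice_onset_ms(trial, sr))  # → 500.0
```

In practice you would pull the samples out of each trial’s .wav file, and you would still want to eyeball (or listen to) a sample of trials to sanity-check the detected onsets against the gold-standard manual markers.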

Example of a speech waveform, viewed in Audacity


Which method is best depends largely on the number of trials you have in your experiment. The second method is definitely superior (and cheaper, easier to set up) but if you have eleventy-billion trials in your experiment, manually examining them all post hoc may not be very practical, and a more automatic solution might be worthwhile. If you were really clever you could try and do both at once – have two computers set up, the first running the stimulus program, and the second recording the voice responses, but also running a bit of code that signals the first computer when it detects a voice onset. Might be tricky to set up and get working, but once it was, you’d have all your RTs logged automatically on the first computer, plus the .wav files recorded on the second for post hoc analysis/data-cleaning/error-checking etc. if necessary.
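The second computer’s job in that hybrid setup can be sketched in a few lines (hypothetical names throughout; real audio capture would use a library such as PyAudio, and the ‘signal’ would be a serial, parallel-port or network trigger to the stimulus machine rather than a callback):

```python
import math

def watch_for_onset(chunks, sample_rate, on_onset, rms_threshold=0.05):
    """Process audio in small chunks as they arrive from the sound card;
    fire the callback once, with the elapsed time in ms, when the RMS
    of a chunk first crosses the threshold."""
    elapsed = 0  # samples seen so far
    for chunk in chunks:
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        if rms >= rms_threshold:
            # In the real setup this is where you'd send the trigger
            # to the stimulus computer.
            on_onset(elapsed * 1000.0 / sample_rate)
            return
        elapsed += len(chunk)

# Simulated stream: ten silent 100-sample chunks, then a loud one.
sr = 44100
stream = [[0.0] * 100 for _ in range(10)] + [[0.3] * 100]
watch_for_onset(stream, sr, lambda ms: print(f"onset at {ms:.1f} ms"))  # → onset at 22.7 ms
```

Note the chunk size trades off latency against false alarms: smaller chunks mean faster triggers, but single noisy samples have more influence.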

Happy vocalising!


Two researchers have pointed out in the comments that a system for automatically generating response times from sound files already exists, called CheckVocal. It seems to be designed to work with the DMDX experimental programming system (free software that uses Microsoft’s DirectX system to present stimuli). I’m not sure whether it’ll work with other systems, but it’s worth looking at… I’ve also added the information to my Links page.

Dr MacLove, or How I Learned to Stop Worrying and Love Apple

Back in the 90s it was easy: if you were a graphic designer, or some kind of proto-hipster with a trust fund, you used a Mac. Everyone else used a PC. Then in the 2000s Apple started making iThings, everyone started going absolutely batshit crazy over them, and suddenly Macs were everywhere as well.

I’ve used both in parallel since about 2003 – I started off with a G5 power mac as a desktop complemented by a Windows laptop, but that’s now reversed, with a Windows 7 PC on my desk at work and a MacBook Air. This shift was significant – the desktop is what’s provided to me by my job; the laptop is my personal computer, what I choose to buy for myself. Despite using OS X since 2003, I only really started liking it when I got my first Apple laptop – a 2009 MacBook Pro. This was also around the time that I got an iPhone 3G, which seemed like some incredibly advanced artefact from the future compared to the chunky ‘smartphone’ I was using before, which ran Windows Mobile 6.5 – an unbelievably awful OS that I could never get to work as it should. I’ve since swapped the Pro for a 2012 MacBook Air, bought an iPad mini, and am on my third iPhone, so my conversion is pretty much complete. I’ve looked at Android phones and tablets, honestly, I have. Some of them are very nice, but the OS just always seems too… busy. Maybe it’s my age, but I just want something I can pick up and use without a massive learning curve. I’m happy to stand up and say I’m an Apple guy; it took a while, but I’m finally actually OK with that.

It took a while, but I’ve now found Mac versions or fairly close equivalents for all the software I used on my PC. At first I sometimes used to boot into Windows using bootcamp to use a couple of applications, but I deleted the partition a while ago – I just wasn’t using it anymore. I probably won’t be spending money on any Windows machines for the foreseeable future. I know that Mac vs. Windows is one of the most hackneyed, pointless and bitter debates on the entire internet, but I just couldn’t resist setting my own bit of troll-bait out. Here, then, are the major reasons I became a Mac convert – your mileage may vary, personal opinions only, blah blah.

The MacBook Air
The Air is the machine that kicked off the ultrabook trend and, to my mind, PC manufacturers have yet to equal the Air’s amazing combination of power, usability and portability. My 2012 model is greased-lightning-off-a-shovel fast – it chews through a set of fMRI pre-processing twice as fast as my old MB Pro, and that was no slouch either. The 2013 models are even faster, with better graphics and a frankly ridiculous 12-hour battery life. If you can live with a relatively small (128/256GB) amount of storage, it’s a peach of a machine. Plus, I can carry it around all day and barely even notice the weight. For my money, the Air is the best value computer out there – I don’t think the step-up in performance you get with the Pro is worth the price, personally.

The Apple Trackpad
Using the trackpad on a Windows laptop feels like going back to the stone age once you’re used to the fantastic set of multi-touch gestures on an Apple laptop. I’ve never found one on a PC that even comes close.

Migration Assistant
Remember the excitement of getting a new computer, and then the agony of re-installing all your applications and tweaking the system to get it the way you like it? That pain doesn’t exist for me anymore. Apple’s Migration Assistant lets you make a Time Machine back-up of your old computer on an external drive, plug that into the new one, and everything is reproduced: your applications, desktop, OS settings, bookmarks, everything. It’s awesome.

Virtual desktops
OS X’s system of virtual desktops is brilliant, and essential for me now that I’ve got used to it; flipping between desktops with ctrl+left/right arrow keys is fast and smooth, and means you can really extend the limits of what can be done on a 13″ laptop screen. I have no idea why Windows doesn’t implement virtual desktops.

Unix underpinnings
In the last couple of years I’ve switched to using FSL as my main fMRI-analysis platform. FSL is developed on Macs and runs well on other Unix systems, but needs some kind of Unix emulation to run on Windows. Urgh – forget it. I also like being able to open up a terminal and make little tweaks to the OS and applications. Of course, Matlab/SPM and BrainVoyager also run beautifully on OS X.

Simple installs
To install an application on OS X you drag it to a folder. To uninstall it, you drag it to the Trash. That’s it.

Mac-only software
OsiriX is, without any shadow of a doubt, the best free DICOM image viewer available, and it’s Mac-only. I’d really miss other things like Automator too, and of course Apple’s super-fast and comprehensive Spotlight search is awesome.

No crapware
You know all that shit you have to uninstall as soon as you get a new PC? Free trials of anti-virus software, media players, desktop icons that link to shitty Yahoo services you have no intention of ever using? Doesn’t exist in OS X.


Having said all that, of course there are annoying things that drive me crazy about OS X too. No system is perfect after all…

No Cut/Paste
You can copy and paste files between two locations, but you can’t cut and then paste. Seriously, Apple – would this really be so hard to add?

Annoying behaviour of the green button
The green button at the top of the window – the one I still think of as the ‘maximise’ button – is annoying. It seems to re-size the window pretty much randomly. I hate it.

iTunes
For the love of all that is holy, Apple, will you please do something about the benighted clusterfuck that is iTunes? It’s utterly heinous.

Feel free to disagree with me in the comments. If you think Windows 8 is the greatest OS ever devised, please say so. Personally, I think it’s a botched compromise that tries to bring touch-functionality to laptops and laptop-functionality to tablets, and does neither well – but hey, that’s just my opinion. Windows is like Star Trek movies – every other one in the series is decent, which means Windows 9 should actually be pretty usable.

Anyway – flame on!

Psychology experiments enter the post-PC era: OpenSesame now runs on Android

I’ve mentioned OpenSesame briefly on here before, but for those of you who weren’t keeping up, it’s a pretty awesome, free psychology experiment-developing application, built using the Python programming language, and it has a lot in common with PsychoPy (which is also awesome).

The recently-released new version of OpenSesame has just taken an important step: it now supports the Android mobile operating system, meaning it can run natively on Android tablets and smartphones. As far as I’m aware, this is the first time a psychology experiment application has been compiled (and released to the masses) for a mobile OS.

This is cool for lots of reasons. It’s an interesting technical achievement; Android is implemented very differently from a desktop OS, being focused heavily on touch interfaces. Such interfaces are now ubiquitous, and are much more accessible, in the sense that people who may struggle with a traditional mouse/keyboard can use them relatively easily. Running psychology experiments on touch-tablets may enable the study of populations (e.g., the very young, the very old, or various patient groups) that would be very difficult to test with a more ‘traditional’ system. Similarly, conducting ‘field’ studies might be much more effective; I can imagine handing a participant a tablet for them to complete some kind of task in the street, or in a shopping mall, for instance. Also, it may open up the possibility of using the variety of sensors in modern mobile devices (light, proximity, accelerometers, magnetometers) in interesting and creative ways. Finally, the hardware is relatively cheap, and (of course) portable.

I’m itching to try this out, but unfortunately don’t have an Android tablet. I love my iPad mini for lots of reasons, but the more restricted nature of Apple’s OS means that it’s unlikely we’ll see a similar system on iOS anytime soon.

So, very exciting times. Here’s a brief demo video of OpenSesame running on a Google Nexus 7 tablet (in the demo the tablet is actually running a version of Ubuntu Linux, but with the new version of OpenSesame it shouldn’t be necessary to replace the Android OS). Let me know in the comments if you have any experience with tablet-experiments, or if you can think of any other creative ways they could be used.



Another miscellaneous grab-bag of goodies, links ‘n’ stuff

In lieu of a ‘proper’ post (forgive me, dear readers, the vicious task-masters at my proper job have been wielding the whip with particular alacrity recently) I’m putting together a list of links to cool things that I’ve come across lately.

So, in no particular order:

Tal Yarkoni’s outstanding Neurosynth website has now gone modular and open-source, meaning you can embed the code for the brain-image viewer into any website, and use it to present your own data – this is seriously cool. Check out his blog-post for the details.

An interesting little comment on “Why Google isn’t good enough for academic search”. Google Scholar tends to be my first port of call these days, but the points made in this discussion are pretty much bang-on.

A fantastic PNAS paper by Kosinski et al. (2013; PDF) that demonstrates that personal attributes such as sexual orientation, ethnicity, religious and political views, some aspects of personality, intelligence and many others, can be automatically and accurately (to a fairly startling degree, actually) predicted merely from analysis of Facebook ‘Likes’. A fantastic result, that really demonstrates the value of doing research using online data.

Next up is Google Refine – an interesting little tool from Google intended to assist with cleaning up and re-formatting messy data. It looks like it could be really useful.

A seriously great website on the stats language R, designed to make the transition as easy as possible for SPSS and SAS users – very clear, very nicely explained. Beautiful stuff.

Another cool website: you fill in fields (author, title, etc.) for sources you wish to cite, and it creates a perfectly formatted bibliography for you in the style (APA, Harvard, etc.) you choose. A cool idea, but in practice filling out the fields would be incredibly tedious for anything more than a few sources. It’s a good place to learn how to format things for different types of reference, though.
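The underlying idea is simple enough to sketch. Here’s a toy formatter for one style (a rough approximation of an APA-style journal reference, with an invented function name and made-up example data – nowhere near a complete implementation of any real style guide):

```python
def apa_reference(authors, year, title, journal, volume, pages):
    """Build a rough APA-style journal reference from its fields.
    authors: list of 'Surname, X. X.' strings."""
    if len(authors) > 1:
        # Join all but the last author with commas, then add '&' before the last.
        author_str = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        author_str = authors[0]
    return f"{author_str} ({year}). {title}. {journal}, {volume}, {pages}."

print(apa_reference(["Smith, J. A.", "Jones, B."], 2013, "An example article",
                    "Journal of Examples", 12, "34-56"))
# → Smith, J. A., & Jones, B. (2013). An example article. Journal of Examples, 12, 34-56.
```

The fiddly part of the real thing is, of course, handling all the edge cases (seven-plus authors, editions, page-range dashes, italics) across dozens of styles, which is exactly the tedium these sites automate.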

I’ve previously written about the use of U-HID boards for building USB response devices; I’ve just been made aware of a similar product called Labjack, which looks even more powerful and flexible. A Labjack package is included in the standard distribution of PsychoPy too, which is cool. I’m becoming more and more a fan of PsychoPy by the way – I’m now using it on a couple of projects, and it’s working very well indeed for me.

Now a trio of mobile apps to check out. Reference ME is available for both iOS and Android, and creates a citation in a specific style (Harvard, APA, etc.) when you scan the barcode of a book – very handy! The citations can then be emailed to you for pasting into essays or whatever.

The Great Brain Experiment is a free app from the Wellcome Trust (download links for both iOS and Android here) created in collaboration with UCL. The aim is to crowdsource a massive database on memory, impulsivity, risk-taking and other things. Give it a whirl – it’s free!

Lastly Codea is a very cool-looking iPad-only app that uses the Lua programming language to enable the (relatively) easy development and deployment of ‘proper’ code, entirely on the iPad. Very cool – Wired called it ‘the Garage Band of coding’, and while it’s probably not quite that easy to use, it’s definitely worth checking out if you want to use your iPad as a serious development tool.

If you’re still hungry for more internet goodies, I encourage you most heartily to check out my Links page, which is currently in an ongoing phase of rolling development (meaning, whenever I find something cool, I put it up there).



Website of the week: OpenSesame, illusions, online experiments, and more.

A quick post to point you towards a great website with a lot of really cool content (if you’re into that kind of thing, which if you’re reading this blog, then I assume you probably are… anyway, I digress; I apologise, it was my lab’s Christmas party last night and I’m in a somewhat rambling mood. Anyway, back to the point).

So, the website is run by a post-doc at the University of Aix-Marseille called Sebastiaan Mathôt. It’s notable in that it’s the homepage of OpenSesame – a very nice-looking, Python-based graphical experiment builder that I’ve mentioned before on these very pages. There’s a lot of other cool stuff on the site though, including more software (featuring a really cool online tool for instantly creating Gabor patch stimuli), a list of links to stimulus sets, and a selection of really cool optical illusions. Well worth spending 20 minutes of your time poking around a little and seeing what’s there.

I’ll leave you with a video of Sebastiaan demonstrating an experimental program, written in his OpenSesame system, running on a Google Nexus 7 Tablet (using Ubuntu linux as an OS). The future! It’s here!

Buying some new gadgets for college? Engadget has you covered.

So, it’s the time of year when A-level results come out (in the UK, anyway) and students’ thoughts fondly turn to the start of the college/University year in October when they can finally experience some spatial (if perhaps not financial) independence from their parents. And these days, if you aren’t already fully equipped with all the tools necessary to make a success of your time at University then it’s time to start smiling sweetly at Mum and Dad to make sure they’ll give you what you need in time for the start of term. And by ‘tools’ I mean technology, not a six-foot bong and a jumbo-pack of prophylactics.*

Fortunately, Engadget has you covered for all your gadget-related decisions with their excellent annual back-to-school guides. These are short reviews of the top picks by the editors at Engadget in a variety of categories of gadgets/technology such as laptops, digital cameras and electronic readers. Useful stuff if you’re pondering a new purchase to get you through the school year, and there’ll be more to come in the next few weeks so keep checking Engadget.


*Though, those wouldn’t hurt as well.

Tablet computers (iPad, Nexus 7, etc.) for children with developmental disorders

A very minimal post merely to point any interested readers towards an interesting discussion going on in the comments section of a post on Engadget here. A reader asked for suggestions for a tablet and/or apps for his developmentally-delayed daughter, and a large number of people have contributed some useful ideas and links. Just try to ignore the (inevitable *sigh*) Android vs. iOS fan-boy squabbling.

Seriously cool toys – Tobii mobile eye-tracking glasses, Pivothead HD video-recording eye-wear, and the Affectiva Q-sensor

The Tobii mobile eye-tracking system. Awesome.

The other day I was lucky enough to be able to help out with a bit of data-collection in a well-known London department store, being run by the magnificent Tim Holmes of Acuity Intelligence. This meant that I got to examine some seriously cool bits of new hardware – and new gadgets (especially scientific ones) are basically my kryptonite, so it was generally pretty exciting.

The first thing we used was a mobile eye-tracking system designed and built by Tobii. The glasses have two cameras built in: a front-facing one to record video of what the participant is looking at, and an infra-red one to record the participant’s exact eye-position. They also capture sound in real time, and record the eye-tracking data at 30Hz. The system comes with a clip-on box where the data is actually recorded (in the background of the picture on the right), which is also used for the (fairly brief and painless) initial calibration. It seems like a really great system – the glasses are very light, comfortable and unobtrusive – and could have a really broad range of applications for research, both scientific and marketing-related.

The next cool toy I got to play with was a pair of these:

Pivothead ‘Durango’ HD video-recording glasses. Double awesome.

These are glasses with a camera lens in the centre of the frame (between the eye-lenses) which can record full high-definition video – 1080p at 30fps, using an 8MP sensor. Amazing! They have 8GB of onboard memory, which is good for about an hour of recording time, and a couple of discreet buttons on the top of the right arm for taking still pictures in 5-picture burst or 6-picture time-lapse mode. They’re made by a company called Pivothead, and seem to be intended more for casual/recreational/sports use than as a research technology (hence the ‘cool’ styling). They’re a reasonably bulky pair of specs, but very light and comfortable, and I don’t think you’d attract much attention filming with them. It’s worth checking out the videos page at their website for examples of what they can do. They’re also only $349 – a lot for a pair of sunglasses, but if you can think of a good use for them, that seems like a snip. If you’re in the UK, they’re also available direct from the Acuity Intelligence website for £299, inc. VAT. I wonder how long it’ll be before they start showing up in law-enforcement/military situations?

The third device I got to geek-out over was one of these little beauties:

The Affectiva mobile, wrist-worn, bluetooth GSR sensor. Triple awesome.

This is a ‘Q-Sensor’, made by a company called Affectiva and is about the size of an averagely chunky wristwatch. It has two little dry-contact electrodes on the back which make contact with the skin on the underside of the wrist, and also contains a 3-axis accelerometer and a temperature sensor. This little baby claims to be able to log skin conductance data (plus data from the other sensors) for 24 hours straight on a single charge, and will even stream the data ‘live’ via Bluetooth to a connected system for on-the-fly analysis. It seems like Affectiva are mainly pitching it as a market research tool, but I can think of a few good ‘proper’ research ideas that this would enable as well. This is seriously cool technology.

That’s all folks – TTFN.

The effects of hardware, software, and operating system on brain imaging results

A recent paper (Gronenschild et al., 2012) has caused a modicum of concern amongst neuroimaging researchers. The paper documents a set of results based on analysis of anatomical MRI images using a popular free software tool called FreeSurfer, and essentially reports that there are (sometimes quite substantive) differences in the results it produces, depending on the exact version of the software used, and on whether the analyses were carried out on a Mac (running OS X) or a Hewlett-Packard PC (running Linux). In fact, even the exact version of OS X on the Mac systems was shown to matter for replicating results precisely.

Figure 3 of Gronenschild et al. (2012) showing the effect of different versions of FreeSurfer on obtained grey-matter volume results. Percentage scale at the top, p-values on the bottom.

The fact that results differ from one version of FreeSurfer to another is perhaps not so surprising – after all, we expect newer versions of software to be ‘improved’ in important ways; otherwise, what would be the point in releasing them? However, the fact that results differ between operating systems is a little more worrying – in theory, any operating system capable of running the software should produce the same result. The authors’ recommendations are that 1) researchers should not switch from one version/operating system/platform to another in the middle of a research project, and 2) when reporting results, the software version numbers and the workstation/OS used should all be documented. This seems broadly sensible.
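In the spirit of that second recommendation, even something as minimal as the following helps (a sketch using only Python’s standard library; the function name is my own, and in practice you would also record the version strings of FreeSurfer, FSL, or whatever analysis tools you actually ran):

```python
import platform

def analysis_provenance():
    """Snapshot of the machine and interpreter an analysis was run on,
    for inclusion in a methods section or an analysis log file."""
    return {
        "os": platform.system(),          # e.g. 'Darwin', 'Linux'
        "os_release": platform.release(),
        "machine": platform.machine(),    # e.g. 'x86_64'
        "python": platform.python_version(),
    }

for key, value in analysis_provenance().items():
    print(f"{key}: {value}")
```

Dumping a record like this alongside every batch of results costs nothing, and is exactly the information you’ll wish you had when a replication attempt on different hardware comes out slightly different.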

It got me thinking about neuroimaging software more generally as well though. In general, people don’t do detailed evaluations of software of the kind reported by Gronenschild et al. (2012).  As an enthusiastic user of several fMRI-related packages (I’m currently using SPM, FSL and BrainVoyager, all on different projects) I’ve often wondered what the real differences were between them, in terms of the results they produce. Given how many people around the world use brain imaging software, you might think that some detailed evaluations would be floating around, but in fact there are very few.

I think there are several reasons for this:

1. It’s (perhaps understandably) regarded as a waste of time. After all, we (meaning researchers who use this software) are generally more interested in how the brain works, than by how software works. Neuroimaging is difficult and time-consuming and we all need to publish papers to survive – it makes more sense to spend our time on ‘real’ brain-related research.

2. Most people have one (or at most two) pieces of software that they like to use for neuroimaging, and they stick with it; I’m somewhat unusual in this respect. The fact that most people use just one package more-or-less exclusively means there’s a dearth of people who actually have the skills necessary to do cross-evaluation of packages. Again, this is understandable – why take the time to learn a new system, if you’re happy with the one you’re using?

3. The differences between the packages make precise comparison of end-results difficult. Even though all the packages use an application of the General Linear Model for basic analysis, other differences in pre-processing conceivably play a role. For instance, FSL handles the spatial transformation of functional data somewhat differently to other packages.

Having said that, there have been a few papers which have tried to do these kinds of evaluations. Two examples are here (on motion correction) and here (on segmentation). Another somewhat instructive paper is this one, which summarises the results of a functional-imaging analysis contest held as part of the Human Brain Mapping meeting in Toronto in 2005; the developers of popular neuroimaging software were all given the same set of data and asked to analyse it as best they could. Interesting stuff, but as the contestants all used somewhat different methods to get the most out of the data, it’s hard to draw direct comparisons.

If there’s a moral to this story, it’s that (as the recent Gronenschild et al. paper demonstrates) we need to pay close attention to this kind of thing. As responsible researchers we cannot simply assume our results will be replicable with different hardware and software, and detailed reporting of not just the analysis procedures, but also the tools used to achieve the results seems a simple and robust way of at least acknowledging the issue and enabling more precise replicability. Actually solving the issues involved is a substantially more difficult problem, and may be a job for future generations of researchers and developers.

See also:
My previous posts on comparisons of different fMRI software: here, here and here.
Neuroskeptic has also written a short piece on the recent paper mentioned above.