Category Archives: Hardware
3D printed response box. The future. It’s here.
This may not look like much, but it’s actually pretty cool. It’s a new five-button response box being built for our MRI scanner by one of the technicians where I work. The cool thing is that the main chassis has been custom-designed and then fabricated using a polyethylene-extruding 3D printer. The micro-switches for the four fingers and the thumb, and the wiring for each, have obviously been added afterwards.
3D printing in plastic is a great way of creating hardware for use in the MRI environment, as, well… it’s plastic, and it can create almost any kind of structure you can think of. We sometimes need to build custom bits of hardware for use in some of our experiments, and previously we’d usually build things by cutting plastic sheets or blocks to the required shape and holding them together with plastic screws. Using a 3D printer means we can make solid objects which are much stronger and more robust, and produce them much more quickly and easily too. I love living in the future.
Some notes on the use of voice keys in reaction time experiments
Somebody asked me about using a voice key device the other day, and I realised it’s not something I’d ever addressed on here. A voice key is often used in experiments where you need to obtain a vocal response time, for instance in a vocal Stroop experiment, or a picture-naming task.
There are broadly two ways of doing this. The first is easy, but expensive, and not very good. The second is time-consuming, but cheap and very reliable.
The first method involves using a bit of dedicated hardware, essentially a microphone pre-amp, which detects the onset of a vocal response and sends out a signal when it occurs. The Cedrus SV-1 device pictured above is a good example. This is easy because you have all your vocal reaction times logged for you, but not totally reliable: you have to pre-set a loudness threshold on the box, so it might miss responses if the person talks quietly, or trigger falsely on unexpected background noise. It should be relatively simple to get whatever stimulus software you’re running to recognise the input from the device and log it as a response.
The other way is very simple to set up, in that you just plug a microphone into the sound card of your stimulus computer and record the vocal responses on each trial as .wav files. Stimulus software like PsychoPy can do this very easily. The downside is that you then have to examine those sound files in some way in order to extract the reaction time data – this could mean literally inspecting the waveform for each trial in a sound editor (such as Audacity), placing a marker at the start of the speech manually, and calculating vocal RTs relative to the start of the file/trial. This is very reliable and precise, but obviously fairly time-consuming. Manually marking sound files is still the ‘gold standard’ for voice-onset reaction times. Ideally, you should get someone else to do this for you, so they’ll be ‘blind’ to which trials are which, and unbiased in calculating the reaction times. You can also potentially automate the process using a bit of software called SayWhen (paper here).
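To give a flavour of how an automated post-hoc pass might work, here’s a minimal Python sketch that scans a recorded .wav file for the first sample crossing an amplitude threshold. The function name and default threshold are my own inventions, and a crude threshold like this is exactly why manual marking remains the gold standard – real speech onsets benefit from smoothing and a minimum-duration criterion.

```python
import struct
import wave

def voice_onset_ms(wav_path, threshold=0.1):
    """Return the time (ms, relative to the start of the file/trial) of
    the first sample whose amplitude exceeds `threshold` (expressed as a
    fraction of full scale), or None if no sample ever crosses it.
    Assumes 16-bit mono audio for simplicity."""
    with wave.open(wav_path, 'rb') as w:
        rate = w.getframerate()
        n_frames = w.getnframes()
        raw = w.readframes(n_frames)
    samples = struct.unpack('<%dh' % n_frames, raw)
    limit = threshold * 32768  # full scale for 16-bit audio
    for i, s in enumerate(samples):
        if abs(s) > limit:
            return i * 1000.0 / rate
    return None
```

In practice you’d loop this over the trial-by-trial recordings, write the onsets out to a results file, and then spot-check a sample of trials by eye in Audacity.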
Which method is best depends largely on the number of trials you have in your experiment. The second method is definitely superior (and cheaper, easier to set up) but if you have eleventy-billion trials in your experiment, manually examining them all post hoc may not be very practical, and a more automatic solution might be worthwhile. If you were really clever you could try and do both at once – have two computers set up, the first running the stimulus program, and the second recording the voice responses, but also running a bit of code that signals the first computer when it detects a voice onset. Might be tricky to set up and get working, but once it was, you’d have all your RTs logged automatically on the first computer, plus the .wav files recorded on the second for post hoc analysis/data-cleaning/error-checking etc. if necessary.
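The signalling between the two machines in that setup could be as simple as a single UDP packet over the local network. Here’s a rough sketch of both ends of such a link – the port number and message payload are placeholders of my own, not part of any existing system:

```python
import socket

def send_onset_signal(host, port, message=b'VOICE_ONSET'):
    """On the recording computer: fire a single UDP packet at the
    stimulus computer the moment a voice onset is detected. UDP keeps
    latency low; there's no connection handshake."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(message, (host, port))
    finally:
        sock.close()

def listen_for_onset(port, timeout=5.0):
    """On the stimulus computer: block until a packet arrives (or the
    timeout expires) and return its payload, or None on timeout. The
    stimulus program would log the arrival time as the vocal RT."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', port))
    sock.settimeout(timeout)
    try:
        data, _addr = sock.recvfrom(64)
        return data
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Network latency and OS scheduling would add a few milliseconds of jitter to the logged RTs, which is worth bearing in mind – the recorded .wav files would still be the authoritative record for data-cleaning.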
Happy vocalising!
***UPDATE***
Two researchers have pointed out in the comments that a system for automatically generating response times from sound files already exists, called CheckVocal. It seems to be designed to work with the DMDX experimental programming system (free software that uses Microsoft’s DirectX to present stimuli). Not sure whether it’ll work with other systems, but worth looking at… I’ve also added the information to my Links page.
Psychology experiments enter the post-PC era: OpenSesame now runs on Android
I’ve mentioned OpenSesame briefly on here before, but for those of you who weren’t keeping up, it’s a pretty awesome, free psychology experiment-developing application, built using the Python programming language, and it has a lot in common with PsychoPy (which is also awesome).
The recently released version of OpenSesame has just taken an important step, in that it now supports the Android mobile operating system, meaning that it can run natively on Android tablets and smartphones. As far as I’m aware, this is the first time a psychology experiment application has been compiled (and released to the masses) for a mobile OS.
This is cool for lots of reasons. It’s an interesting technical achievement; Android is a very different environment from a desktop OS, being heavily focused on touch interfaces. Such interfaces are now ubiquitous, and are much more accessible, in the sense that people who may struggle with a traditional mouse/keyboard can use them relatively easily. Running psychology experiments on touch tablets may enable the study of populations (e.g., the very young, the very old, or various patient groups) that would be very difficult to test with a more ‘traditional’ system. Similarly, conducting ‘field’ studies might be much more effective; I can imagine handing a participant a tablet to complete some kind of task in the street, or in a shopping mall, for instance. Also, it may open up the possibility of using the variety of sensors in modern mobile devices (light, proximity, accelerometers, magnetometers) in interesting and creative ways. Finally, the hardware is relatively cheap, and (of course) portable.
I’m itching to try this out, but unfortunately don’t have an Android tablet. I love my iPad mini for lots of reasons, but the more restricted nature of Apple’s OS means that it’s unlikely we’ll see a similar system on iOS anytime soon.
So, very exciting times. Here’s a brief demo video of OpenSesame running on a Google Nexus 7 tablet (in the demo the tablet is actually running a version of Ubuntu Linux, but with the new version of OpenSesame it shouldn’t be necessary to replace the Android OS). Let me know in the comments if you have any experience with tablet-experiments, or if you can think of any other creative ways they could be used.
TTFN.
Another miscellaneous grab-bag of goodies, links ‘n’ stuff
In lieu of a ‘proper’ post (forgive me, dear readers, the vicious task-masters at my proper job have been wielding the whip with particular alacrity recently) I’m putting together a list of links to cool things that I’ve come across lately.
So, in no particular order:
Tal Yarkoni’s outstanding Neurosynth website has now gone modular and open-source, meaning you can embed the code for the brain-image viewer into any website, and use it to present your own data – this is seriously cool. Check out his blog-post for the details.
An interesting little comment on “Why Google isn’t good enough for academic search”. Google Scholar tends to be my first port of call these days, but the points made in this discussion are pretty much bang-on.
A fantastic PNAS paper by Kosinski et al. (2013; PDF) demonstrating that personal attributes such as sexual orientation, ethnicity, religious and political views, some aspects of personality, intelligence, and many others can be automatically and accurately (to a fairly startling degree, actually) predicted merely from an analysis of Facebook ‘Likes’. A striking result that really demonstrates the value of doing research with online data.
Next up is Google Refine – an interesting little tool from Google intended to assist with cleaning up and re-formatting messy data. Looks like it could be genuinely useful.
A seriously great website on the stats language R, designed to make the transition as easy as possible for SPSS and SAS users – very clear, very nicely explained. Beautiful stuff.
Another cool website called citethisforme.com; you fill in fields (author, title, etc.) for sources you wish to cite, and it creates a perfectly formatted bibliography for you in the style (APA, Harvard etc.) you choose. A cool idea, but in practice, filling out the fields would be incredibly tedious for anything more than a few sources. Good place to learn about how to format things for different types of reference though.
I’ve previously written about the use of U-HID boards for building USB response devices; I’ve just been made aware of a similar product called Labjack, which looks even more powerful and flexible. A Labjack package is included in the standard distribution of PsychoPy too, which is cool. I’m becoming more and more a fan of PsychoPy by the way – I’m now using it on a couple of projects, and it’s working very well indeed for me.
Now a trio of mobile apps to check out. Reference ME is available for both iOS and Android, and creates a citation in a specific style (Harvard, APA, etc.) when you scan the barcode of a book – very handy! The citations can then be emailed to you for pasting into essays or whatever.
The Great Brain Experiment is a free app from the Wellcome Trust (download links for both iOS and Android here) created in collaboration with UCL. The aim is to crowdsource a massive database on memory, impulsivity, risk-taking and other things. Give it a whirl – it’s free!
Lastly Codea is a very cool-looking iPad-only app that uses the Lua programming language to enable the (relatively) easy development and deployment of ‘proper’ code, entirely on the iPad. Very cool – Wired called it ‘the Garage Band of coding’, and while it’s probably not quite that easy to use, it’s definitely worth checking out if you want to use your iPad as a serious development tool.
If you’re still hungry for more internet goodies, I encourage you most heartily to check out my Links page, which is currently in an ongoing phase of rolling development (meaning, whenever I find something cool, I put it up there).
TTFN.
Tablet computers (iPad, Nexus 7, etc.) for children with developmental disorders
A very minimal post merely to point any interested readers towards an interesting discussion going on in the comments section of a post on Engadget here. A reader asked for suggestions for a tablet and/or apps for his developmentally-delayed daughter, and a large number of people have contributed some useful ideas and links. Just try to ignore the (inevitable *sigh*) Android vs. iOS fan-boy squabbling.
Seriously cool toys – Tobii mobile eye-tracking glasses, Pivothead HD video-recording eye-wear, and the Affectiva Q-sensor
The other day I was lucky enough to be able to help out with a bit of data-collection in a well-known London department store, being run by the magnificent Tim Holmes of Acuity Intelligence. This meant that I got to examine some seriously cool bits of new hardware – and new gadgets (especially scientific ones) are basically my kryptonite, so it was generally pretty exciting.
The first thing we used was a mobile eye-tracking system designed and built by Tobii. The glasses have two cameras: one front-facing, to record video of what the participant is looking at, and an infra-red camera to record the participant’s exact eye position. They capture sound in real time too, and record the eye-tracking data at 30 Hz. The system comes with a clip-on box where the data is actually recorded (in the background of the picture on the right), which is also used for the (fairly brief and painless) initial calibration. It seems like a really great system – the glasses are very light, comfortable and unobtrusive – and could have a really broad range of applications for research, both scientific and marketing-related.
The next cool toy I got to play with was a pair of these:
These are glasses with a camera lens in the centre of the frame (between the eye lenses) which can record full high-definition video – 1080p at 30 fps, using an 8 MP sensor. Amazing! They have 8 GB of onboard memory, good for about an hour of recording time, and a couple of discreet buttons on the top of the right arm which can be used for taking still pictures in 5-picture burst or 6-picture time-lapse mode. They’re made by a company called Pivothead, and seem to be intended more for casual/recreational/sports use than as a research technology (hence the ‘cool’ styling). They’re a reasonably bulky pair of specs, but very light and comfortable, and I don’t think you’d attract much attention filming with them. It’s worth checking out the videos page at their website for examples of what they can do. They’re also only $349 – a lot for a pair of sunglasses, but if you can think of a good use for them, that seems like a snip. If you’re in the UK, they’re also available direct from the Acuity Intelligence website for £299, inc. VAT. I wonder how long it’ll be before they start showing up in law-enforcement/military situations?
The third device I got to geek-out over was one of these little beauties:
This is a ‘Q-Sensor’, made by a company called Affectiva, and is about the size of a fairly chunky wristwatch. It has two little dry-contact electrodes on the back which make contact with the skin on the underside of the wrist, and it also contains a 3-axis accelerometer and a temperature sensor. This little baby claims to be able to log skin conductance data (plus data from the other sensors) for 24 hours straight on a single charge, and will even stream the data ‘live’ via Bluetooth to a connected system for on-the-fly analysis. It seems like Affectiva are mainly pitching it as a market research tool, but I can think of a few good ‘proper’ research ideas that it would enable as well. This is seriously cool technology.
That’s all folks – TTFN.
Website of the week: Cogsci.nl. OpenSesame, illusions, online experiments, and more.
Dec 14
Posted by Matt Wall
A quick post to point you towards a great website with a lot of really cool content (if you’re into that kind of thing, which if you’re reading this blog, then I assume you probably are… anyway, I digress; I apologise, it was my lab’s Christmas party last night and I’m in a somewhat rambling mood. Anyway, back to the point).
So, the website is called cogsci.nl, and is run by Sebastiaan Mathôt, a post-doc at the University of Aix-Marseille. It’s notable as the homepage of OpenSesame – a very nice-looking, Python-based graphical experiment builder that I’ve mentioned before on these very pages. There’s a lot of other cool stuff on the site though, including more software (featuring a really cool online tool for instantly creating Gabor patch stimuli), a list of links to stimulus sets, and a selection of optical illusions. Well worth spending 20 minutes of your time poking around a little and seeing what’s there.
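For the curious, the maths behind a Gabor patch is just a sinusoidal grating multiplied by a Gaussian envelope. Here’s a quick pure-Python sketch of the idea – the function name, parameters, and defaults are my own, not taken from the cogsci.nl tool:

```python
import math

def gabor_patch(size, sf, sigma, phase=0.0, theta=0.0):
    """Return a size x size list-of-lists Gabor patch: a sinusoidal
    grating (`sf` cycles per patch, orientation `theta` in radians,
    phase offset `phase`) under a Gaussian envelope of width `sigma`
    pixels. Values fall in the range [-1, 1]."""
    half = size / 2.0
    patch = []
    for y in range(size):
        row = []
        for x in range(size):
            dx, dy = x - half, y - half
            # rotate coordinates so the grating runs along theta
            xr = dx * math.cos(theta) + dy * math.sin(theta)
            grating = math.cos(2 * math.pi * sf * xr / size + phase)
            envelope = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
            row.append(grating * envelope)
        patch.append(row)
    return patch
```

To display one you’d rescale the values to greyscale (e.g. 0–255) and hand the array to your stimulus software; PsychoPy and OpenSesame can, of course, generate Gabors for you directly.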
I’ll leave you with a video of Sebastiaan demonstrating an experimental program, written in his OpenSesame system, running on a Google Nexus 7 tablet (using Ubuntu Linux as the OS). The future! It’s here!