Monthly Archives: June 2012
A lovely little paper has just gone to press in Cognition – Gorillas we have missed: Sustained inattentional deafness for dynamic events, by a couple of ex-colleagues of mine at Royal Holloway – Polly Dalton and Nick Fraenkel. I thought I’d do a brief write-up, as it describes a couple of great experiments involving some nifty bits of audio recording and editing; something I’ve been meaning to write about for some time.
The paper is based on an older visual effect, described by Simons and Chabris (1999) (PDF here) and termed ‘inattentional blindness’. Essentially, this paper demonstrated that participants can fail to notice a highly salient visual stimulus if their attention is directed towards some other aspect of the visual scene. The stimulus that these authors used was a video of six people passing basketballs to each other in a complex sequence, and the task for the participants was to count the number of passes made. During the movie, a person in a gorilla suit walked through the middle of the basketball players. Despite the bizarre nature of the manipulation, a substantial proportion of participants (between 30% and 50% depending on the exact condition) simply failed to notice the very obvious ‘gorilla in the midst’. You can see one of the videos used in the experiment below, and there’s also a nice interview with Daniel Simons where he talks about the experiment here.
So, what Polly and Nick did in their new paper is take this visual effect and cleverly translate it into the auditory domain. They made recordings of a complex auditory scene with two pairs of conversations happening at once – one pair of female voices and one pair of male voices – with both conversation pairs moving around the auditory ‘space’ during the recording. Also present during the recording was an additional (male) voice that walked through the scene repeatedly saying “I’m a gorilla, I’m a gorilla…” for 19 seconds. The majority of participants (90%) who were cued to listen to the male conversation did notice the ‘auditory gorilla’; however, when people were cued to listen to the female conversation, only 30% reported noticing the gorilla. The implication is that when we are attending to one category of stimulus (i.e. female voices) we can fail to notice even prominent stimuli belonging to an unattended category (male voices). You can try it yourself using the video below, which contains an edited version of their stimulus. For the full effect you’ll need to use headphones:
This is clearly a complex auditory stimulus, and creating it involved some really interesting techniques. The recordings were made using an ‘artificial head’ – a (roughly) human-head-shaped recording device with high-quality microphones positioned in each ear. Using such a device for binaural recordings is important because the shape of the head (and the outer ear) produces subtle frequency-based distortions in perceived sounds, and the brain uses these cues to localise sounds in 3D space. The separate tracks from the two microphones form a single stereo track, and when listened to on headphones, recordings of this type tend to produce a very natural-sounding audio environment. You can read more about this technique here. The two attended conversations were recorded separately from the “I’m a gorilla” stimulus, and the two recordings were then mixed together to create the final stimulus – this enabled independent manipulation of the spatial placement of the gorilla stimulus within the scene (which was reversed in experiment 2).
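Incidentally, at its simplest, mixing two digital recordings like this is just a sample-wise sum, clipped to the valid range. Here’s a minimal sketch in Python using only the standard library – the file names are hypothetical, and it assumes 16-bit PCM WAV files with matching sample rate, channel count and length:

```python
import wave
from array import array

def mix_samples(a, b):
    """Sample-wise sum of two 16-bit PCM streams, clipped to the valid range."""
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

def mix_wavs(path_a, path_b, path_out):
    """Mix two WAV files of matching format and length into one output file."""
    with wave.open(path_a) as wa, wave.open(path_b) as wb:
        params = wa.getparams()  # sample rate, channels, sample width, etc.
        a = array('h', wa.readframes(wa.getnframes()))  # 'h' = signed 16-bit
        b = array('h', wb.readframes(wb.getnframes()))
    mixed = array('h', mix_samples(a, b))
    with wave.open(path_out, 'wb') as out:
        out.setparams(params)
        out.writeframes(mixed.tobytes())

# e.g. mix_wavs('conversations.wav', 'gorilla.wav', 'final_stimulus.wav')
```

Real DAWs do far more than this, of course (per-track gain, panning, effects), but the core operation is the same.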
This mixing of the two separate recordings was done using Reaper, a piece of software classed as a Digital Audio Workstation (DAW). DAWs used to be primarily hardware-based – a digital audio lab once included racks of equipment: samplers, sequencers and so on. Nowadays the vast majority of these functions can be reproduced in software. I haven’t used it myself, but Reaper looks to be a fantastic piece of professional-grade software, and is available very cheaply ($60 for an individual/educational licence). DAW software allows almost endless recording and editing possibilities, including studio-based recording of music, applying effects and filters, changing pitch and tempo, mixing and mastering of recordings, and even synthesis (e.g. of pure tones, for use as auditory cues in experiments).
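That last kind of task – generating a pure tone for an auditory cue – is also simple enough to do in a few lines of code, if you’d rather script it than open a DAW. A sketch in Python (standard library only; the 440 Hz default and output file name are just illustrative):

```python
import math
import wave
from array import array

def pure_tone(freq_hz, duration_s, rate=44100, amplitude=0.5):
    """Return signed 16-bit PCM samples of a sine tone."""
    n = int(rate * duration_s)
    return array('h', (int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * t / rate))
                       for t in range(n)))

def write_tone(path, freq_hz=440.0, duration_s=1.0, rate=44100):
    """Write a mono pure-tone WAV file, e.g. for use as an auditory cue."""
    samples = pure_tone(freq_hz, duration_s, rate)
    with wave.open(path, 'wb') as out:
        out.setnchannels(1)   # mono
        out.setsampwidth(2)   # 16-bit
        out.setframerate(rate)
        out.writeframes(samples.tobytes())

# e.g. write_tone('cue_440hz.wav')
```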
While Reaper looks great, my recommendation for this kind of software is Audacity, an incredibly full-featured, cross-platform (Windows, Mac and Linux), and entirely free audio editor/recorder. I’ve used Audacity a lot for really basic editing/synthesis tasks, but it has an impressive array of features and has (apparently) been used to record, mix and master entire albums. If you have some sound editing task to accomplish, it would definitely be worth investigating whether you can easily achieve it with Audacity before you splurge on some more expensive, professional software. A good list of other free sound-related software is here.
That’s all for now – happy sound editing! TTFN.
PS. For more details of the Royal Holloway attention lab’s research see their webpage here.
I’ve come across a couple more web links which I thought were important enough to share with you straight away, rather than saving them up for a massive splurge of links.
The first is ViperLib, a site which focusses (geddit?) on visual perception and is run by Peter Thompson and Rob Stone of the University of York, with additional input (apparently) from Barry the snake. This is essentially a library of images and movies related to vision science, and currently contains a total of 1850 images – illusions, brain scans, anatomical diagrams, and much more. Registration is required to view the images, but it’s free and easily done, and I would encourage anyone to spend an hour or so of their time poking around amongst the treasures there. I shall be digging through my old hard drives when I get a chance and contributing some optic-flow stimuli from my old vision work to the database.
The second is for the (f)MRI types out there: a fantastic ‘Imaging Knowledge Base’ from the McGovern Institute for Brain Research at MIT. The page has a huge range of great information about fMRI design and analysis, from the basics of Matlab to how to perform ROI analyses, all presented in a very friendly, introductory format. If you’re just getting started with neuroimaging, this is one of the best resources I’ve seen for beginners.
With thanks to Nick Davis. I wonder if he codes better with his right or left hand?
A deeply exciting day for this blogger today, as I get to put up my first guest post. After writing my earlier piece on why (psychology) students should learn to code, I was interested in getting a current student’s perspective on the topic, and the delightful Hayley Thair was kind enough to write me a piece about her experience. I first met Hayley while she was working at the Science Museum on this project, and she subsequently moved to Bangor to pursue an MSc in Clinical Neuropsychology. I hope this will help to further convince any other students who might be reading that it really is worth putting a bit of time into learning to code. Here then is Hayley’s account of learning to program and what she feels she’s gained from it:
Something else to do with your PC…
Programming – yet another excuse I now have to spend even more time at my computer. Something that initially sounded rather scary, in a “I have no idea what I’m doing” kind of way, has become something incredibly useful that I am now confident in. I am currently completing my Masters in Clinical Neuropsychology and opted for a module called “practical programming.” Knowing that I have a huge research thesis to run and write up, I figured knowing something about how to program would be invaluable! Unfortunately my thesis requires the use of Matlab, and the module taught me Visual Basic. However, I soon realised the fundamentals are the same, and even if I couldn’t write Matlab code alone, I could certainly understand what was going on with the assistance of my supervisor.
I saw recently on the news that even primary school children are learning to code… this makes me hesitant to admit it was tricky to start with! However, once I learnt the basics I could design anything I wanted. Being short on ideas and running out of time to complete my mini-project, I only managed to come up with a times-table game. It’s actually pretty cool, in a nerdy sort of way! I had two numbers being randomly generated to create the questions; a timer to make it more interesting; a scoring system so you can improve; and a fat robin as the loveable character to save!
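The core logic of a game like this – generate a random question, check the answer, keep score – really is only a few lines. My game was in Visual Basic, but purely as an illustration, a sketch of the same idea in Python might look like this (the timer and the robin are left as an exercise!):

```python
import random

def make_question(rng=random):
    """Pick two random numbers and return the question text plus its answer."""
    a, b = rng.randint(2, 12), rng.randint(2, 12)
    return f"What is {a} x {b}?", a * b

def score_answers(questions_and_answers, responses):
    """Simple scoring system: one point per correct response."""
    return sum(1 for (_, answer), response in zip(questions_and_answers, responses)
               if response == answer)

# e.g. a two-question round where the player gets the first one right:
round_qa = [("What is 2 x 3?", 6), ("What is 2 x 4?", 8)]
print(score_answers(round_qa, [6, 7]))  # one correct answer scores 1
```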
Although what I made was simple, I felt a great sense of accomplishment in that I made and coded something from scratch without any help. This was a much greater feeling than anything I had at school in IT lessons. These, as far as I recall, were essentially “today let’s open Word.” I honestly can’t recall where I learnt my basic PC knowledge from, but it certainly wasn’t IT lessons at school. I think these lessons would be more engaging and fun if you were making something, like with programming. Being able to create something that’s yours and personalised would be far more entertaining than just being shown how to use something.
Either way, I’m glad I took the module as so many research assistant jobs ask that you be able to program. I think this puts me ahead of other applicants just because I’ll be able to design experiments and run them independently without needing someone else to come in and build my behavioural task for me.
What surprised me about programming was that even though it was tricky at first, it suddenly became easy once you got the basics. Even if a piece of code doesn’t run (any programmer will be all too familiar with error messages!) you can keep trying to fix it and think of another way to word it. Essentially it’s all logic. You think about what you want a button to do, and how to break that down into simple step-by-step instructions, and weyhey – it works! (Sometimes…) I like to think all those years of playing logic-based games like Myst have finally proved useful! For people who enjoy learning something new and constructing things, it’s definitely worth a go. I didn’t find a textbook useful at all, but rather preferred watching YouTube tutorials for ideas once I had the basics. Visual Basic is free to download online and it’s easy to have a play around with, as everything is clearly labelled, so I would suggest VB is a good starting point.
If you need another incentive, I’ve learnt to code to a confident level in just 10 lessons. It doesn’t take long to pick it up, and it’s now an invaluable skill that I can mention at interviews that (luckily for me) not everyone has!
I just came across a gosh-darn drop-dead cool (and free!) piece of software that I just had to write a quick post on. It’s called Tracker, it’s cross-platform, open-source and freely available here.
In a nutshell, it’s designed for the analysis of videos, and can do various things like track the motion of an object across frames (yielding position, velocity and acceleration data) and generate dynamic RGB colour profiles. Very cool. As an example of the kinds of things it can do, see this post on Wired.com where a physicist uses it to analyse the speed of blaster bolts in Star Wars: Episode IV. Super-geeky, I know, but I love it.
Whenever I see a piece of software like this I immediately think about what I could use it for in psychology/neuroscience. In this case, my first thought was kinematic analysis – that is, tracking the position/velocity/acceleration of the hand as it performs movements or manipulates objects. Another great application would be the analysis of movie stimuli for use in fMRI experiments. Complex and dynamic movies could be analysed in terms of the movement (or colour) stimuli they contain, producing measures that represent movement across time. Sub-sampled versions of these measures could then be entered into an fMRI-GLM analysis as parametric regressors to examine how the visual cortex responds; with careful selection of stimuli, this could be quite a neat and interesting experiment.
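As a rough illustration of that kinematic idea: once you have position samples out of a tracker, velocity and acceleration are just successive finite differences, and sub-sampling to match a slower fMRI acquisition rate is a matter of taking every n-th value. A sketch in Python (the 100 Hz rate and the position data here are made up):

```python
def derivative(samples, dt):
    """Finite-difference derivative of a regularly-sampled signal
    (e.g. position -> velocity, or velocity -> acceleration)."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def subsample(samples, step):
    """Keep every `step`-th value, e.g. to build a regressor at a
    coarser sampling rate than the original tracker data."""
    return samples[::step]

# Hypothetical position samples from a tracker at 100 Hz (dt = 0.01 s):
dt = 0.01
position = [0.0, 0.1, 0.3, 0.6, 1.0]
velocity = derivative(position, dt)       # first derivative of position
acceleration = derivative(velocity, dt)   # second derivative of position
```

A real analysis would smooth the signal first (differentiation amplifies tracking noise), but the principle is this simple.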
Not sure I’ll ever actually need to use it in my own experiments, but it looks like a really neat piece of software which could be a good solution for somebody with a relevant problem.
A fantastic image from 1910, drawn by a French postcard artist named Villemard, imagining the future of education in the year 2000. Wonderful stuff.
(Originally seen in this post on Wired.com on electronic textbooks.)
Akira O’Connor has just posted up a fascinating piece about his recent trial of an online experiment, and some of the data he’s gathered so far on the subject-base who’ve completed it. His earlier post about his experience of actually programming the experiment is definitely worth a read as well.
He also posted up a few links to other online experiment sites which I wasn’t aware of, and which I thought were worth reproducing here. First up is this page from the Hanover College Psychology Department, which lists hundreds of web-based experiments you can take part in as a subject. Right at the bottom of the page is a useful section of links titled “Other Resources and List for Psychological Research on the Web” – a great list of resources in this area for researchers.
Next up is a UK-based site: Online Psychology Research, maintained by Dr Kathryn Gardner of the University of Central Lancashire. This is a similar kind of site, which lists links to current experiments in which one can participate, organised neatly into categories. The Online Research Resources page on this site is also a fantastic set of links to lots of relevant material.
Finally, this page on the Social Psychology Network also lists current online experiments, again, categorised by type/subject area.
I highly recommend having a browse through the experiments on some of these sites and completing the ones you’re interested in – you’ll be contributing to others’ research and you might learn something new as well!
I’ve recently been playing with a bit of software called FaceGen, and it’s basically awesome. As you might expect, it’s a piece of 3D modelling software specialised for producing human face stimuli. You can either start off with a randomly-generated example, or upload your own (or someone else’s) picture, which the software can then extrapolate and model in 3D. The 3D model can then be modified to your heart’s content along various parameters – age, sex, race, emotional expression, etc. It really is an enormously powerful piece of software, and pretty easy to use too, with the interface mostly based around a set of sliders for manipulating the various dimensions of the stimulus.
Here’s a brief video which gives an overview of some of the features:
The full version of FaceGen costs $299, but there is a free demo version that you can play with, available here.
Face stimuli have always posed a problem for researchers, and historically there have basically been two choices. The first is to use a standardised face-set such as the Ekman faces:
These stimuli have the benefit of being naturalistic, i.e. they are of real people, but they also have several significant drawbacks. These face sets are often idiosyncratic in various ways and may not include all the facial expressions you might need for all the picture subjects. In addition, they’re often not well-balanced in terms of race, age, sex, etc. In particular, the ‘classic’ Ekman face set is looking very dated – and frankly pretty ghastly – these days. Another, more recent, example is the NimStim face set.
The second option is to use schematic or computer-generated faces such as these from this paper:
These have the big benefit of precise experimental controllability, but the obvious drawback that they aren’t very naturalistic at all.
The FaceGen software seems to offer the best of both worlds, in that you can create an almost infinite variety of precisely-produced images and easily control for confounding factors like age and race, while at the same time the stimuli it produces are pretty naturalistic – particularly so, if you import ‘real’ pictures and then modify them. I’m currently setting up an experiment which will use some face stimuli and I’m almost certainly going to use stimuli produced using FaceGen.
For more on faces in psychology research see my previous post on face-morphing. TTFN!