Monthly Archives: June 2012

Inattentional deafness, sound editing and auditory gorillas

A lovely little paper is now in press at Cognition – Gorillas we have missed: Sustained inattentional deafness for dynamic events, by a couple of ex-colleagues of mine at Royal Holloway – Polly Dalton and Nick Fraenkel. I thought I’d do a brief write-up, as it describes a couple of great experiments that involved some nifty bits of audio recording and editing; something I’ve been meaning to get around to writing about for some time.

The paper is based on an older visual effect, described by Simons and Chabris (1999) (PDF here) and termed ‘inattentional blindness’. Essentially, this paper demonstrated that participants can fail to notice a highly salient visual stimulus if their attention is directed towards some other aspect of the visual scene. The stimulus that these authors used was a video of six people passing basketballs to each other in a complex sequence, and the task for the participants was to count the number of passes made. During the movie, a person in a gorilla suit walked through the middle of the basketball players. Despite the bizarre nature of the manipulation, a substantial proportion of participants (between 30% and 50% depending on the exact condition) simply failed to notice the very obvious ‘gorilla in the midst’. You can see one of the videos used in the experiment below, and there’s also a nice interview with Daniel Simons where he talks about the experiment here.

Schematic of the auditory stimulus used in the experiment, reproduced from Figure 1 of Dalton and Fraenkel (2012).

So, what Polly and Nick did in their new paper is to take this visual effect and cleverly translate it into the auditory domain. They made recordings of a complex auditory scene with two pairs of conversations happening at once – one pair of female voices and one pair of male voices – with both conversation pairs moving around the auditory ‘space’ during the recording. Also present during the recording was an additional (male) voice that walked through the scene repeatedly saying “I’m a gorilla, I’m a gorilla…” for 19 seconds. The majority of participants (90%) who were cued to listen to the male conversation did notice the ‘auditory gorilla’; however, when people were cued to listen to the female conversation, only 30% reported noticing it. The implication is that when we are attending to one category of stimulus (i.e. female voices) we can fail to notice even prominent stimuli which belong to an unattended category (male voices). You can try it yourself, using the below video, which contains an edited version of their stimulus. For the full effect you’ll need to use headphones:

This is clearly a complex auditory stimulus, and creating it involved some really interesting techniques. The recordings were made using an ‘artificial head’ – a (roughly) human-head-shaped recording device with high-quality microphones positioned in each ear. Using such a device for binaural recordings is important, because the shape of the head (and the outer ear) produces subtle frequency-based distortions in perceived sounds, and the brain uses these cues to localise sounds in 3D space. The separate tracks from the two microphones form a single stereo track, and when listened to on headphones, recordings of this type tend to produce a very natural-sounding audio environment. You can read more about this technique here. The two attended conversations were recorded separately from the “I’m a gorilla” stimulus, and the two recordings were then mixed together to create the final stimulus – this enabled independent manipulation of the spatial placement of the gorilla stimulus within the scene (which was reversed in experiment 2).
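
As an aside, here’s a minimal sketch of what this kind of mix-down might look like if you scripted it in Python rather than using a DAW; numpy and scipy are assumed to be installed, the file names are purely illustrative, and this is emphatically not the authors’ actual procedure – just the general idea:

```python
# A sketch only: sum two hypothetical stereo recordings sample-by-sample,
# pad the shorter one with silence, and rescale to avoid clipping. The
# file names are illustrative; both files must share a sample rate.
import numpy as np
from scipy.io import wavfile

rate_a, conversations = wavfile.read('conversations_binaural.wav')
rate_b, gorilla = wavfile.read('gorilla_binaural.wav')
assert rate_a == rate_b, 'tracks must share a sample rate'

n = max(len(conversations), len(gorilla))
conversations = np.pad(conversations, ((0, n - len(conversations)), (0, 0)))
gorilla = np.pad(gorilla, ((0, n - len(gorilla)), (0, 0)))

# Swapping the left/right channels mirrors the gorilla's position in
# the scene - analogous to the spatial reversal used in experiment 2.
gorilla = gorilla[:, ::-1]

mix = conversations.astype(np.float64) + gorilla.astype(np.float64)
mix = mix / np.abs(mix).max() * 32767     # normalise to int16 range
wavfile.write('mixed_scene.wav', rate_a, mix.astype(np.int16))
```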

This mixing of the two separate recordings was done using Reaper, a piece of software classed as a Digital Audio Workstation (DAW). DAWs used to be primarily hardware-based, and a digital audio lab used to include racks of equipment: samplers, sequencers and so on. Nowadays the vast majority of these functions can be reproduced in software. I haven’t used it myself, but Reaper looks to be a fantastic piece of professional-grade software, and is available very cheaply ($60 for an individual/educational licence). DAW software allows almost endless recording and editing possibilities for sound recordings, including studio-based recording of music, applying effects and filters, changing pitch and tempo, mixing and mastering of recordings, and even synthesis (e.g. of pure tones, for use as auditory cues in experiments).
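
As a small taste of the synthesis side of things, here’s a hedged little Python sketch (again assuming numpy and scipy) of generating a pure-tone cue with onset/offset ramps – the sort of thing you could equally do with a couple of clicks in a DAW:

```python
# A quick sketch of pure-tone synthesis: a 440 Hz, 200 ms cue with
# 10 ms raised-cosine ramps so the tone doesn't click on and off.
# Assumes numpy and scipy; the parameter values are just examples.
import numpy as np
from scipy.io import wavfile

rate = 44100                              # sample rate in Hz
duration, freq = 0.2, 440.0               # seconds, Hz
t = np.arange(int(rate * duration)) / rate
tone = np.sin(2 * np.pi * freq * t)

ramp = int(0.01 * rate)                   # 10 ms of samples
envelope = np.ones_like(tone)
envelope[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
envelope[-ramp:] = envelope[:ramp][::-1]

wavfile.write('cue_tone.wav', rate, (tone * envelope * 32767).astype(np.int16))
```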

While Reaper looks great, my recommendation for this kind of software is Audacity, an incredibly full-featured, cross-platform (Windows, Mac and Linux), and entirely free audio editor/recorder. I’ve used Audacity a lot for really basic editing/synthesis tasks, but it has an impressive array of features and has (apparently) been used to record, mix and master entire albums. If you have some sound editing task to accomplish, it would definitely be worth investigating whether you can easily achieve it with Audacity before you splurge on some more expensive, professional software. A good list of other free sound-related software is here.

That’s all for now – happy sound editing! TTFN.

PS. For more details of the Royal Holloway attention lab’s research see their webpage here.

Two like, *totes* awesome websites: ViperLib and mindhive

I’ve come across a couple more web-links which I thought were important enough to share with you straight away, rather than saving them up for a massive splurge of links.

The first is ViperLib, a site which focusses (geddit?) on visual perception and is run by Peter Thompson and Rob Stone of the University of York, with additional input (apparently) from Barry the snake. This is essentially a library of images and movies related to vision science, and currently contains a total of 1850 images – illusions, brain scans, anatomical diagrams, and much more. Registration is required to view the images, but it’s free and easily done, and I would encourage anyone to spend an hour or so of their time poking around amongst the treasures there. I shall be digging through my old hard drives when I get a chance and contributing some optic-flow stimuli from my old vision work to the database.

The second is for the (f)MRI types out there; a fantastic ‘Imaging Knowledge Base’ from the McGovern Institute for Brain Research at MIT. The page has a huge range of great information about fMRI design and analysis, from the basics of Matlab, to how to perform ROI analyses, and all presented in a very friendly, introductory format. If you’re just getting started with neuroimaging, this is one of the best resources I’ve seen for beginners.

Hello. My name is Inigo Montoya. You killed my application. Prepare to die.


With thanks to Nick Davis. I wonder if he codes better with his right or left hand?

A review of social science research using Facebook

A quick-ish post just to point you towards a fascinating review published last month in Perspectives on Psychological Science: Wilson, Gosling and Graham (2012) A review of Facebook research in the social sciences. These authors review a set of 412 (!) studies that have been published, all since Facebook was launched in 2004. One of the striking figures in their review is this one, which highlights both the meteoric increase in Facebook users (currently over 800 million) and the parallel growth in research papers which have used Facebook as a means to gather data:

Figure 1 from Wilson, Gosling and Graham (2012). People like using Facebook, and researchers apparently *really* like people who like using Facebook.

The 412 research reports were divided into five broad-ish categories, in terms of their aims:

1. Who is using Facebook?
2. Why do people use Facebook?
3. How are people presenting themselves on Facebook?
4. How is Facebook affecting relationships among groups and individuals?
5. Why are people disclosing information on Facebook despite potential risks?

The authors suggest that, as well as just being a descriptive characterisation of the literature, these five central questions might serve as a common framework for future research in other online social networks, especially research which seeks to compare patterns of usage across two or more networks. Seems reasonable.

Also of interest (to me, anyway) is Appendix B which details the major data collection methods used by the studies, and also discusses some ethical considerations. It notes that some researchers have built custom applications for Facebook in order to collect data, but that these applications are not always successful in attracting a large user-base, i.e. some ‘go viral’ and some do not. This seems like an opportunity to do some interesting ‘meta’-research; a study of which research-driven applications are successful, and which aren’t!

Online social networks are an important part of many people’s social lives nowadays, and it seems unlikely that their influence has even come close to peaking yet; we can only expect that take-up and usage of these social tools will carry on increasing (and perhaps even accelerating) for some time. It’s good to see that social scientists have embraced these new ways that we all interact and are making serious efforts to describe and evaluate them.

TTFN.

The effects of hardware, software, and operating system on brain imaging results

A recent paper (Gronenschild et al., 2012) has caused a modicum of concern amongst neuroimaging researchers. The paper documents a set of results based on analysis of anatomical MRI images using a popular free software tool called FreeSurfer, and essentially reports that there are (sometimes quite substantial) differences in the results it produces, depending on the exact version of the software used, and on whether the analyses were carried out on a Mac (running OS X) or a Hewlett Packard PC (running Linux). In fact, even the exact version of OS X on the Mac systems was shown to be important in replicating results precisely.

Figure 3 of Gronenschild et al. (2012) showing the effect of different versions of FreeSurfer on obtained grey-matter volume results. Percentage scale at the top, p-values on the bottom.

The fact that results differ from one version of FreeSurfer to another is perhaps not so surprising – after all, we expect newer versions of software to be ‘improved’ in important ways; otherwise, what would be the point in releasing them? However, the fact that results differ between operating systems is a little more worrying – in theory, any operating system capable of running the software should produce the same result. The authors’ recommendations are that 1) researchers should not switch from one version/operating system/platform to another in the middle of a research project, and 2) when reporting results, the software version numbers and the workstation/OS used should all be documented. This seems broadly sensible.
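
To make recommendation 2) a bit more concrete, here’s a minimal Python sketch of one way you might capture the environment alongside your results; it uses only the standard library, and the FreeSurfer version string is just a hypothetical example of the kind of thing you’d record:

```python
# A minimal sketch: capture the platform and tool versions alongside
# the results. Standard library only; the FreeSurfer version string
# is a hypothetical example of the kind of detail worth recording.
import json
import platform
import sys

provenance = {
    'os': platform.platform(),                # OS name, version, kernel
    'machine': platform.machine(),            # CPU architecture
    'python': sys.version.split()[0],
    'analysis_software': 'FreeSurfer 5.1.0',  # whatever you actually ran
}

# Saved next to the data, this makes the environment trivial to report.
with open('analysis_provenance.json', 'w') as f:
    json.dump(provenance, f, indent=2)
```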

It got me thinking about neuroimaging software more generally as well though. In general, people don’t do detailed evaluations of software of the kind reported by Gronenschild et al. (2012).  As an enthusiastic user of several fMRI-related packages (I’m currently using SPM, FSL and BrainVoyager, all on different projects) I’ve often wondered what the real differences were between them, in terms of the results they produce. Given how many people around the world use brain imaging software, you might think that some detailed evaluations would be floating around, but in fact there are very few.

I think there are several reasons for this:

1. It’s (perhaps understandably) regarded as a waste of time. After all, we (meaning researchers who use this software) are generally more interested in how the brain works than in how software works. Neuroimaging is difficult and time-consuming, and we all need to publish papers to survive – it makes more sense to spend our time on ‘real’ brain-related research.

2. Most people have one (or at most two) pieces of software that they like to use for neuroimaging, and they stick with it; I’m somewhat unusual in this respect. The fact that most people use just one package more-or-less exclusively means there’s a dearth of people who actually have the skills necessary to do cross-evaluation of packages. Again, this is understandable – why take the time to learn a new system, if you’re happy with the one you’re using?

3. The differences between the packages make precise comparison of end-results difficult. Even though all the packages use an application of the General Linear Model for basic analysis, other differences in pre-processing conceivably play a role. For instance, FSL handles the spatial transformation of functional data somewhat differently to other packages.

Having said that, there have been a few papers which have tried to do this kind of evaluation. Two examples are here (on motion correction) and here (on segmentation). Another somewhat instructive paper is this one, which summarises the results of a functional-imaging analysis contest held as part of the Human Brain Mapping meeting in Toronto in 2005; developers of popular neuroimaging software were all given the same set of data and asked to analyse it as best they could. Interesting stuff, but as the contestants all used somewhat different methods to get the most out of the data, it’s hard to draw direct comparisons.

If there’s a moral to this story, it’s that (as the recent Gronenschild et al. paper demonstrates) we need to pay close attention to this kind of thing. As responsible researchers we cannot simply assume our results will be replicable with different hardware and software, and detailed reporting of not just the analysis procedures, but also the tools used to achieve the results seems a simple and robust way of at least acknowledging the issue and enabling more precise replicability. Actually solving the issues involved is a substantially more difficult problem, and may be a job for future generations of researchers and developers.

See also:
My previous posts on comparisons of different fMRI software: here, here and here.
Neuroskeptic has also written a short piece on the recent paper mentioned above.

TTFN.

Guest post by Hayley Thair: A student’s perspective on learning to program

Hayley, hard at work on her programming project.

A deeply exciting day for this blogger today, as I put up my first guest post. After writing my earlier piece on why (psychology) students should learn to code, I was interested in getting a current student’s perspective on the topic, and the delightful Hayley Thair was kind enough to write me a piece about her experience. I first met Hayley while she was working at the Science Museum on this project, and she subsequently moved to Bangor to pursue an MSc in Clinical Neuropsychology. I hope this will help to further convince any other students who might be reading that it really is worthwhile putting a bit of time into learning a bit of coding. Here, then, is Hayley’s account of learning to program and what she feels she’s gained from it:

Something else to do with your PC…

Programming – yet another excuse I now have to spend even more time at my computer. Something that initially sounded rather scary, in an “I have no idea what I’m doing” kind of way, has become something incredibly useful that I am now confident in. I am currently completing my Masters in Clinical Neuropsychology and opted for a module called “practical programming”. Knowing that I have a huge research thesis to run and write up, I figured that knowing something about how to program would be invaluable! Unfortunately my thesis requires the use of Matlab, and the module taught me Visual Basic. However, I soon realised the fundamentals are the same, and even if I couldn’t write Matlab code alone, I could certainly understand what was going on with the assistance of my supervisor.

I saw recently on the news that even primary school children are learning to code… this makes me hesitant to admit it was tricky to start with! However, once I learnt the basics I could design anything I wanted. Being short on ideas and running out of time to complete my mini-project, I only managed to come up with a times-table game. It’s actually pretty cool, in a nerdy sort of way! I had two numbers being randomly generated to create the questions; a timer to make it more interesting; a scoring system so you can improve; and a fat robin as the loveable character to save!
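
For the curious, the bare bones of the game look something like this – sketched here in Python rather than Visual Basic, much simplified, and minus the fat robin:

```python
# A much-simplified sketch of the idea: random questions, a running
# score, and a timed round (the clock is only checked between questions).
import random
import time

score = 0
start = time.time()
while time.time() - start < 30:              # a 30-second round
    a, b = random.randint(2, 12), random.randint(2, 12)
    answer = input(f'What is {a} x {b}? ')
    if answer.strip() == str(a * b):
        score += 1
        print('Correct!')
    else:
        print(f'Nope - it was {a * b}.')
print(f'Time up! You scored {score}.')
```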

Although what I made was simple, I felt a great sense of accomplishment in that I made and coded something from scratch without any help. This was a much greater feeling than anything I had at school in IT lessons. These, as far as I recall, were essentially “today let’s open Word.” I honestly can’t recall where I learnt my basic PC knowledge from, but it certainly wasn’t IT lessons at school. I think these lessons would be more engaging and fun if you were making something, like with programming. Being able to create something that’s yours and personalised would be far more entertaining than just being shown how to use something.

Either way, I’m glad I took the module as so many research assistant jobs ask that you be able to program. I think this puts me ahead of other applicants just because I’ll be able to design experiments and run them independently without needing someone else to come in and build my behavioural task for me.

What surprised me about programming was that, even though it was tricky at first, it suddenly became easy once you got the basics. Even if a piece of code doesn’t run (any programmer will be all too familiar with error messages!) you can keep trying to fix it and think of another way to word it. Essentially it’s all logic. You think about what you want a button to do, and about how to break that down into simple step-by-step instructions, and weyhey – it works! (Sometimes…) I like to think all those years of playing logic-based games like Myst have finally proved useful! For people who enjoy learning something new and constructing things, it’s definitely worth a go. I didn’t find a textbook useful at all, but rather preferred viewing YouTube tutorials for ideas once I had the basics. Visual Basic is free to download online and it’s easy to have a play around with, as everything is clearly labelled, so I would suggest VB is a good starting point.

If you need another incentive, I’ve learnt to code to a confident level in just 10 lessons. It doesn’t take long to pick it up, and it’s now an invaluable skill that I can mention at interviews that (luckily for me) not everyone has!

Video analysis software – Tracker

I just came across a gosh-darn drop-dead cool (and free!) piece of software that I just had to write a quick post on. It’s called Tracker, it’s cross-platform, open-source and freely available here.

In a nutshell, it’s designed for analysis of videos, and can do various things, like track the motion of an object across frames (yielding position, velocity and acceleration data) and generate dynamic RGB colour profiles. Very cool. As an example of the kinds of things it can do, see this post on Wired.com where a physicist uses it to analyse the speed of blaster bolts in Star Wars: Episode IV. Super-geeky I know, but I love it.

An example of some motion analyses conducted using Tracker

Whenever I see a piece of software like this I immediately think about what I could use it for in psychology/neuroscience. In this case, I thought about using it for kinematic analysis – that is, tracking the position/velocity/acceleration of the hand as it performs movements or manipulates objects. Another great application would be the analysis of movie stimuli for use in fMRI experiments. Complex and dynamic movies could be analysed in terms of the movement (or colour) stimuli they contain, and measures produced which represent movement across time. Sub-sampled versions of these measures could then be entered into an fMRI GLM analysis as parametric regressors to examine how the visual cortex responds; with careful selection of stimuli, this could be quite a neat and interesting experiment.
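
To sketch the idea, here’s roughly what extracting such a motion measure and sub-sampling it to one value per scan might look like in Python; it assumes the opencv-python and numpy packages, and the file name and TR value are just placeholders:

```python
# A rough sketch: mean absolute frame-to-frame difference as a crude
# 'motion energy' measure, averaged within each TR to give one
# regressor value per acquired fMRI volume.
import cv2
import numpy as np

cap = cv2.VideoCapture('stimulus_movie.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
assert fps > 0, 'could not read frame rate from the movie file'

motion, prev = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        motion.append(np.abs(gray - prev).mean())
    prev = gray
cap.release()

tr = 2.0                                  # repetition time in seconds
frames_per_tr = int(round(fps * tr))
motion = np.array(motion)
n_vols = len(motion) // frames_per_tr
regressor = motion[:n_vols * frames_per_tr].reshape(n_vols, -1).mean(axis=1)
print(regressor)                          # one value per volume
```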

Not sure I’ll ever actually need to use it in my own experiments, but it looks like a really neat piece of software which could be a good solution for somebody with a relevant problem.

TTFN.

A view of electronic learning from 1910.

A fantastic image from 1910, drawn by a French postcard artist named Villemard, imagining the future of education in the year 2000. Wonderful stuff.

(Originally seen in this post on Wired.com on electronic textbooks.)

Online psychology experiments – some useful links

Akira O’Connor has just posted a fascinating piece about his recent trial of an online experiment, and some of the data he’s gathered so far on the people who’ve completed it. His earlier post about his experience of actually programming the experiment is definitely worth a read as well.

He also posted a few links to other online experiment sites which I wasn’t aware of, and that I thought were worth reproducing here. First up is this page from the Hanover College Psychology Department, which lists hundreds of web-based experiments you can take part in as a subject. Right at the bottom of the page is a section of links titled “Other Resources and List for Psychological Research on the Web”, which is a great set of resources in this area for researchers.

Next up is a UK-based site: Online Psychology Research, maintained by Dr Kathryn Gardner of the University of Central Lancashire. This is a similar kind of site, which lists links to current experiments in which one can participate, organised neatly into categories. The Online Research Resources page on this site is also a fantastic set of links to lots of relevant material.

Finally, this page on the Social Psychology Network also lists current online experiments, again, categorised by type/subject area.

I highly recommend having a browse through the experiments on some of these sites and completing the ones you’re interested in – you’ll be contributing to others’ research and you might learn something new as well!

TTFN.

FaceGen – 3D modelling software for faces

I’ve recently been playing with a bit of software called FaceGen, and it’s basically awesome. As you might expect, it’s a piece of 3D modelling software specialised for producing human face stimuli. You can either start off with a randomly-generated example, or upload your own (or someone else’s) picture, which the software can then extrapolate and model in 3D. The 3D model can then be modified to your heart’s content along various parameters – age, sex, race, emotional expression etc. It really is an awesomely powerful piece of software, and pretty easy to use too, with the interface mostly based around a set of sliders for manipulating the various dimensions of the stimulus.

Here’s a brief video which gives an overview of some of the features:

The full version of FaceGen costs $299, but there is a free demo version that you can play with, available here.

Face stimuli have always posed a problem for researchers, and historically there have basically been two choices. The first is to use a standardised face-set such as the Ekman faces:

These stimuli have the benefit of being naturalistic, i.e. they are of real people, but they also have several significant drawbacks. These face sets are often idiosyncratic in various ways and may not include all the facial expressions you might need for all the pictured subjects. In addition, they’re often not well-balanced in terms of race, age, sex etc. In particular, the ‘classic’ Ekman face set is looking very dated and, frankly, pretty ghastly these days. Another more recent example would be the NimStim face set.

The second option is to use schematic or computer-generated faces such as these from this paper:

These have the big benefit of precise experimental controllability, but the obvious drawback that they aren’t very naturalistic at all. 

The FaceGen software seems to offer the best of both worlds, in that you can create an almost infinite variety of precisely-produced images and easily control for confounding factors like age and race, while at the same time the stimuli it produces are pretty naturalistic – particularly so, if you import ‘real’ pictures and then modify them. I’m currently setting up an experiment which will use some face stimuli and I’m almost certainly going to use stimuli produced using FaceGen.

For more on faces in psychology research see my previous post on face-morphing. TTFN!