I just came across a gosh-darn drop-dead cool (and free!) piece of software that I just had to write a quick post on. It’s called Tracker, it’s cross-platform, open-source and freely available here.
In a nutshell, it’s designed for analysis of videos, and can do various things, like track the motion of an object across frames (yielding position, velocity and acceleration data) and generate dynamic RGB colour profiles. Very cool. As an example of the kinds of things it can do, see this post on Wired.com where a physicist uses it to analyse the speed of blaster bolts in Star Wars: Episode IV. Super-geeky I know, but I love it.
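The position-to-velocity-to-acceleration step that tools like this perform is just numerical differentiation of the tracked coordinates. As a flavour of what that involves, here's a minimal sketch (the data are made up purely for illustration, a 30 fps video of an object under constant 1.2 m/s² acceleration):

```python
import numpy as np

# Hypothetical data: x-positions (metres) of a tracked object, one
# sample per frame of a 30 fps video, following a constant
# acceleration of 1.2 m/s^2 purely for illustration.
fps = 30.0
dt = 1.0 / fps
t = np.arange(10) * dt
x = 0.5 * 1.2 * t ** 2

# Velocity and acceleration by numerical differentiation;
# np.gradient uses central differences in the interior of the array.
v = np.gradient(x, dt)
a = np.gradient(v, dt)
```

Away from the endpoints (where one-sided differences are used), this recovers the velocity and the constant acceleration essentially exactly.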
Whenever I see a piece of software like this I immediately think about what I could use it for in psychology/neuroscience. In this case, I immediately thought about using it for kinematic analysis – that is, tracking the position/velocity/acceleration of the hand as it performs movements or manipulates objects. Another great application would be for analysis of movie stimuli for use in fMRI experiments. Complex and dynamic movies could be analysed in terms of the movement (or colour) stimuli they contain, producing measures that represent motion over time. Sub-sampled versions of these measures could then be entered into an fMRI GLM analysis as parametric regressors to examine how the visual cortex responds; with careful selection of stimuli, this could be quite a neat and interesting experiment.
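To make the parametric-regressor idea concrete, here's a rough sketch of the pipeline: a (here randomly generated, hypothetical) frame-by-frame motion-energy measure is averaged down to one value per TR, convolved with a canonical double-gamma haemodynamic response, and mean-centred, which is the usual treatment for a parametric modulator. The specific parameter values are assumptions for illustration only:

```python
import numpy as np
from math import gamma

def hrf(t):
    # Canonical double-gamma haemodynamic response (SPM-style shape):
    # a gamma function peaking ~5 s minus a scaled undershoot at ~15 s.
    return (t ** 5 * np.exp(-t) / gamma(6)
            - (t ** 15 * np.exp(-t) / gamma(16)) / 6.0)

# Hypothetical motion-energy timecourse extracted from a movie:
# one value per video frame, 25 fps, 2 s TR, 100 fMRI volumes.
fps, tr, n_vols = 25.0, 2.0, 100
frames = np.random.default_rng(0).random(int(n_vols * tr * fps))

# Sub-sample to one value per TR by averaging the frames in each volume.
per_vol = frames.reshape(n_vols, -1).mean(axis=1)

# Convolve with the HRF (sampled at the TR over a 32 s window) to build
# the regressor, then mean-centre it as is usual for parametric modulators.
t = np.arange(0, 32, tr)
reg = np.convolve(per_vol, hrf(t))[:n_vols]
reg -= reg.mean()
```

In practice the motion-energy values would of course come from the video-analysis software rather than a random-number generator, and the resulting regressor would go into the design matrix alongside the main movie-viewing regressor.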
Not sure I’ll ever actually need to use it in my own experiments, but it looks like a really neat piece of software which could be a good solution for somebody with a relevant problem.
For a while now I’ve been meaning to write an update to my previous post on good psychology-related iPhone/iPad apps, but I just came across one app that’s too good not to share immediately. It’s a free app called RFSpotter, written by Nicolas Cottaris of the IRCS and Dept. of Psychology at The University of Pennsylvania, and it generates simple visual psychophysics stimuli for use in mapping receptive fields and their tuning properties. It has a very slick interface, where stimulus size, position and rotation can all be controlled by the usual iOS finger-gestures (e.g. pinch-to-zoom to change stimulus size, two-finger rotation for orientation), with many other parameters editable through a pop-up menu. It will do gratings, patches, dot-clouds, coloured stimuli – all kinds of things! Very, very neat indeed.
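For anyone curious what goes into stimuli like these under the hood, a basic sinusoidal luminance grating – the workhorse stimulus for mapping orientation and spatial-frequency tuning – takes only a few lines of numpy. The size, frequency and orientation values here are arbitrary choices for the sketch:

```python
import numpy as np

# Parameters (arbitrary, for illustration): image size in pixels,
# spatial frequency in cycles per image, orientation in degrees.
size, sf, ori = 256, 8, np.deg2rad(45)

# Oriented spatial ramp: project each pixel's (x, y) coordinate onto
# the grating's orientation axis.
y, x = np.mgrid[0:size, 0:size] / size
ramp = x * np.cos(ori) + y * np.sin(ori)

# Sinusoidal modulation, rescaled to luminance values in [0, 1].
grating = 0.5 + 0.5 * np.sin(2 * np.pi * sf * ramp)
```

An app like RFSpotter is essentially doing this sort of thing in real time, with the parameters driven by touch gestures instead of hard-coded values.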
The iPad really has the potential to be a serious platform for research, and it’s tools like this that will make it possible to do some really interesting work with it – here’s hoping we see many more specialist, research-oriented apps like this in the future!
As an example of the ways in which technology and psychology have developed together recently, I thought it would be fun to do a little case-study of a particular area of research which has benefitted from advances in computer software over recent years. Rather than talk about the very technical disciplines like brain imaging (which have of course advanced enormously recently) I thought it would be more fun to concentrate on an area of relatively ‘pure’ psychology, and one of the most important and fundamental cognitive processes, present pretty much from birth: face perception.
In November 1991 Michael Jackson released the single ‘Black or White’, the first to be released from his eighth album ‘Dangerous’. The single is of some significance as it marked the beginning of Jackson’s descent from the firmament of music stardom into the spiral of musical mediocrity and personal weirdness which only ended with his death in 2009, but for the purposes of the present discussion it is interesting because of part of its accompanying video. Towards the end of the video a series of people of both sexes and of various ethnic groups are shown singing along with the song, and the images of their faces morph into each other in series: