Category Archives: Commentary
I’ve just come across two outstanding tutorial videos over on xiph.org – an open-source organisation dedicated to developing multimedia protocols and tools. The first one covers the fundamental principles of digital sampling for audio and video, and discusses sampling rates, bit depth and lots of other fun stuff – if you’ve ever wondered what a 16-bit, 128kbps mp3 actually is, this is for you.
The second one focusses on audio, and moves on to some more advanced topics about how audio behaves in the real world.
They’re both fairly long (30 mins and 23 mins respectively) but well worth watching. If you’re just getting started with digital audio and/or video editing and production, these could be really useful.
So-called brain-training tools seem to have exploded in the last few years; one estimate puts it at a $6 billion market by 2020. It’s clearly become a major industry, but what’s less clear is exactly what it does, and whether it even works. The typical procedure seems to be to engage in short games, puzzles and working-memory-type tasks, which are supposed to produce long-term changes in attention, engagement and general fluid intelligence.
Whether this is actually true or not is a matter of some debate. I’m not a specialist in this area, but the received wisdom appears to be that training on specific tasks does improve performance – on those tasks. There seems to be little generalisation to other tasks, and even less to domain-general abilities like executive processing or working memory. A high-profile study by Adrian Owen and colleagues (2010) reported exactly that: benefits in the trained tasks themselves, but little (if any) general benefit. An earlier study, published in PNAS in 2008, does seem to contradict this, reporting an increase in fluid intelligence as a result of working-memory training – not only that, but the authors claim a dose-dependent effect; that is, more training = more increase in intelligence. The gains in that study were relatively small, though, and it should also be noted that the control group apparently increased their intelligence somewhat over the same period as the experimental group – curious. There are lots of other studies around, but many have issues: small samples, poor controls, and so on.
So, the jury’s still very much out (though personally, I’m on the side of the skeptics on this issue). That hasn’t stopped a bewildering array of businesses starting up, making all kinds of wild claims and playing on the fears of educators and parents that, if they don’t provide these kinds of programs, their kids will slip behind the rest. All these companies have glossy, highly-polished, ethnically-balanced websites with testimonials, and lots of links to science-y looking videos that present their program as the only scientifically-proven method of increasing your child’s intelligence. A brief browse through some of these companies’ websites reveals that they range from the absurd (QDreams! Success at the speed of thought!) to the very, very slick indeed (e.g. Lumosity). Other examples are Cogmed (which seems to be backed by Pearson publishers and, to its credit, links to a list of semi-relevant research papers), and the very simplistic PowerBrain Education – which seems to involve getting kids to do some odd-looking arm-shaking exercises. There are literally hundreds of these companies. Some of them even seem to cater to businesses that want their employees to do these ‘exercises’.
LearningRX definitely falls into the slick category. According to this New York Times article it has 83 physical store-front franchises across the USA, where people can come to pay $80-90 an hour for one-on-one training, marketed to parents as an alternative to traditional tutoring. A quick glance at their Scientific Advisory Board is pretty revealing – I count only one (clinical) psychologist, and a grab-bag of other professionals: mostly teachers (qualified to Master’s level), plus an optometrist, a chemical engineer and an audiologist. Not a single neuroscientist, and only a few qualified at doctorate level.
I’m not trying to be unnecessarily snobby about their qualifications here; I’m suggesting that the claims they make for their brain-training programs (literally: it will change your child’s life) are big ones, and we might reasonably expect the people who developed them to be qualified in some area of brain science. If the programs really, clearly worked, then of course it wouldn’t matter who developed them or what their qualifications were, but there’s definitely reasonable doubt (if not outright disbelief) over their effectiveness.
And this is the important point. People are spending money on this - big money. Whether that’s a hard-pressed family struggling to find an extra $90 a week for their kid to have a session at one of LearningRX’s centres, or an education board deciding to institute one of these programs in its schools. Education budgets are tight enough, but these kinds of programs are being heavily invested in, and I can see why – they promise to make kids smarter, better-behaved, more attentive, and all you have to do is sit them in front of a special computer game for an hour a week. That must seem like a pretty attractive proposition for teachers. Unfortunately, if they really don’t work, then that money could be better spent on books, or musical instruments, or something else which might genuinely enrich the kids’ lives.
There’s a long and venerable history of unscrupulous people making money from pseudo-neuroscience – back in the 19th Century, phrenology was described as “the science of picking someone’s pocket, through their skull.” I’d like to believe that some of these companies have a solid product that actually makes a difference, but they all seem to have the whiff of snake oil about them. For now I’m very much of the opinion that you’d probably be better off learning the piano, or Japanese, or even playing the latest Call of Duty. If you were really ambitious you could even try to get your kid to (Heaven forfend!) read the odd book now and again.
I put that last sentence that mentions Call of Duty in there as a bit of flippancy, but I’ve since been informed (by Micah Allen on Twitter) of some evidence that playing action video games can indeed improve some cognitive processes such as the accuracy of visuo-spatial attention and reaction times. These results mostly originate from a single lab and so are in need of replication, but still – interesting. (I still reckon you’re probably better off with a good book though.)
Forensic and criminal psychology are somewhat odd disciplines; they sit at the cross-roads between abnormal psychology, law, criminology, and sociology. Students seem to love forensic psychology courses, and the number of books, movies, and TV shows which feature psychologists cooperating with police (usually in some kind of offender-profiling manner) attests to the fascination that the general public have for it too. Within hours of the Newtown, CT shooting spree last December, ‘expert’ psychologists were being recruited by the news media to deliver soundbites attesting to the probable mental state of the perpetrator. Whether this kind of armchair diagnosis is appropriate or useful (hint: it’s really not), it’s a testament to the acceptance of such ideas within society at large.
Back in the late 80s and early 90s there were two opposing approaches to offender profiling, rather neatly personified by American and British practitioners. A ‘top-down’ (or deductive) approach was developed by the FBI Behavioral Sciences Unit, and involved interviewing convicted offenders, attempting to derive (somewhat subjective) general principles in order to ‘think like a criminal’. By contrast, the British approach (developed principally by David Canter and colleagues) took a much more ‘bottom-up’ (or inductive) approach focused on empirical research, and more precisely quantifiable aspects of criminal behaviour.
Interestingly, the latter approach was ideally suited to standardised analysis methods, and duly spawned a number of computer-based tools. The most prominent among them was a spatial/geographical profiling tool, developed by Canter’s Centre for Investigative Psychology, and named ‘Dragnet’. The idea behind it was relatively simple: the most likely location of the residence of the perpetrator of a number of similar crimes can be deduced from the locations of the crimes themselves. For example, a burglar doesn’t tend to rob his next-door neighbours, nor does he tend to travel too far from familiar locations to ply his trade – he commits burglaries at a medium distance from home, and generally at roughly the same distance each time. Also, general caution might prevent him from returning to exactly the same location twice, so an idealised pattern of burglary might consist of a central point (the perpetrator’s home) with a number of crime locations forming the points of a circle around it. For an investigator the location of the central point isn’t, of course, known a priori, but it can be estimated from the size and shape of the circle.
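As a toy illustration of the circle idea (this is not the actual Dragnet algorithm, which uses far more sophisticated distance-decay functions – the function name and the numbers here are purely made up), the centroid of the crime locations gives a first guess at the anchor point, and the mean distance from it gives the radius of the idealised circle:

```python
import math

def estimate_anchor(crimes):
    """Guess a likely 'anchor point' (e.g. the offender's home) as the
    centroid of the crime locations, on the circle-theory assumption
    that crimes ring a central base at a roughly constant distance."""
    n = len(crimes)
    cx = sum(x for x, y in crimes) / n
    cy = sum(y for x, y in crimes) / n
    # Mean distance from the centroid = radius of the idealised circle
    radius = sum(math.dist((cx, cy), c) for c in crimes) / n
    return (cx, cy), radius

# Four burglaries at the corners of a square centred on (5, 5):
centre, radius = estimate_anchor([(2, 2), (8, 2), (8, 8), (2, 8)])
# centre is (5.0, 5.0); radius is about 4.24 (i.e. sqrt(18))
```

In the idealised case above the estimate recovers the centre exactly; real crime series are messier, which is exactly why the modern tools layer statistical models on top of this basic insight.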
In practice, of course, it’s never this neat, but modern techniques incorporate various other features (terrain, social geography, etc.) to build statistical models, and have met with some success. Ex-police officer Kim Rossmo has been the leading figure in geographic profiling in recent years, and founded the Center for Geospatial Intelligence and Investigation at Texas State University.
Software like this seems like it should be useful, but it has largely failed to deliver on its promise. At one point it was thought that the future police service would incorporate these tools (and others) routinely in order to solve, and perhaps even predict, crimes. Given the sheer amount and richness of data available on the general populace (through online search histories, social networking sites, insurance company/credit card databases, CCTV images, mobile-phone histories, licence-plate-reading traffic cameras, etc.) and on urban environments (e.g. Google Maps), you might expect that crime-solving software would by now be highly developed, and make use of all these sources of information. However, development seems to have largely stalled in recent years; the Centre for Investigative Psychology’s website has clearly not been updated for several years, and it seems no-one has even bothered producing versions of their software for modern operating systems.
Some others seem to be pursuing similar ideas with more modern methods (e.g. this company), yet still we’re nowhere near any kind of system like the (fictional) one portrayed in the TV series ‘Person of Interest‘, which can predict crimes by analysis of CCTV footage and behaviour patterns derived therefrom. Whether or not this will ever be possible, there is certainly relevant data out there, freely accessible to law-enforcement agencies; the issue is building the right kind of data-mining algorithms to make sense of it all – clearly, not a trivial endeavour.
Something that will undoubtedly help is the fairly recent development of pretty sophisticated facial-recognition technology. Crude face-recognition technology is now embedded in most modern digital cameras, can be used for ID verification (i.e. instead of a passcode) to unlock smartphones, and is used for ‘tagging’ pictures on websites like Facebook and Flickr. Researchers have been rapidly refining the techniques, including some very impressive methods of generating interpolated high-resolution images from low-quality sources (this paper describes an impressive ‘face hallucination’ method; PDF here). These advancements, though, address what is essentially a dry problem in computer vision; there’s no real ‘psychology’ involved.
One other ‘growth area’ in criminal/legal psychology over the last few years has been in fMRI lie-detection. Two companies (the stupidly-or-maybe-ingeniously-named No Lie MRI, and Cephos) have been aggressively pushing for their lie-detection procedures to be introduced as admissible evidence in US courts. So far they’ve only had minor success, but frankly, it’s only a matter of time. Most serious commentators (e.g. this bunch of imaging heavy-hitters) still strike an extremely cautious tone on such technologies, but they may be fighting a losing battle.
These two very technical areas aside, the early promise of a systematic, scientific approach to forensic psychology – one that could be instantiated in formal systems – has generally not been fulfilled. I’m not sure if this is because of a lack of investment, expertise or interest, or just because the problem turned out to be substantially harder than people originally supposed. There is an alternative explanation, of course: that governments and law-enforcement agencies have indeed developed sophisticated software that ties together all the major databases of personal information, integrates it with CCTV and traffic-camera footage, and produces robust models of the behaviour of the general public, both as a whole and at an individual level. A conspiracy theorist might suppose that if such a system existed, information about it would have to be suppressed, and that’s the likely reason for the apparent lack of development in this area in recent years. Far-fetched? Maybe.
TTFN, and remember – they’re probably (not?) watching you…
A quick post to point you towards a great website with a lot of really cool content (if you’re into that kind of thing, which if you’re reading this blog, then I assume you probably are… anyway, I digress; I apologise, it was my lab’s Christmas party last night and I’m in a somewhat rambling mood. Anyway, back to the point).
So, the website is called cogsci.nl, and is run by Sebastiaan Mathôt, a post-doc at the University of Aix-Marseille. It’s notable in that it’s the homepage of OpenSesame – a very nice-looking, Python-based graphical experiment builder that I’ve mentioned before on these very pages. There’s a lot of other cool stuff on the site though, including more software (featuring a really cool online tool for instantly creating Gabor patch stimuli), a list of links to stimulus sets, and a selection of really cool optical illusions. It’s well worth spending 20 minutes of your time poking around and seeing what’s there.
I’ll leave you with a video of Sebastiaan demonstrating an experimental program, written in his OpenSesame system, running on a Google Nexus 7 Tablet (using Ubuntu linux as an OS). The future! It’s here!
A quick post to point you to something that looks like a serious case of the funsies. It’s an interactive ebook that’s just been released detailing the adventures of Ned the Neuron – a proper story-book, but with three interactive games built in, all with the aim of teaching kids about basic neuroscience. It’s produced by Kizoom Labs, which was co-founded by Jessica Voytek (one of the developers, along with her husband Brad, of the excellent brainSCANr site).
I wrote a tiny post about PsychoPy a little while ago, and it’s something I’ve been meaning to come back to since then. I’ve recently been tasked with an interesting problem: I need an experimental task for a bunch of undergrads to use in a ‘field study’ – something that they can run on their personal laptops to test people in naturalistic environments (i.e. the participants’ homes). The task is based on a recent paper (Rezlescu et al., 2012) in PLoS One, and involves presenting face stimuli that vary in facial characteristics associated with trustworthiness, in a ‘game’ where the participant plays the role of an investor and has to decide how much money they would invest in each person’s business. I was actually given a version of the experiment programmed (by someone else) in Matlab, using the Psychtoolbox system. However, using this version seemed impractical for a number of reasons. Firstly, Matlab licences are expensive, and getting a licensed copy of Matlab onto every student’s computer would have blown the available budget. Secondly, in my (admittedly limited) experience with Matlab and Psychtoolbox, I’ve always found it to be a little… sensitive. What I mean is that whenever I’ve tried to transfer a (working) program onto another computer, I’ve generally run into trouble: either the timing goes to hell, or a different version of Matlab/Psychtoolbox is needed, or (in the worst cases) the program just crashes and needs debugging all over again. I could foresee that getting this Matlab code working well on every single student’s laptop would be fraught with issues – some students might be using OS X, and others various versions of Windows – and that was definitely going to cause problems.*
Somewhat counterintuitively therefore, I decided that the easiest thing to do was start from scratch and re-create the experiment using something else entirely. Since PsychoPy is a) entirely free, b) cross-platform (meaning it should work on any OS), and c) something I’d been meaning to look at seriously for a while anyway, it seemed like a good idea to try it out.
I’m happy to report it’s generally worked out pretty well. Despite being a complete novice with PsychoPy, and indeed the Python programming language, I managed to knock something reasonably decent together within a few hours. At times it was frustrating, but that’s always the case when programming experiments (at least, it’s always the case for a pretty rubbish programmer like me, anyway).
So, there are two separate modules to PsychoPy – the ‘Builder’ and the ‘Coder’. Since I’m a complete novice with Python, I steered clear of the Coder view, and pretty much used the Builder, which is a really nice graphical interface where experiments can be built up from modules (or ‘routines’) and flow parameters (i.e. ‘loop through X number of trials’) can be added. Here’s a view of the Builder with the main components labelled (clicky for bigness):
At the bottom is the Flow panel, where you add new routines or loops into your program. The large main Routine panel shows a set of tabs (one for each of your routines) where the events that occur in each of the routines can be defined on a timeline-style layout. At the right is a panel containing a list of stimuli (pictures, videos, random-dot-kinematograms, gratings etc.) and response types (keyboard, mouse, rating scales) that can be added to the routines. Once a stimulus or response is added to a routine, a properties box pops up which allows you to modify basic (e.g. position, size, and colour of text) and some advanced (through the ‘modify everything’ field in some of the dialog boxes) characteristics.
It seems like it would be perfectly possible to build some basic kinds of experiments (e.g. a Stroop task) through the Builder without ever having to look at any Python code. However, one of the really powerful features of the Builder interface is the ability to insert custom code snippets (using the ‘code’ component). These can be set to execute at the beginning or end of the experiment or routine, or on every frame. This aspect of the Builder really extends its capabilities and makes it a much more flexible, general-purpose tool. Even though I’m not that familiar with Python syntax, I was fairly easily able to put together some if/else logic, incorporating random number generation, that calculated the amount returned to the investor on each trial, and to use those variables to display post-trial feedback. Clearly, though, a bit of familiarity with the basics of programming logic is important for using these functions.
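For a flavour of what such a code-component snippet might look like, here’s a hypothetical sketch – this is not the actual code from my experiment; the function name, repayment probabilities and pay-off rules are all invented for illustration:

```python
import random

def amount_returned(invested, trustworthy_face):
    """One trial of a toy investment game: the investment is tripled,
    then the (fictional) partner either splits the pot or keeps it."""
    pot = invested * 3
    repay_prob = 0.8 if trustworthy_face else 0.4  # made-up probabilities
    if random.random() < repay_prob:
        return pot / 2   # fair split returned to the investor
    return 0             # partner keeps the lot

random.seed(1)
print(amount_returned(10, True))  # prints 15.0 with this seed
```

In the real Builder you’d paste only the body of logic into the code component and assign the result to a variable that a feedback text component then displays.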
This brings me to the Coder view – at any point the ‘Compile Script’ button in the toolbar can be pushed, which opens up the Coder view and displays a script derived from the current Builder view. The experiment can then be run either from the Builder or the Coder. I have to admit, I didn’t quite understand the relationship between the two at first – I was under the impression that these were two views of the same set of underlying data, and that changes in either one would be reflected in the other (a bit like the dual-view mode of HTML editors like Dreamweaver), but it turns out that’s not the case – and, once I thought about it, keeping a graphical view in sync with a full programming language like Python would be very difficult to implement. So, a script can be generated from the Builder, and the experiment can then be run from that script; however, changes made to the script cannot be propagated back to the Builder view. This means that unless you’re a serious Python ninja, you’re probably going to be doing most of the work in the Builder view. The Coder view is really good for debugging and working out how things fit together though – Python is (rightly) regarded as one of the most easily human-readable languages, and if you’ve got a bit of experience with almost any other language, you shouldn’t find it too much of a problem to work out what’s going on.
Another nice feature is the ability of the ‘loop’ functions to read in the data they need for each repeat of the loop (e.g. condition codes, text to be presented, picture filenames, etc.) from a plain-text (comma-separated) file or Excel sheet. Column headers in the input file become variables in the program, and can then be referenced from other components. Data is also saved by default in the same two file formats – .csv and .xls. Finally, the PsychoPy installation comes with a set of nine pre-built demo experiments, which range from the basic (Stroop) to more advanced ones (BART) that involve a few custom code elements.
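The headers-become-variables mechanism is essentially what Python’s csv module does with a header row; a minimal sketch (the column names here are invented, not taken from a real PsychoPy demo):

```python
import csv
import io

# A miniature conditions 'file', like the ones the Builder's loop reads:
conditions = """faceImage,trustLevel,corrAns
face01.png,high,left
face02.png,low,right
"""

# Each column header becomes a named field, available on every
# repeat of the loop -- exactly the behaviour described above.
trials = list(csv.DictReader(io.StringIO(conditions)))
for trial in trials:
    print(trial["faceImage"], trial["trustLevel"], trial["corrAns"])
```

This is also why it pays to choose clear, code-friendly column names in your conditions file: they end up as the variable names you reference elsewhere.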
There are a couple of features it doesn’t have which I think would be really useful – in particular, in the Builder view it would be great if individual components could be copied and pasted between different routines. I found myself adding a number of text elements, and it was a bit laborious to go through them all and change the font, size, position, etc. on each one so they were all the same. Of course, ‘proper’ programmers working in the Coder view would be able to copy/paste these things very easily…
So, I like PsychoPy; I really do. I liked it even more when I transferred my program (written on a MacBook Air running OS X 10.8) onto a creaky old Windows XP desktop and it ran absolutely perfectly, first time. Amazing! I’m having a little bit of trouble getting it running well on a Windows Vista laptop (the program runs slowly and shows some odd-looking artefacts on some of the pictures), but I’m pretty sure that’s an issue with the graphics-card drivers and can be fixed relatively easily. Of course, Vista sucks – that could be the reason too.
So, I’d recommend PsychoPy to pretty much anybody – the Builder view makes it easy for novices to get started, and the code components and Coder view means it should keep seasoned code-warriors happy too. Plus, the holy trinity of being totally free, open-source, and cross-platform are huge advantages. I will definitely be using it again in future projects, and recommending it to students who want to learn this kind of thing.
Happy experimenting! TTFN.
*I don’t mean to unduly knock Matlab and/or Psychtoolbox – they’re both fantastically powerful and useful for some applications.
I got a serious question for you: What the fuck are you doing? This is not shit for you to be messin’ with. Are you ready to hear something? I want you to see if this sounds familiar: any time you try a decent crime, you got fifty ways you’re gonna fuck up. If you think of twenty-five of them, then you’re a genius… and you ain’t no genius.
Body Heat (1981, Lawrence Kasdan)
To consult the statistician after an experiment is finished is often merely to ask him to conduct a post-mortem examination. He can perhaps say what the experiment died of.
R.A. Fisher (1938)
Doing a pilot run of a new psychology experiment is vital. No matter how well you think you’ve designed and programmed your task, there are (almost) always things that you didn’t think of. Going ahead and spending a lot of time and effort collecting a set of data without running a proper pilot is (potentially) a recipe for disaster. Several times I’ve seen data-sets where there was some subtle issue with the data logging, or the counter-balancing, or something else, which meant that the results were, at best, compromised, and at worst completely useless.
All of the resultant suffering, agony, and sobbing could have been avoided by running a pilot study in the right way. It’s not sufficient just to run through the experimental program a couple of times; a comprehensive test of an experiment has to include a test of the analysis as well. This is particularly true of any experiment involving methods like fMRI/MEG/EEG, where a poor design can lead to a data-set that’s essentially uninterpretable, or perhaps even un-analysable. You may think you’ve logged all the variables you’ll need for the analysis, and that your design is a work of art, but you can’t be absolutely sure unless you actually test the analysis.
This might seem like over-kill, or a waste of effort, however, you’re going to have to design your analysis at some point anyway, so why not do it at the beginning? Analyse your pilot data in exactly the way you’re planning on analysing your main data, save the details (using SPSS syntax, R code, SPM batch jobs – or whatever you’re using) and when you have your ‘proper’ data set, all you’ll (in theory) have to do is plug it in to your existing analysis setup.
These are the steps I normally go through when getting a new experiment up and running. Not all will be appropriate for all experiments, your mileage may vary etc. etc.
1. Test the stimulus program. Run through it a couple of times yourself, and get a friend/colleague to do it once too, and ask for feedback. Make sure it looks like it’s doing what you think it should be doing.
2. Check the timing of the stimulus program. This is almost always essential for an fMRI experiment, but may not be desperately important for some kinds of behavioural studies. Run through it with a stopwatch (the stopwatch function on your phone is probably accurate enough). If you’re doing any kind of experiment involving rapid presentation of stimuli (visual masking, RSVP paradigms) you’ll want to do some more extensive testing to make sure your stimuli are being presented in the way that you think – this might involve plugging a light-sensitive diode into an oscilloscope, sticking it to your monitor with a bit of blu-tack, and measuring the waveforms produced by your stimuli. For fMRI experiments the timing is critical. Even though the Haemodynamic Response Function (HRF) is slow (and somewhat variable), you’re almost always fighting to pull enough signal out of the noise, so why introduce more? A cumulative error of only a few tens of milliseconds per trial can mean that your experiment is several seconds out by the end of a 10-minute scan – this means that your model regressors will be way off, and your results will likely suck.*
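The cumulative-error arithmetic in step 2 is easy to make concrete (the trial duration and drift figures below are purely illustrative):

```python
# How a small per-trial timing error compounds over a 10-minute fMRI run:
trial_length_s = 3.0        # illustrative trial duration
error_per_trial_s = 0.050   # 50 ms of cumulative drift per trial
run_length_s = 10 * 60      # a 10-minute scan

n_trials = int(run_length_s / trial_length_s)   # 200 trials
total_drift_s = n_trials * error_per_trial_s    # ~10 seconds of drift
print(f"{n_trials} trials -> {total_drift_s:.1f} s adrift by the end of the run")
```

Ten seconds is several whole TRs’ worth of misalignment between your model and the data, which is why even “small” per-trial slippage matters.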
3. Look at the behavioural data files. I don’t mean do the analysis (yet), I mean just look at the data. First make sure all the variables you want logged are actually there, then dump it into Excel and get busy with the sort function. For instance, if you have 40 trials and 20 stimuli (each presented twice) make sure that each one really is being presented twice, and not some of them once, and some of them three times; sorting by the stimulus ID should make it instantly clear what’s going on. Make sure the correct responses and any errors are being logged correctly. Make sure the counter-balancing is working correctly by sorting on appropriate variables.
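The sort-and-count check in step 3 can be scripted just as easily as done in Excel; here’s a sketch with a toy trial log (the field names and values are invented):

```python
from collections import Counter

# A toy log: one dict per trial, as might be dumped by the stimulus program.
# Stimulus 1 has sneaked in three times instead of twice.
log = [{"stimID": s, "resp": "left"} for s in [1, 2, 3, 1, 2, 3, 1]]

counts = Counter(trial["stimID"] for trial in log)
expected = 2  # each stimulus should appear exactly twice
problems = {s: n for s, n in counts.items() if n != expected}
print(problems)  # prints {1: 3}: stimulus 1 appeared three times
```

The same pattern (count, compare against the design, flag mismatches) works for checking response logging and counter-balancing too.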
4. Do the analysis. Really do it. You’re obviously not looking for any significant results from the data, you’re just trying to validate your analysis pipeline and make sure you have all the information you need to do the stats. For fMRI experiments – look at your design matrix to see that it makes sense and that you’re not getting warnings about non-orthogonality of the regressors from the software. For fMRI data using visual stimuli, you could look at some basic effects (i.e. all stimuli > baseline) to make sure you get activity in the visual cortex. Button-pushing responses should also be visible as activity in the motor cortex in a single subject too – these kinds of sanity checks can be a good indicator of data quality. If you really want to be punctilious, bang it through a quick ICA routine and see if you get a) component(s) that look stimulus-related, b) something that looks like the default-mode network, and c) any suspiciously nasty-looking noise components (a and b = good, c = bad, obviously).
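The non-orthogonality warning mentioned in step 4 boils down to correlation between regressors, which you can check yourself without any imaging software. This sketch uses unconvolved boxcar regressors purely for illustration (real checks would use the HRF-convolved versions):

```python
import math

def pearson_r(x, y):
    """Correlation between two model regressors; values near +/-1 mean
    the design can't separate the conditions' contributions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two made-up boxcar regressors that alternate in perfect anti-phase:
faces = [1, 0, 1, 0, 1, 0]
houses = [0, 1, 0, 1, 0, 1]
print(pearson_r(faces, houses))  # about -1.0: a hopelessly confounded design
```

A design with heavily correlated regressors will still run through the software; it’s only this kind of explicit check (or the design-matrix warnings) that reveals the estimates can’t be trusted.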
5. After all that, the rest is easy. Collect your proper set of data, analyse it using the routines you developed in point 4. above, write it up, and then send it to Nature.
And that, ladeez and gennulmen, is how to do it. Doing a proper pilot can only save you time and stress in the long run, and you can go ahead with your experiment in the certain knowledge that you’ve done everything in your power to make sure your data is as good as it can possibly be. Of course, it still might be total and utter crap, but that’ll probably be your participants’ fault, not yours.
Happy piloting! TTFN.
*Making sure your responses are being logged with a reasonable level of accuracy is also pretty important for many experiments, although this is a little harder to objectively verify. Hopefully if you’re using some reasonably well-validated piece of software and decent response device you shouldn’t have too many problems.