Category Archives: Commentary

JASP might finally be the SPSS replacement we’ve been waiting for

I use SPSS for statistical analysis, but I don’t like it. Every time I do, I feel like the victim in some kind of emotionally abusive relationship. The interface is deeply horrid, the outputs are butt-ugly, and it runs like a three-legged overweight sloth with a heavy suitcase. It’s an absolute bloated dog of an application, and IBM clearly don’t give a crap about it, other than making some cosmetic updates every now and again. Plus the licensing system is bat-shit insane, and very expensive.

So, why do I keep using it? Because a) It’s what I learned as an undergraduate/PhD student and I know it backwards, and b) there are few viable alternatives. Yes, I know I should learn R, but I actually don’t use ‘normal’ stats that often (I spend most of my analysis time in brain-imaging packages these days) and every time I learn how to do something in R, I try doing it again a month later, have forgotten it, and have to learn it all over again. At some point I hope to become an R master, but for occasional use, I find the learning curve to be too steep. I would also hesitate to try and use R to teach students; I find it generally pretty user-hostile.

So, for ages now, I’ve been looking for a good, user-friendly, open-source alternative to SPSS. One that isn’t a bloated monster, but has enough features to enable basic analyses. I was quite hopeful about PSPP for a while (free software that tries to replicate SPSS as closely as possible). However it lacks some relatively basic ANOVA features, and since one of the things I dislike about SPSS is the interface, trying to replicate it seems like a bit of a mistake. SOFA Statistics was a contender too, and it does have a beautiful interface and produces very nice-looking results, but it only does one-way ANOVAs, so… fail.

So, I gave up and crawled miserably back to SPSS. However, fresh hope now burns within my chest, as the other day I came across JASP (which, the developers insist, definitely does not stand for ‘Just Another Statistics Program’). The aim of JASP is to be ‘a low fat alternative to SPSS, a delicious alternative to R.’ Nice. It seems to cover all the analysis essentials (t-tests, ANOVA, regression, correlation), and also has some fancier Bayesian alternatives and a basic Structural Equation Modelling option. The interface is great, and the results tables update in real-time as you change the options in your analysis! Very nice. This demo video gives a good overview of the features and workflow:

It’s clearly very much a work-in-progress. One issue is that it doesn’t have any in-built tools for data manipulation. It will read .csv text files, but they basically have to be in a totally ready-to-analyse format, which means general data-cleaning/munging procedures have to be done in Excel/Matlab/R/whatever. Another major downside is that there appears to be no facility for saving or scripting analysis pipelines. Hopefully though, development will continue and other features will gradually appear… I’ll be keeping a close eye on it!
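For what it’s worth, that pre-cleaning step needn’t be painful. Here’s a minimal sketch of the sort of thing I mean, done in Python with pandas (the file name, column names and exclusion criteria are made-up examples, not anything JASP requires):

```python
# Minimal data-cleaning sketch: turn a messy trial-level CSV into a tidy,
# one-row-per-participant-per-condition file that JASP can open directly.
# 'raw_data.csv' and the column names are hypothetical examples.
import pandas as pd

df = pd.read_csv("raw_data.csv")

# Drop rows with missing values in the variables we care about
df = df.dropna(subset=["participant", "condition", "rt", "accuracy"])

# Exclude implausible reaction times (anticipations / lapses)
df = df[(df["rt"] > 0.2) & (df["rt"] < 2.0)]

# Aggregate over trials: one row per participant per condition
tidy = (df.groupby(["participant", "condition"], as_index=False)
          .agg(mean_rt=("rt", "mean"), accuracy=("accuracy", "mean")))

# Write out a clean, ready-to-analyse file for JASP
tidy.to_csv("tidy_for_jasp.csv", index=False)
```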

A new paper on timing errors in experimental software

A new paper just out in PLoS One (thanks to Neuroskeptic for pointing it out on Twitter) shows the results of some tests conducted on three common pieces of software used in psychology and neuroscience research: DMDX, E-Prime, and PsychoPy. The paper is open-access, and you can read it here. The aim was to test the timing accuracy of the software when presenting simple visual stimuli (alternating black and white screens). As I’ve written about before, accurate timing in experiments can often be of great importance and is by no means guaranteed, so it’s good to see some objective work on this, conducted using modern hardware and software.

The authors followed a pretty standard procedure for this kind of thing, which is to use a piece of external recording equipment (in this case the Black Box ToolKit) connected to a photodiode. The photodiode is placed on the screen of the computer being tested and detects every black-to-white or white-to-black transition, and the data is logged by the BBTK device. This provides objective information about when exactly the visual stimulus was displayed on the screen (as opposed to when the software running the display thinks it was displayed on the screen, which is not necessarily the same thing). In this paper the authors tested this flickering black/white stimulus at various speeds, from 1000ms down to 16.6667ms (one screen refresh or ‘tick’ on a standard 60Hz monitor).
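For anyone curious what the stimulus side of such a test looks like, here’s a rough PsychoPy sketch of a frame-locked black/white flicker of the kind a photodiode rig could then verify. This is purely illustrative (it’s not the authors’ test script), and it assumes a standard 60Hz display:

```python
# Rough sketch of a frame-locked black/white flicker in PsychoPy, the kind of
# stimulus a photodiode + BBTK setup could be used to check against the screen.
from psychopy import visual, core

win = visual.Window(fullscr=True, units='norm', color='black')
white = visual.Rect(win, width=2, height=2, fillColor='white', lineColor=None)

frames_per_phase = 1          # 1 frame ~= 16.67 ms on a 60 Hz monitor
clock = core.Clock()
flip_times = []               # software's own record of when flips happened

for cycle in range(120):                  # 120 black/white cycles
    for frame in range(frames_per_phase): # white phase
        white.draw()
        win.flip()
        flip_times.append(clock.getTime())
    for frame in range(frames_per_phase): # nothing drawn -> black phase
        win.flip()
        flip_times.append(clock.getTime())

win.close()
core.quit()
```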

It’s a nice little paper and I would urge anyone interested to read it in full; the introduction has a really good review of some of the issues involved, particularly around different display hardware. At first glance the results are a little disappointing though, particularly if (like me) you’re a fan of PsychoPy. All three bits of software were highly accurate at the slower black/white switching times, but as the stimulus got faster (and therefore more demanding on the hardware) errors began to creep in. At durations of less than 100ms, E-Prime and PsychoPy start to drop or add frames; DMDX, on the other hand, just keeps on truckin’, and is highly accurate even in the fastest-switching conditions. PsychoPy is particularly poor under the most demanding conditions, with only about a third of ‘trials’ being presented ‘correctly’ in the fastest condition.

Why does this happen? The authors suggest that DMDX is so accurate because it uses Microsoft’s DirectX graphics libraries, which are highly optimised for accurate performance on Windows. Likewise, E-Prime uses other features of Windows to optimise its timing. PsychoPy on the other hand is platform-agnostic (it will run natively on Windows, Mac OS X, and various flavours of Linux) and therefore uses a fairly high-level language (Python). In simple terms, PsychoPy can’t get quite as close to the hardware as the others, because it’s designed to work on any operating system; there are more layers of software abstraction between a PsychoPy script and the hardware.

Is this a problem? Yes, and no. Because of the way that the PsychoPy ‘coder’ interface is designed, advanced users who require highly accurate timing have the opportunity to optimise their code, based on the hardware that they happen to be using. There’s no reason why a Python script couldn’t take advantage of the timing features in Windows that make E-Prime accurate too – they’re just not included in PsychoPy by default, because it’s designed to work on Unix-based systems as well. For most applications, dropping/adding a couple of frames in a 100ms stimulus presentation is nothing in particular to worry about anyway; certainly not for the applications I mostly use it for (e.g. fMRI experiments where, of course, timing is important in many ways, but the variability in the haemodynamic response function tends to render a lot of experimental precision somewhat moot). The authors of this paper agree, and conclude that all three systems are suitable for the majority of experimental paradigms used in cognitive research. For me, the benefits of PsychoPy (cross-platform, free licensing, user-friendly interface) far outweigh the (potential) compromise in accuracy. I haven’t noticed any timing issues with PsychoPy under general usage, but I’ve never had a need to push it as hard as these authors did for their testing purposes. It’s worth noting that all the testing in this paper was done using a single hardware platform; other hardware might well give very different results.

Those who are into doing very accurate experiments with very short display times (e.g. research on sub-conscious priming, or visual psychophysics) tend to use pretty specialised and highly-optimised hardware and software, anyway. If I ever had a need for such accuracy, I’d definitely undertake some extensive testing of the kind that these authors performed, no matter what hardware/software I ended up using. As always, the really important thing is to be aware of the potential issues with your experimental set-up, and do the required testing before collecting your data. Never take anything for granted; careful testing, piloting, examination of log files etc. is a potential life-saver.

TTFN.

 

Towards open-source psychology research

A couple of interesting things have come along recently which have got me thinking about the ways in which research is conducted, and how software is used in psychology research.

The first is some recent publicity around the Many Labs replication project  – a fantastic effort to try and perform replications of some key psychological effects, with large samples, and in labs spread around the world. Ed Yong has written a really great piece on it here for those who are interested. The Many Labs project is part of the Open Science Framework – a free service for archiving and sharing research materials (data, experimental designs, papers, whatever).

The second was a recent paper by Tom Stafford and Mike Dewar in Psychological Science. This is a really impressive piece of research from a very large sample of participants (854,064!) who played an online game. Data from the game was analysed to provide metrics of perception, attention and motor skills, and to see how these improved with training (i.e. more time spent playing the game). The original paper is here (paywalled, unfortunately), but Tom has also written about it on the Mind Hacks site and on his academic blog. The latter piece is interesting (for me anyway) as Tom says that he found his normal approach to analysis just wouldn’t work with this large a dataset and he was obliged to learn Python in order to analyse the data. Python FTW!

Anyway, the other really nice thing about this piece of work is that the authors have made all the data, and the code used to analyse it, publicly available in a GitHub repository here. This is a great thing to do, particularly for a large, probably very rich dataset like this – potentially there are a lot of other analyses that could be run on these data, and making it available enables other researchers to use it.

These two things crystallised an important realisation for me: It’s now possible, and I would even argue preferable, for the majority of not-particularly-technically-minded psychology researchers to perform their research in a completely open manner. Solid, free, user-friendly, cross-platform software now exists to facilitate pretty much every stage of the research process, from conception to analysis.

Some examples: PsychoPy is (in my opinion) one of the best pieces of experiment-building software around at the moment, and it’s completely free, cross-platform, and open-source. The R language for statistical computing is getting to be extremely popular, and is likewise free, cross-platform, etc. For analysis of neuroimaging studies, there are several open-source options, including FSL and NiPype. It’s not hard to envision a scenario where researchers who use these kinds of tools could upload all their experimental files (experimental stimulus programs, resulting data files, and analysis code) to GitHub or a similar service. This would enable anyone else in the world who had suitable (now utterly ubiquitous) hardware to perform a near-as-dammit exact replication of the experiment, or (more likely) tweak the experiment in an interesting way (with minimal effort) in order to run their own version. This could potentially really help accelerate the pace of research, and the issue of poorly-described and ambiguous methods in papers would become a thing of the past, as anyone who was interested could simply download and demo the experiment themselves in order to understand what was done. There are some issues with uploading very large datasets (e.g. fMRI or MEG data) but initiatives are springing up, and the problem seems like it should be a very tractable one.

The benefit for researchers should hopefully be greater visibility and awareness of their work (indexed in whatever manner; citations, downloads, page-views etc.). Clearly some researchers (like the authors of the above-mentioned paper) have taken the initiative and are already doing this kind of thing. They should be applauded for taking the lead, but they’ll likely remain a minority unless researchers can be persuaded that this is a good idea. One obvious prod would be if journals started encouraging this kind of open sharing of data and code in order to accept papers for publication.

One of the general tenets of the open-source movement (that open software benefits everyone, including the developers) is doubly true of open science. I look forward to a time when the majority of research code, data, and results are made public in this way and the research community as a whole can benefit from it.

A collection of links on academic email etiquette

Don’t email Brian and call him ‘Mr Cox’. It gives him a bad hair day.

When I was an undergraduate student, email was still not widely used, and the idea of emailing a lecturer or professor would have been quite daunting. Times have changed however, and nowadays most academics deal with a steady stream of emails from students throughout the year. This is a good thing in many ways; it helps to break down barriers between staff and students and can be a very efficient way to communicate. Unfortunately many students don’t follow some basic rules of general politeness when contacting staff, and this leads to faculty members getting irritated, and students receiving witheringly sarcastic responses or links to Let Me Google That For You.

Here are a few pieces I’ve collected that set out precisely how best to communicate with your advisor, lecturer or professor. First of all, we have a guide from Wellesley College titled How to Email Your Professor, shared on Twitter by Tom Hartley. Tom also went to the bother of conducting a survey about this kind of thing, and presented the results on his blog. He highlights some interesting cultural differences, particularly between the UK and the US – well worth reading through.

Next, is another set of guidelines from Akira O’Connor, also with some interesting contributions from others in the comments.

Last is a really terrific set of slides by Cedar Riener, which you can find here. He also provides a sample ‘How to miss a class’ e-mail, with added annotations here. Brilliant.

I won’t bother repeating much of what these excellent sources suggest, except to say that the common threads through them all seem to be:

1) Be polite, and relatively formal (at least at first).
2) Don’t ask stupid questions.
3) Don’t make stupid (i.e. any) spelling and grammar mistakes.
4) For the love of all that is good and holy, get the name and title of the person you’re emailing correct.

How hard can that be, eh?

Back to school special

Unimatrix-0 High School has really excellent attendance and discipline statistics

So, another academic year is about to heave into view over the horizon, and what better time to take stock of your situation, make sure your gear is fit for purpose, and think about levelling-up your geek skills to cope with the rigours of the next year of academic life. If you need any hardware, Engadget’s Back to School review guides are a great place to start, and have reviews of all kinds of things from smartphones to gaming systems, all arranged helpfully in several price categories.

If you really want to be ahead of the game this year though, you’ll need to put in a bit of extra time and effort, and learn some new skills. Here are my recommendations for what computing skills psychology students should be learning, for each year of a standard UK BSc in Psychology.*

If you’re starting your 1st year…

A big part of the first year is about learning basic skills like academic writing, synthesising information, referencing etc. Take a look at my computer skills checklist for psychology students and see how you measure up. Then, the first thing you need to do, on day one, is start using a reference manager. This is an application that will help you organise journal articles and other important sources for your whole degree, and will even do your essay referencing for you. I like Mendeley, but Zotero is really good as well. Both are totally free. Download one of them right now. This is honestly the best bit of advice I can possibly give to any student. Do it. I just can’t emphasise this enough. Really. OK. Moving on.

Next you need to register for a Google account, if you don’t have one already. Here’s why. Then use your new Google username to sign up for Feedly and start following some psychology and neuroscience blogs. Here and here are some good lists to get you started. If you’re a real social-media fiend, sign up for Twitter and start following some of these people.

You may want to use the 5GB of free storage you get with Google Drive as a cloud back-up space for important documents, or you may want to sign up for a Dropbox account as well. Use one or the other, or preferably both, because none of your data is safe. Ever.

You’ll want to start getting to know how to use online literature databases. Google Scholar is a good start, but you’ll also need to get familiar with PubMed, PsycInfo and Web of Knowledge too.

If you’re really keen and want to learn some general skills that will likely help you out in the future, learn how to create a website with WordPress or GitHub Pages. Or maybe download GIMP and get busy learning some picture editing.

If you’re starting your 2nd year…

This is when things get more serious and you probably can’t expect to turn up to tutorials with an epically massive hangover and still understand everything that’s going on. Similarly, you need to step it up a level with the geekery as well.

You probably learned some SPSS in your statistics course in the first year. That’s fine, but you probably don’t have a licence that allows you to play with it on your own computer. PSPP is the answer – it’s a free application that’s made to look and work just like SPSS – it even runs SPSS syntax code. Awesomes. Speaking of which, if you’re not using the syntax capabilities of SPSS and doing it all through the GUI, you’re doing it wrong. 

If you really want to impress, you’ll start using R for your lab reports. The seriously hardcore will just use the base R package, but don’t feel bad if you want to use R-Commander or Deducer to make life a bit easier. Start with the tutorials here.

If you’re starting your 3rd year…

This is the year when you’ll probably have to do either a dissertation, a research project, or maybe both. If you’re not using a reference manager already, trying to do a dissertation without one is utter lunacy – start now.

For your research project, try and do as much of it as you can yourself. If you’re doing some kind of survey project, think about doing it online using Google Forms or LimeSurvey. If you’re doing a computer-based task, then try and program it yourself using PsychoPy. Nothing will impress your project supervisor more than if you volunteer to do the task/survey set-up yourself. Then of course you can analyse the data using the mad statz skillz you learned in your second year. Make some pretty-looking figures for your final report using the free, open-source Veusz.
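On the PsychoPy front, programming a simple task really isn’t as scary as it sounds. Here’s a bare-bones reaction-time trial as a starting point (a sketch only – the stimulus text, response key and timings are placeholders you’d replace with your own design):

```python
# A bare-bones reaction-time trial loop in PsychoPy, as a starting point for
# a student project. Stimulus wording, key and timings are placeholders.
from psychopy import visual, core, event

win = visual.Window(fullscr=False, units='norm')
fixation = visual.TextStim(win, text='+')
stimulus = visual.TextStim(win, text='PRESS SPACE')
clock = core.Clock()
results = []

for trial in range(10):
    fixation.draw()
    win.flip()
    core.wait(0.5)                      # 500 ms fixation

    stimulus.draw()
    win.flip()
    clock.reset()                       # time responses from stimulus onset
    keys = event.waitKeys(maxWait=2.0, keyList=['space'], timeStamped=clock)
    rt = keys[0][1] if keys else None   # None = no response within 2 s
    results.append(rt)

win.close()
print(results)
```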

Learning this stuff might all sound like a lot to ask when you also have essays to write, tutorials to prepare for, and parties to attend. However, all these things are really valuable CV-boosting skills which might come to be invaluable after you graduate. If you want to continue studying at Masters or PhD level, potential supervisors will be looking for applicants with these kinds of skills, and solid computer knowledge can also help to distinguish you from all the other psychology graduates when applying for ‘normal’ jobs too. It really is the best thing you can learn, aside from your course material, naturally.

Have I missed anything important? Let me know in the comments!

Good luck!

* I realise US colleges and other countries have a different structure, but I think these recommendations will still broadly apply.

Comment on the Button et al. (2013) neuroscience ‘power-failure’ article in NRN

Statistical Spidey knows the score.

An article was published in Nature Reviews Neuroscience yesterday which caused a bit of a stir among neuroscientists (or at least among neuroscientists on Twitter, anyway). The authors cleverly used meta-analytic papers to estimate the ‘true’ size of an effect, and then (using the G*Power software) calculated the statistical power of each individual study that made up the meta-analysis, based on its sample size. Their conclusions are pretty damning for the field as a whole: an overall median power of 21%, dropping to 8% in some sub-fields. This means that out of 100 studies conducted on a genuine effect, only around 21 would be expected to actually demonstrate it.
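For the curious, the kind of calculation involved is easy to reproduce yourself. Here’s an illustrative version using Python’s statsmodels rather than G*Power (the effect size and sample size below are made-up example numbers, not figures taken from the paper):

```python
# Illustration of a post-hoc power calculation of the kind used in the paper
# (here with statsmodels instead of G*Power): given a 'true' effect size from
# a meta-analysis, what power did an individual study's sample size give it?
# The effect size and n below are made-up example numbers.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-group study with n = 20 per group, for a 'true' d of 0.5
power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Achieved power: {power:.2f}")                   # roughly 0.33

# And the n per group needed to reach the conventional 80% power
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")     # roughly 64
```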

The article has been discussed and summarised at length by Ed Yong, Christian Jarrett, and by Kate Button (the study’s first author) on Suzy Gage’s Guardian blog, so I’m not going to re-hash it any more here. The original paper is actually very accessible and well-written, and I encourage interested readers to start there. It’s definitely an important contribution to the debate, however (as always) there are alternative perspectives. I generally have a problem with over-reliance on power analyses (they’re often required for grant applications, and other project proposals). Prospective power analyses (i.e. those conducted before a piece of research is conducted, in order to tell you how many subjects you need) use an estimate of the effect size you expect to achieve – usually derived from previous work that has examined a (broadly) similar problem using (broadly) similar methods. This estimate is essentially a wild shot in the dark (especially because of some of the issues and biases discussed by Button et al., that are likely to operate in the literature), and the resulting power analysis therefore tells you (in my opinion) nothing very useful. Button et al. get around this issue by using the effect size from meta-analyses to estimate the ‘true’ effect size in a given literature area – a neat trick.

The remainder of this post deals with power-issues in fMRI, since it’s my area of expertise, and necessarily gets a bit technical. Readers who don’t have a somewhat nerdy interest in fMRI-methods are advised to check out some of the more accessible summaries linked to above. Braver readers – press on!

An alternative approach used in the fMRI field, and one that I’ve been following when planning projects for years, is a more empirical method. Murphy and Garavan (2004) took a large sample of 58 subjects who had completed a Go/No-Go task and analysed sub-sets of varying sizes to look at how reproducible the results were at different sample sizes. They showed that reproducibility (assessed by correlation of the statistical maps with the ‘gold standard’ of the entire dataset; Fig. 4) reaches 80% at about 24 or 25 subjects. By this criterion, many fMRI studies are underpowered.
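The logic of that subset-resampling approach is simple enough to sketch. Here’s a toy version in Python using simulated ‘maps’ rather than real fMRI data – not Murphy and Garavan’s actual analysis, just an illustration of the idea:

```python
# Toy sketch of the subset-resampling logic behind reproducibility analyses
# like Murphy & Garavan (2004). The data are simulated, and each 'map' is a
# flat vector of voxel values rather than a real statistical image.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 58, 5000
true_effect = np.zeros(n_voxels)
true_effect[:500] = 0.5                      # 10% of voxels truly 'active'

# Simulated subject-level maps: true effect + between-subject noise
data = true_effect + rng.normal(0, 1, size=(n_subjects, n_voxels))

gold_standard = data.mean(axis=0)            # group map from the full sample

for n in (8, 16, 24, 32):
    correlations = []
    for _ in range(200):                     # 200 random subsets per sample size
        subset = rng.choice(n_subjects, size=n, replace=False)
        sub_map = data[subset].mean(axis=0)
        correlations.append(np.corrcoef(sub_map, gold_standard)[0, 1])
    print(f"n = {n}: mean map correlation = {np.mean(correlations):.2f}")
```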

While I like this empirical approach to the issue, there are of course caveats and other things to consider. fMRI is a complex, highly technical research area, and heavily influenced by the advance of technology. MRI scanners have significantly improved in the last ten years, with 32- or even 64-channel head coils becoming common, faster gradient switching, shorter TRs, higher field strengths, and better field/data stability all meaning that the signal-to-noise ratio has improved considerably. This serves to cut down one source of noise in fMRI data – intra-subject variance. The inter-subject variance of course remains the same as it always was, but that’s something that can’t really be mitigated against, and may even be of interest in some (between-group) studies. On the analysis side, new multivariate methods are much more sensitive to detecting differences than the standard mass-univariate approach. This improvement in effective SNR means that the Murphy and Garavan (2004) estimate of 25 subjects for 80% reproducibility may be somewhat inflated, and with modern techniques one could perhaps get away with fewer.

The other issue with the Murphy and Garavan (2004) approach is that it’s not very generalisable. The Go/No-Go task is widely used and is a ‘standard’ cognitive/attentional task that activates a well-described brain network, but other tasks may produce more or less activation, in different brain regions. Signal-to-noise varies widely across the brain, and across task-paradigms, with simple visual or motor experiments producing very large signal changes and complex cognitive tasks smaller ones. Yet another factor is the experimental design (blocked stimuli, or event-related),  the overall number of trials/stimuli presented, and the total scanning time for each subject, all of which can vary widely.

The upshot is that there are no easy answers, and this is something I try to impress upon people at every opportunity; particularly the statisticians who read my project proposals and object to me not including power analyses. I think prospective power analyses are not only uninformative, but give a false sense of security, and for that reason should be treated with caution. Ultimately the decision about how many subjects to test is generally highly influenced by other factors anyway (most notably, time, and money). You should test as many subjects as you reasonably can, and regard power analysis results as, at best, a rough guide.

Digital audio and video basics – two excellent videos

I’ve just come across two outstanding tutorial videos over on xiph.org – an open-source organisation dedicated to developing multimedia protocols and tools. The first one covers the fundamental principles of digital sampling for audio and video, and discusses sampling rates, bit depth and lots of other fun stuff – if you’ve ever wondered what a 16-bit, 128kbps mp3 is, this is for you.
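As a quick worked example of what those numbers mean, here’s the back-of-the-envelope arithmetic relating sampling rate and bit depth to bitrate, using standard CD-audio figures:

```python
# Back-of-the-envelope arithmetic relating sampling rate and bit depth to
# bitrate, to put a '16-bit, 128 kbps mp3' in context (CD-quality numbers).
sample_rate = 44_100      # samples per second (CD standard)
bit_depth = 16            # bits per sample
channels = 2              # stereo

pcm_bitrate = sample_rate * bit_depth * channels           # bits per second
print(f"Uncompressed PCM: {pcm_bitrate / 1000:.0f} kbps")  # ~1411 kbps

mp3_bitrate = 128_000                                      # a 128 kbps mp3
print(f"Compression ratio: ~{pcm_bitrate / mp3_bitrate:.0f}:1")  # ~11:1
```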

The second one focusses on audio and gets on to some more advanced topics, about how audio behaves in the real world.

They’re both fairly long (30 mins and 23 mins respectively) but well worth watching. If you’re just getting started with digital audio and/or video editing and production, these could be really useful.

TTFN.

Why brain training is (probably) pernicious hogwash

The only treadmill your brain should be on is a hedonic one.

So-called brain-training tools seem to have exploded in the last few years; one estimate puts it at a $6 billion market by 2020. It’s clearly become a major industry, but what’s less clear is exactly what it does, and whether it even works. The typical procedure seems to be to engage in short games, puzzles and working-memory-type tasks, and these are supposed to produce long-term changes in attention, engagement and general fluid intelligence.

Whether this is actually true or not is a matter of some debate. I’m not a specialist in this area, but the received wisdom appears to be that training on specific tasks does improve performance – on those tasks. There seems to be little generalisation to other tasks, and even less to domain-general abilities like executive processing or working memory. A high-profile study by Adrian Owen and colleagues (2010) reported exactly that – improvements on the trained tasks themselves, but little (if any) general benefit. A previous study in PNAS from 2008 does seem to contradict this, and reports an increase in fluid intelligence as a result of working-memory training – not only that, but the authors claim a dose-dependent effect, that is, more training = more increase in intelligence. The gains in that study were relatively small, and it should also be noted that the control group apparently increased their intelligence somewhat over the same period as the experimental group – curious. There are lots of other studies around, but many have issues: small samples, poor controls, etc. etc.

So, the jury’s still very much out (though personally, I’m on the side of the skeptics on this issue). This hasn’t stopped a bewildering array of businesses starting up, making all kinds of wild claims, and playing on the fears of educators and parents that perhaps if they don’t provide these kinds of programs, their kids will slip behind the rest. All these companies have glossy, highly-polished, ethnically-balanced websites with testimonials, and lots of links to science-y looking videos that present their program as the only scientifically-proven method of increasing your child’s intelligence. A brief browse through some of these companies’ websites reveals that they range from the absurd (QDreams! Success at the speed of thought!) to the very, very slick indeed (e.g. Lumosity). Other examples are Cogmed (which seems to be backed by Pearson publishers and, to its credit, links to a list of semi-relevant research papers), and the very simplistic PowerBrain Education – which seems to involve getting kids to do some odd-looking arm-shaking exercises. There are literally hundreds of these companies. Some of them even seem to cater to businesses who want their employees to do these ‘exercises’.

LearningRX definitely falls into the slick category. According to this New York Times article it has 83 physical store-front franchises across the USA, where people can come to pay $80-90 an hour for one-on-one training, and they market this to parents as an alternative to traditional tutoring. A quick glance at their Scientific Advisory Board is pretty revealing – I count only one (clinical) psychologist, and a grab-bag of other professionals – mostly teachers (qualified to Masters level) with an optometrist, a chemical engineer and an audiologist. Not a single neuroscientist, and only a few qualified at doctorate level.

I’m not trying to be unnecessarily snobby about their qualifications here; I’m suggesting that the claims they make for their brain-training programs (literally: it will change your child’s life) are big ones, and we might reasonably expect the people who developed it to be qualified in some area of brain science. If it really, clearly worked, then of course it wouldn’t matter exactly who developed it, and what their qualifications were, but there’s definitely reasonable doubt (if not outright disbelief) over its effectiveness.

And this is the important point. People are spending money on this – big money. Whether that’s a hard-pressed family struggling to find an extra $90 a week for their kid to have a session at one of LearningRX’s centres, or an education board deciding to institute one of these programs in its schools. Education budgets are tight enough, but these kinds of programs are being heavily invested in, and I can see why – they promise to make kids smarter, better-behaved, more attentive, and all you have to do is sit them in front of a special computer game for an hour a week. That must seem like a pretty attractive proposition for teachers. Unfortunately, if they really don’t work, then that money could be better spent on books, or musical instruments, or something else which might genuinely enrich the kids’ lives.

There’s a long and venerable history of unscrupulous people making money from pseudo-neuroscience – back in the 19th Century phrenology was described as “The science of picking someone’s pocket, through their skull.” I’d like to believe that some of these companies have a solid product that actually makes a difference, but they all seem to have the whiff of snake-oil about them. For now I’m very much of the opinion that you’d probably be better off learning the piano, or Japanese, or even playing the latest Call of Duty. If you were really ambitious you could even try and get your kid to (Heaven forfend!) read the odd book now and again.

TTFN.

 

**Update 07/02/13**

I put that last sentence that mentions Call of Duty in there as a bit of flippancy, but I’ve since been informed (by Micah Allen on Twitter) of some evidence that playing action video games can indeed improve some cognitive processes such as the accuracy of visuo-spatial attention and reaction times. These results mostly originate from a single lab and so are in need of replication, but still – interesting. (I still reckon you’re probably better off with a good book though.)

Whither forensic psychology software?

Good nutrition’s given you some length of bone.

Forensic and criminal psychology are somewhat odd disciplines; they sit at the crossroads between abnormal psychology, law, criminology, and sociology. Students seem to love forensic psychology courses, and the number of books, movies, and TV shows which feature psychologists cooperating with police (usually in some kind of offender-profiling manner) attests to the fascination that the general public have for it too. Within hours of the Newtown, CT shooting spree last December, ‘expert’ psychologists were being recruited by the news media to deliver soundbites attesting to the probable mental state of the perpetrator. Whether or not this kind of armchair diagnosis is appropriate or useful (hint: it’s really not), its prevalence is a testament to the acceptance of such ideas within society at large.

Back in the late 80s and early 90s there were two opposing approaches to offender profiling, rather neatly personified by American and British practitioners. A ‘top-down’ (or deductive) approach was developed by the FBI Behavioral Sciences Unit, and involved interviewing convicted offenders, attempting to derive (somewhat subjective) general principles in order to ‘think like a criminal’. By contrast, the British approach (developed principally by David Canter and colleagues) took a much more ‘bottom-up’ (or inductive) approach focused on empirical research, and more precisely quantifiable aspects of criminal behaviour.

Interestingly, the latter approach was ideally suited to standardised analysis methods, and duly spawned a number of computer-based tools. The most prominent among them was a spatial/geographical profiling tool, developed by Canter’s Centre for Investigative Psychology, and named ‘Dragnet’. The idea behind it was relatively simple – that the most likely location of the residence of a perpetrator of a number of similar crimes could be deduced from the locations of the crimes themselves. For example, a burglar doesn’t tend to rob his next-door neighbours, nor does he tend to travel too far from familiar locations to ply his trade – he commits burglaries at a medium distance from home, and generally at roughly the same distance each time. General caution might also prevent him from returning to exactly the same location twice, so an idealised pattern of burglary might consist of a central point (the perpetrator’s home) with a number of crime locations forming the points of a circle around it. For an investigator, the location of the central point isn’t of course known a priori; however, it can be deduced simply from the size and shape of the circle.
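As a toy illustration of that circle-centre idea (emphatically not the actual Dragnet algorithm, which uses far more sophisticated distance-decay models), here’s what a crude version might look like in Python, with made-up coordinates:

```python
# Toy illustration of the circle-centre idea behind geographic profiling:
# estimate a likely 'home base' as the centroid of the crime locations, and a
# search radius from their spread. Coordinates are made up for the example.
import numpy as np

# (x, y) coordinates of a series of linked crimes, in km on an arbitrary grid
crimes = np.array([[2.1, 5.3], [4.8, 6.0], [5.5, 3.2], [3.0, 1.9], [1.4, 3.5]])

centre = crimes.mean(axis=0)                       # crude 'home base' estimate
distances = np.linalg.norm(crimes - centre, axis=1)

print(f"Estimated home base: {centre.round(2)}")
print(f"Mean distance from base: {distances.mean():.2f} km "
      f"(search radius ~{distances.max():.2f} km)")
```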

In practice of course, it’s never this neat, but modern techniques incorporate various other features (terrain, social geography, etc.) to build statistical models, and have met with some success. Ex-police officer Kim Rossmo has been the leading figure in geographic profiling in recent years, and founded the Center for Geospatial Intelligence and Investigation at Texas State University.

Software like this seems like it should be useful, but by and large it has failed to deliver on its promises in a major way. At one point it was thought that the future police service would incorporate these tools (and others) routinely in order to solve, and perhaps even predict, crimes. Given the sheer amount and richness of data available on the general populace (through online search histories, social networking sites, insurance company/credit card databases, CCTV images, mobile-phone histories, licence-plate-reading traffic cameras, etc.) and on urban environments (e.g. Google Maps), one might have expected crime-solving software to be highly developed by now, drawing on all of these sources of information. However, the field seems to have largely stalled in recent years; the Centre for Investigative Psychology’s website has clearly not been updated in several years, and it seems no-one has even bothered producing versions of their software for modern operating systems.

Some others seem to be pursuing similar ideas with more modern methods (e.g. this company), yet still we’re nowhere near any kind of system like the (fictional) one portrayed in the TV series ‘Person of Interest‘, which can predict crimes by analysis of CCTV footage and behaviour patterns derived therefrom. Whether or not this will ever be possible, there is certainly relevant data out there, freely accessible to law-enforcement agencies; the issue is building the right kind of data-mining algorithms to make sense of it all – clearly, not a trivial endeavour.

Something that will undoubtedly help is the fairly recent development of pretty sophisticated facial recognition technology. Crude face-recognition technology is now embedded in most modern digital cameras, can be used as ID verification (i.e. instead of a passcode) to unlock smartphones, and is used for ‘tagging’ pictures on websites like Facebook and Flickr. Researchers have been rapidly refining the techniques, including some very impressive methods of generating interpolated high-resolution images from low-quality sources (this paper describes an impressive ‘face hallucination’ method; PDF here). These advancements, while impressive, are essentially a somewhat dry problem in computer vision; there’s no real ‘psychology’ involved here.

‘Face hallucination’ – Creating high quality face images from low-resolution inputs, by using algorithms with prior information about typical facial features.

One other ‘growth area’ in criminal/legal psychology over the last few years has been in fMRI lie-detection. Two companies (the stupidly-or-maybe-ingeniously-named No Lie MRI, and Cephos) have been aggressively pushing for their lie-detection procedures to be introduced as admissible evidence in US courts. So far they’ve only had minor success, but frankly, it’s only a matter of time. Most serious commentators (e.g. this bunch of imaging heavy-hitters) still strike an extremely cautious tone on such technologies, but they may be fighting a losing battle.

Despite these two very technical areas then, in general, the early promise of a systematic scientific approach to forensic psychology that could be instantiated in formal systems has not been fulfilled. I’m not sure if this is because of a lack of investment, expertise, or interest, or just because the problem turned out to be substantially harder to address than people originally supposed. There is an alternative explanation of course – that governments and law enforcement agencies have indeed developed sophisticated software that ties together all the major databases of personal information, integrates it with CCTV and traffic-camera footage, and produces robust models of the behaviour of the general public, both as a whole and at an individual level. A conspiracy theorist might suppose that if such a system existed, information about it would have to be suppressed, and that’s the likely reason for the apparent lack of development in this area in recent years. Far-fetched? Maybe.

TTFN, and remember – they’re probably (not?) watching you…

 

Links page update

Just posted a fairly major update to my links page, including new sections on Neuropsychological/Cognitive testing, Neuromarketing/research businesses, and Academic conferences and organisations, plus lots of other links added to the existing sections, and occasional sprinkles of extra-bonus-added sarcasm throughout. Yay! Have fun people.