Blog Archives

Website of the week: Cogsci.nl. OpenSesame, illusions, online experiments, and more.

A quick post to point you towards a great website with a lot of really cool content (if you’re into that kind of thing, which if you’re reading this blog, then I assume you probably are… anyway, I digress; I apologise, it was my lab’s Christmas party last night and I’m in a somewhat rambling mood. Anyway, back to the point).

So, the website is called cogsci.nl, and is run by a post-doc at the University of Aix-Marseille called Sebastiaan Mathôt. It’s notable in that it’s the homepage of OpenSesame – a very nice-looking, Python-based graphical experiment builder that I’ve mentioned before on these very pages. There’s a lot of other cool stuff on the site though, including more software (featuring a really cool online tool for instantly creating Gabor patch stimuli), a list of links to stimulus sets, and a selection of really cool optical illusions. It’s well worth spending 20 minutes of your time poking around and seeing what’s there.

I’ll leave you with a video of Sebastiaan demonstrating an experimental program, written in his OpenSesame system, running on a Google Nexus 7 tablet (using Ubuntu Linux as the OS). The future! It’s here!


How to analyse reaction time (RT) data: Part 1

Reaction time tasks have been a mainstay of psychology since the technology to accurately time and record such responses became widely available in the 70s. RT tasks have been applied in a bewildering array of research areas and (when used properly) can provide information about memory, attention, emotion and even social behaviour.

This post will focus on the best way to handle such data, which is perhaps not as straightforward as might be assumed. Despite the title, I’m not really going to cover the actual analysis; there’s a lot of literature already out there about what particular statistical tests to use, and in any case, general advice of that kind is not much use as it depends largely on your experimental design. What I’m intending to focus on are the things the stats books don’t normally cover – the data cleaning, formatting and transformation techniques which are essential to know about if you’re going to get the best out of your data-set.

For the purposes of this discussion I’ll use a simple made-up data-set, like this:

[Image: RT1 – the example data table]

This table is formatted in the way that a lot of common psychology software (e.g. PsychoPy, Inquisit, E-Prime) records response data. From left to right, you can see we have three participants’ data here (1, 2, and 3 in column A), four trials for each subject (column B), two experimental conditions (column C; presented in a random order), the actual reaction times in milliseconds (column D), and a final column (E) which codes whether the response was correct or not (1 = correct, 0 = error).

I created the data table using Microsoft Excel, and will do the processing with it too; however, I really want to stress that Excel is definitely not the best way of doing this. It suits the present purpose because I’m doing this ‘by hand’ for the purposes of illustration. With a real data-set, which might be thousands of lines long, these procedures would be much more easily accomplished using the functions in your statistics weapon-of-choice (SPSS, R, Matlab, whatever). Needless to say, if you regularly have to deal with RT data it’s well worth putting the time into writing some general-purpose code which can be tweaked and re-used for subsequent data sets.

The procedures we’re going to follow with these data are:

  1. Remove reaction times on error trials
  2. Do some basic data-cleaning (removal of outlying data)
  3. Re-format the data for analysis

1. Remove reaction times on error trials

As a general rule, reaction times from trials on which the participant made an error should not be used in subsequent analysis. The exceptions are tasks where the error trials are themselves of interest (Go/No-Go tasks, and some others). Generally though, RTs from error trials are thought to be unreliable, since an additional component process is operating on those trials (i.e. whatever it was that produced the error). The easiest way of removing them is to insert an additional column which codes all error trials as ‘0’, and carries the original reaction time for all trials without an error. This can be a simple IF/ELSE statement of the form:

IF (error=1) RT=RT,
ELSE RT=0

In this Excel-based illustration I entered the formula =IF(E2=1, D2, 0) in cell F2, and then copied it down the rest of the column to apply to all subsequent rows. Here’s the new data sheet:

[Image: RT2 – data sheet with error-trial RTs recoded to 0]
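For anything bigger than a toy sheet, the same recoding is a one-liner in something like pandas. Here’s a minimal sketch – the column names and values are made up to mirror the example table, not taken from any real output file:

```python
import pandas as pd

# Made-up data in the same layout as the example sheet:
# subject, trial, condition, rt (ms), correct (1 = correct, 0 = error)
df = pd.DataFrame({
    "subject":   [1, 1, 1, 1],
    "trial":     [1, 2, 3, 4],
    "condition": [1, 2, 2, 1],
    "rt":        [512, 480, 1430, 205],
    "correct":   [1, 0, 1, 1],
})

# Same logic as the Excel IF: keep the RT on correct trials, else 0
df["rt_clean"] = df["rt"].where(df["correct"] == 1, 0)
```

The `.where()` call does exactly what the spreadsheet formula does, but for every row at once – which is the point of scripting this rather than dragging formulas around.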

2. Data-cleaning – Removal of outlying data

The whole topic of removing outliers from reaction time data is a fairly involved one, and difficult to illustrate with the simple example I’m using here. However, it’s a very important procedure, and something I’m going to return to in a later post using a ‘real’ data-set. From a theoretical perspective, it’s usually desirable to remove both short and long outliers. Most people cannot push a button in response to, say, a visual stimulus in less than about 300ms, so it can safely be assumed that short RTs of less than, say, 250ms were probably initiated before the stimulus; that is, they were anticipatory. Long outliers are somewhat trickier conceptually – some tasks that involve a lot of effortful cognitive processing before a response (say, difficult mental arithmetic) can legitimately produce reaction times of several seconds, or even longer. Very broadly though, the mean RT for most ‘simple’ tasks tends to be around 400-700ms, which means that RTs longer than, say, 1000ms might reflect some other kind of process – the participant was bored, became distracted, temporarily forgot which button to push, and so on. For these reasons, it’s generally thought desirable to remove outlying reaction times from further analysis.

One (fairly simple-minded, but definitely valid) approach to removing outliers, then, is to simply remove all values that fall below 250ms or above 1000ms. This is what I’ve done in the example data-sheet in columns G and H, using simple IF statements of a similar form to the one used for removing the error trials:

[Image: RT3 – data sheet with short and long outliers recoded to 0]

You can see that two short RTs and one long one have been removed and recoded as 0.
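If you’re scripting this step, the cutoffs are equally simple. A hedged sketch in pandas – the RT values are invented, and while the 250/1000ms limits are the ones from this example, real cutoffs should be chosen with your particular task in mind:

```python
import pandas as pd

# Made-up RTs (ms), after error trials have already been recoded to 0
rt = pd.Series([512, 0, 1430, 205, 640, 980])

# Recode anticipations (< 250 ms) and long outliers (> 1000 ms) as 0;
# existing 0s from the error-trial step are left alone
rt_clean = rt.where((rt == 0) | rt.between(250, 1000), 0)
```

Here 1430ms and 205ms get zeroed out, exactly as in the spreadsheet version, while the 0 from the earlier error-trial step passes through untouched.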

3.  Re-format the data for analysis

The structure that most psychology experimental systems use for their data logging (similar to the one we’ve been using as an illustration) is not really appropriate for direct import into standard stats packages like SPSS. SPSS requires one row on the data sheet per participant, whereas we have one row per trial. In order to get our data into the right format we first need to sort it, by subject (column A) and then by condition (column C). This sort ensures that we know which entries in the final column are which – the first two rows of each subject’s data are always condition 1, and the second two are always condition 2:

[Image: RT4 – data sorted by subject, then by condition]

We can then restructure the data from the final column, like so:

[Image: RT5 – restructured data, one row per subject]

I’ve done this ‘by hand’ in Excel by cutting-and-pasting the values for each subject into a new sheet and using the paste-special > transpose function; however, this is a stupid way of doing it – the ‘restructure’ functions in SPSS can accomplish this kind of thing very nicely. So, our condition 1 values are now in columns B:C and our condition 2 values are in columns D:E. All that remains now is to calculate summary statistics for each set of columns, i.e. each condition (means, variances, standard deviations, whatever – taking care that our 0 values are coded as missing, and not included in the calculations) and perform the inferential test of your choice (in this case, with only two within-subject conditions, a paired t-test).
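The sort-and-transpose step above is exactly what a pivot operation does in most scripting environments. A rough pandas equivalent (the data values are invented for illustration), which also treats the 0s as missing so they don’t contaminate the condition means:

```python
import numpy as np
import pandas as pd

# Made-up trial-level data: two subjects, two conditions, two trials each
long = pd.DataFrame({
    "subject":   [1, 1, 1, 1, 2, 2, 2, 2],
    "condition": [1, 1, 2, 2, 1, 1, 2, 2],
    "rt":        [512, 498, 640, 0, 455, 0, 701, 688],
})

# Treat the 0s (errors/outliers) as missing so they don't drag means down
long["rt"] = long["rt"].replace(0, np.nan)

# One row per subject, one column per condition: mean RT per cell
wide = long.pivot_table(index="subject", columns="condition",
                        values="rt", aggfunc="mean")
```

The resulting `wide` table has one row per subject and one mean-RT column per condition – precisely the shape SPSS wants, and ready for a paired t-test.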

Next time, I’ll use a set of real reaction time data and do these procedures (and others) using SPSS, in order to illustrate some more sophisticated ways of handling outliers than just the simple high and low cutoffs detailed above.

TTFN.

Is there a gender gap in computing skills?

The bad old days. And no, that really is not a computer. I honestly don't know what's worse here, the outrageous sexism, or the horrific ghastliness of that beige plastic thing.

It being International Women’s Day today got me thinking about sex and computers. No, not like that, get your mind out of the gutter, I mean in terms of differences between males and females in our attitudes towards and interactions with technology. Such differences (if they exist) might be pertinent in a field like psychology, where the majority of undergraduates (often with ratios approaching 10:1) are female, but (as in most other fields) the majority of professors are male. By contrast computer science undergraduate courses are overwhelmingly male-dominated.

Obviously there are a whole host of social/economic/gender-political reasons why this might be the case, and one would hope that the balance these days might be shifting ever closer towards a more equal representation of the two sexes at all levels and fields in science. However, given that the majority of undergraduate psychologists are girls, and successful post-graduate research is to an extent dependent on computer skills, systematic differences in the way the two halves of the population treat and interact with computers might be worth paying attention to.

So, do systematic differences exist? The short answer is… I’m not sure. Anecdotally, I’ve known plenty of people of both sexes who are programming ninjas, and equally, plenty of both sexes who are utterly hopeless with technology. In writing this piece I’ve tried to take a (quick) glance at some relevant research, but honestly, it seems a bit of a mess. There are quite a few studies out there, but a lot of them are old (I mean, old in terms of the computer industry – like pre-mid-90s) and things have clearly changed since then, particularly for the generation of ‘digital natives‘ that make up today’s undergraduate cohorts. One older meta-analytic study (from 1998) reported that gender differences in beliefs about computers and behaviour related to them were negligible, while finding that males showed more self-efficacy and more positive affect related to technology. A more recent (2007) study in a population of Greek school children reported similar results regarding self-efficacy. Another recent (2010) study (PDF) on internet use in Taiwanese students reported that boys and girls differed in the manner in which they used the internet – boys were more exploratory users of the web, while girls were more communicative users. This finding was also shown in a survey of male and female US college students from 2009. That study also revealed some other points of difference between the sexes in their internet use, with males showing a heavier usage pattern overall. However, female students spent a higher proportion of their time online actually doing academic work, while males spent more time using the internet for leisure-related activities (checking sports scores, downloading music, visiting *ahem* ‘adult’ sites etc.).

The most recent, and perhaps most relevant, study I found is from 2011 (PDF), and is a survey of accountancy students who, like psychology students, show a heavy female bias in their numbers. This study found a difference in attitudes early in the curriculum, but the gender difference disappeared on a more advanced course. This is good news, as it might suggest that some of the differences found in previous research have reduced or disappeared, perhaps as a result of the greater penetration of computers into everyday life.

The computer industry and the way we use its products change in a heartbeat, and I can appreciate the problems involved in doing research which might seem out of date almost as soon as it’s published (a search for “gender differences +iPad” on Google Scholar turns up nothing). Nonetheless, there seems to be a real paucity of research here. Most of the studies I found involve surveys on attitudes to computers, rather than skills – presumably because skills are harder to assess. Whatever differences there are between the sexes when it comes to technology (if there are any at all), we need to make sure that we’re giving the next generation of students of both sexes the training they need to be effective researchers, clinicians and members of the workforce.

How to use Google effectively – an infographic.

I’ve written before about how to effectively search for information on the internet; however, I just found a fantastic infographic from a site called Hack College. There’s some other useful-looking stuff on the site too – it’s a tips/resource site for (American) college students. Anyway, the infographic is reproduced below (click for full-size version) and if you don’t already know everything in it, then consider yourself duly chastised, you young scamp.


Smartphones in psychology – how will you use yours?

This is the future. Oh yes, it is.

I’ve been thinking about doing a piece on smartphones in psychology for a while now – it seemed apposite given the death of Steve Jobs, and the release of the iPhone 4s – however the BPS research digest has just beaten me to it with a post entitled “Steve Jobs gift to cognitive science”. They cite several studies which have used several different kinds of smartphones (mostly iPhones) either to collect data using specific tasks or in some other way (monitoring activity/movement). The BPS article highlights applications of smartphones in research, but a quick search of the interwebs reveals that the studies it cites are just the tip of an ever-growing iceberg of ways in which people are using this technology.

First, there are the studies which use people’s reactions to the iPhone as a tool to examine some aspect of cognitive function – this one for instance, is concerned with the phenomenon of evaluative conditioning, but uses the central question of why people like the iPhone as a way of examining the literature.

Second, there are the studies which use the computing power of smartphones (which nowadays are seriously capable computing platforms) to instantiate some kind of psychologically relevant function. This article uses the iPhone as a platform for a novel evolutionary algorithm which detects multiple human faces, and has applications in robot visual systems.

Next there are the apps which aim to provide some kind of therapy, and there are a lot of these. Here are two which claim to provide CBT on the iPhone: CBTreferee and iCouch. This article discusses the use of an app which aims to promote behavioural management of migraines in adolescents, while this one is a review of the iRecovery addiction recovery app, in the context of sex addiction. Needless to say, a great deal of work clearly needs to be done in evaluating whether and how these kinds of tools could be used clinically; my mentioning them here is just to point out their existence, and definitely should not be taken as any kind of endorsement.

Then there are the massive numbers of psychology e-books which are now available through the Apple iTunes store and various other outlets (the Android market, Amazon Kindle store etc.). Many of the ‘classics’ in psychology by authors like Freud or Havelock Ellis are available for free, and there are also a huge number of modern textbooks available. By far the most eye-watering things that pop up are an (apparently) exhaustive six-volume treatise on “The psychology of adult spanking”. I’ll say nothing else about that, except caveat emptor.

A special last mention has to go to a bunch of researchers at the Technical University of Denmark who have demonstrated a working version of a smartphone brain scanner. Using a wireless EEG headset and a Nokia N900 they’ve been able to instantiate real-time visualisation and brain-state decoding in a totally mobile package. Pretty mind-blowing stuff – the video below shows various demos, and is well worth a watch.

Whatever comes along in the future, it’s clear that mobile computing platforms like smartphones are not going away anytime soon, and in fact they may even become the dominant computing platform before too long. Researchers and therapists would be well advised to engage with the technology as soon as they can.

TTFN.

How to do research on the internet – Google Scholar and other databases

So, you’ve got a lovely juicy essay/paper or research project to write, and instead of spending hours going through card catalogs in the library you obviously want to get your research done in the fastest way possible – on the internet. Here are my best tips for finding material for a paper or essay using online databases. In a nutshell – there’s more to finding information on the web than just typing some keywords into Google and using what pops out on the first page of results.

As a general rule, you should familiarise yourself with what databases are available to you – some are totally open-access, while others require some kind of subscription. Most universities and colleges subscribe to a lot of them, and you can usually find links to available databases on your college’s library website.

The way I usually start is with a couple of really general search terms on Google Scholar. You could start off searching on Google’s regular web search page, but all you generally get there is the Wikipedia page, and you wouldn’t be stupid enough to reference Wikipedia in an essay, would you? Of course not. Say your essay is on working memory – so stick “working memory +review” (without the quotes) into Google Scholar. This gets you about 2.2 million hits! Change the middle drop-down box at the top to restrict your search to the last, say, five years and then you have less than half a million references to go through – easy. Of course you don’t have to look through half a million papers – the great thing about Google Scholar is that it ranks things in order of ‘influence’, which roughly translates as the number of times each paper has been cited by other papers. While there are plenty of arguments about exactly what this means in terms of a paper’s genuine influence, influential papers do tend to get cited more than others, so it’s a reasonable metric. Hopefully you’ve got a couple of good review papers there on the first couple of pages that will get you into the topic.

The other really great thing about Google Scholar is that it links directly to PDFs of the papers (when they’re available on the net), which enables you to download the papers with a single (right-)click. Of course if you use reference management software like Mendeley (and if you don’t, you’re an idiot) you can also import references from Google Scholar directly into your library. Here’s a nice page which talks about some advanced tips and tricks for getting the most out of Google Scholar.