Prediction is very difficult, especially about the future.
– Niels Bohr
The way I use computing devices is currently something of a mess. I regularly work in two different locations and have a desktop machine at each place, plus a high-powered desktop and a lower-powered media PC at home, which all run Windows. I have a MacBook Pro which runs OS X (and, occasionally, Windows through Parallels), plus an iPhone, and I sometimes use my wife’s iPad (both iOS, of course) and will probably get one myself at some point (or maybe a Kindle, not sure). Plus, there are a couple of desktop machines which I use fairly regularly in different labs for running experiments (Windows). All told then, there are roughly eight or nine different computing devices which I regularly use, with three or four different operating systems. Managing files and data so that what I need is accessible on any particular device at any point in time is a massive hassle. What I’ve been doing for the last two years is an ad hoc mixture of cloud-based solutions (Gmail, Google Docs, Evernote, Mendeley) and carrying around a 500GB USB hard drive which contains all my documents and experimental data. Wherever I am, I plug in my hard drive and have everything I need, and I don’t store anything locally on any of the machines.
This solution kind of works, but is unsatisfactory in a number of ways. Firstly, it’s precarious – I’m reasonably careful about doing regular backups, but I live in constant terror of my USB hard drive being lost, or just breaking. Secondly, I still have to deal with different operating systems and environments – I tend to take my MacBook everywhere with me as there are some Unix applications I use for data analysis that don’t work well on my (desktop) Windows machines. This pretty much defeats the purpose of having all my data on the (much more portable) USB hard drive. Thirdly, getting data on and off the iOS devices is a mega-hassle because of Apple’s teeth-grindingly-awful sync-everything-through-iTunes system.
The brain is like a computer; this is the fundamental metaphor at the heart of 1980s cognitive psychology. To an extent this was a useful way of thinking about the brain: it certainly stores and processes information, much as a computer does, and you can even (perhaps) draw some rough parallels between parts of the brain and computer components.
However, in at least one important respect, the brain appears to function very differently from a computer. A computer’s processing power is highly centralised in a single processor (or perhaps a dual/quad-core processor – it doesn’t matter, it’s still centralised). The processor does all the computational work, and the hard disk stores all the data that the processor works on. This means that data is constantly being shuttled back and forth between the hard disk and the processor (using the much faster RAM as an intermediary, so the processor isn’t left waiting on the disk for every access), and this transfer of data is slow and inefficient, creating a bottleneck which restricts the maximum speed at which computers can run.
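You can feel this bottleneck from code. The sketch below (plain Python, purely illustrative – the file name and the number of values are arbitrary choices of mine) sums the same set of numbers twice: once when they’re already sitting in memory next to the processor, and once when every value has to make a round trip through a file on disk first, standing in for the hard-disk-to-processor shuttle described above.

```python
import os
import struct
import time

N = 200_000
values = list(range(N))

# Pure computation: the data is already in memory, close to the processor.
t0 = time.perf_counter()
total_in_memory = sum(values)
t1 = time.perf_counter()

# Same computation, but every value is shuttled through storage first.
path = "values.bin"
with open(path, "wb") as f:
    for v in values:
        f.write(struct.pack("<q", v))  # each value as a little-endian 64-bit int

t2 = time.perf_counter()
total_from_disk = 0
with open(path, "rb") as f:
    while chunk := f.read(8):  # fetch one 8-byte value at a time
        total_from_disk += struct.unpack("<q", chunk)[0]
t3 = time.perf_counter()
os.remove(path)

print(f"in memory: {t1 - t0:.4f}s  via storage: {t3 - t2:.4f}s")
```

The arithmetic is identical in both cases; the difference in elapsed time is entirely down to moving the data around, which is exactly the bottleneck in question.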
We have the tools, we have the talent!
– Winston Zeddemore
In the last post I talked about the timing of experiments in general, and mentioned that the timing of responses is critical for success in a reaction-time-type experiment. This post will discuss some of the hardware options available for collecting responses.
The simplest solution is just to use the human-interface hardware you’ve already got, i.e. have your participants click a mouse button, or press a key on the keyboard, as their response. However, there are several reasons why this will generally be undesirable. Most mice and keyboards on modern computers connect via USB, and this introduces a slight lag. The computer ‘looks at’, or ‘polls’, its USB-connected peripherals at a standard rate of 125Hz (meaning 125 times per second, or every 8 milliseconds). This means that if you make a response, there may be a variable lag of anywhere between 0 and 8ms between the response and the computer actually ‘seeing’ it. In addition, mice and keyboards have a lot of internal circuitry which can introduce further timing lags of variable duration. This paper (Plant et al., 2003) presents the results of some bench-testing of a set of mice. The best mouse they tested had a minimum button-down latency of just 0.4ms, whereas the worst one showed a minimum lag of 48ms! Running timing-critical experiments with the latter mouse (where the effect you might be chasing could be in the range of 20-30ms) would clearly be disastrous.
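To get a feel for what that polling does to recorded reaction times, here’s a small simulation (plain Python; the 8ms interval is the standard USB polling period mentioned above, while the 300ms ‘true’ reaction time and the function name are arbitrary choices of mine). The response happens at exactly the same moment on every trial, but the computer only notices it at the next poll, whose timing varies from trial to trial:

```python
import math
import random

POLL_INTERVAL_MS = 8.0  # standard 125 Hz USB polling rate

def observed_rt(true_rt_ms, phase_ms):
    """Time at which the computer first 'sees' a response made at true_rt_ms,
    given that it polls the device at phase, phase + 8, phase + 16, ... ms."""
    polls_needed = max(math.ceil((true_rt_ms - phase_ms) / POLL_INTERVAL_MS), 0)
    return phase_ms + polls_needed * POLL_INTERVAL_MS

# Many trials with a true RT of exactly 300 ms, but a random polling phase:
random.seed(1)
rts = [observed_rt(300.0, random.uniform(0.0, POLL_INTERVAL_MS))
       for _ in range(10_000)]

print(f"recorded RTs span {min(rts):.2f}-{max(rts):.2f} ms, "
      f"jitter = {max(rts) - min(rts):.2f} ms")
```

Even with a perfectly consistent ‘participant’, the recorded values are smeared across an 8ms window – and that’s before the device’s internal circuitry adds its own, potentially much larger, lag.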
There is a tide in the affairs of men.
Which, taken at the flood, leads on to fortune;
– Julius Caesar, Act 4, Scene 3
This will be the first in a (probably fairly lengthy) series of posts about how to design, program and run a successful psychology experiment. For this initial post I want to go over some basics about how computers work, and what that means for running successful experiments. Of course there are many different kinds of experiments that it’s possible to run, but for the purposes of the present discussion I’m going to use as an example a canonical kind of cognitive experiment, where the dependent variable is reaction time, measured using a button-press. This kind of experiment is still widely used, in paradigms like the dot-probe attentional task, and the Implicit Association Test.
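The bookkeeping at the heart of such an experiment is simple: timestamp the stimulus onset, timestamp the response, and take the difference. The toy sketch below shows just that logic in plain Python – reading from the terminal with input() is nowhere near millisecond-accurate (the reasons why are what this series is about), and the function name is my own invention, but this skeleton is what every reaction-time paradigm shares.

```python
import time

def run_trial(stimulus: str) -> float:
    """One bare-bones reaction-time trial; returns the RT in milliseconds."""
    print(stimulus)                    # 'present' the stimulus
    t_onset = time.perf_counter()      # timestamp: stimulus onset
    input("press Enter as fast as you can: ")
    t_response = time.perf_counter()   # timestamp: response
    return (t_response - t_onset) * 1000.0
```

Real experiment software replaces both the presentation and the response-collection steps with much more tightly controlled machinery, but the measurement it reports is still just this difference between two timestamps.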
The first thing you need to understand is that modern operating systems are very, very complicated. When you boot up Windows (for example) all you see at first is a nice, clean, uncluttered desktop, but examining the Windows Task Manager reveals a whole host of ‘background’ processes which are buzzing away invisibly all the time. These might be search indexers, printer services, network connections, anti-virus software, firewalls, and a whole mess of other stuff. If you then open a few different applications (a web browser with a few tabs open, Microsoft Word, an email program, some instant messaging application) the computer has a few more processes to juggle, as well as all the background stuff. This is normally fine, as long as you have enough RAM to handle all the requirements of these different processes – modern OSs are multi-tasking, meaning they can handle having lots of things going on at once.
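One consequence of all that multi-tasking is that your program can never be sure exactly when it will get the processor back. A quick illustration (plain Python; the 1ms request and 100 repetitions are arbitrary choices of mine): ask the operating system to pause for one millisecond, over and over, and measure how long each pause actually lasts. The OS only promises ‘at least this long’ – the scheduler is free to service any of those background processes before waking your program up again.

```python
import time

requested_ms = 1.0
overshoots = []
for _ in range(100):
    t0 = time.perf_counter()
    time.sleep(requested_ms / 1000.0)             # ask for a 1 ms pause
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    overshoots.append(elapsed_ms - requested_ms)  # how late did we wake up?

print(f"mean overshoot: {sum(overshoots) / len(overshoots):.3f} ms, "
      f"worst: {max(overshoots):.3f} ms")
```

On a desktop OS under load the worst case can easily reach several milliseconds – the same order of magnitude as the effects many reaction-time experiments are chasing, which is why this matters.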
Statistics – the very word is guaranteed to bring a shudder of terror to the average undergraduate, and even full-grown lecturers have been known to quake in fear before its awesome power. Most psychology undergraduates don’t come from a hard-science or mathematics background, and statistics are probably the number one thing that they struggle with during their psychology courses. Personally, I got through my undergraduate stats exams with a mixture of vague understanding and rote memorisation, and it was only during my PhD that I actually started learning how to do things properly and, more importantly, actually understanding what I was doing, and why.
This is not the place to give any detailed information on the basics of statistics. That kind of material has been covered many, many times before by people infinitely more qualified than I. For that kind of stuff, a good place to start would be Andy Field’s book, available here. Andy explains things very clearly and is actually a very nice chap as well. What I’d like to do instead is do a quick run-down of popular stats software, and point out some resources which can help if you run into trouble.