
Stimulus timing accuracy in PsychoPy – an update, and an example of open science in action

A few days ago I reported on a new paper that tested the timing accuracy of experimental software packages. The paper suggested that PsychoPy had some issues presenting very brief stimuli lasting only one (or a few) screen refreshes.

However, the creator of PsychoPy (Jon Peirce, of Nottingham University) has already responded to the data in the paper. The authors of the paper made all the data and (importantly) the program scripts used to generate it available on the Open Science Framework site. As a result it was possible for Jon to examine the scripts and see what the issue was. It turns out the authors used an older version of PsychoPy, where the ability to set a stimulus duration in frames (or screen refreshes) wasn’t available. Up-to-date versions have this feature, and as a result are now much more reliable. Those who need the full story should read Jon’s comment (and the response by the authors) on the PLoS comments page for the article.
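For anyone curious what this looks like in practice, here's a minimal sketch of frame-based timing in a recent version of PsychoPy. The window settings, the grating stimulus and the three-frame duration are illustrative assumptions, not taken from the paper or from Jon's comment.

# Minimal sketch of frame-based stimulus timing in a recent PsychoPy version.
# Window settings, stimulus and the 3-frame duration are illustrative assumptions.
from psychopy import visual, core

win = visual.Window(size=(1024, 768), fullscr=False, units="pix")
stim = visual.GratingStim(win, tex="sin", size=256)

n_frames = 3  # duration specified in screen refreshes, not seconds
for frame in range(n_frames):
    stim.draw()
    win.flip()  # each flip is locked to the screen refresh

win.flip()   # clear the stimulus on the next refresh
core.wait(0.5)
win.close()
core.quit()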

So, great news if, like me, you're a fan of PsychoPy. However, there's a wider picture revealed by this little episode, and a very interesting one, I think. Because the authors posted their code to the OSF website, and because PLoS ONE allows comments to be posted on its articles, the issue could be identified and clarified in a completely open and public manner, within a matter of days. This is one of the best examples I've seen of the power of an open approach to science, and of the ability of post-publication review to have a real impact.

Interesting times, fo’ shiz.

Some new bits of stats software and some miscellaneous links

Hi kids. Two new pieces of stats/plotting software for you, plus some other stuff.

First up is a new (to me, anyway) scientific plotting package called Veusz. It's written in Python, is completely free and open-source, works with any OS, and basically looks pretty useful. I've been using Prism for a while now, but I'll definitely try out Veusz next time I need to do some plotting – I'd prefer to use something open-source.

The new statistics software is called Wizard, which is unfortunately a paid application, and Mac-only. If you're dissatisfied with SPSS (and let's be honest, who isn't?) it might be worth the $79 price though. I haven't tried it out personally yet, but it looks really, really nice in terms of the interface, and seems fairly comprehensive in terms of features as well. Definitely one to think about for Mac users.

Next up is a new reference manager called Paperpile. I'm a big fan of Mendeley, but some of Paperpile's features are pretty attractive – it lives as a Chrome extension, and uses Google Drive for online storage of the PDFs. Pretty nice. Unfortunately it's still in a private beta phase and will cost $29 per year when it's released.

I was thinking about a new web page recently, and solicited opinions on the best current build-me-a-free-website service. The extremely helpful @Nonmonotonix suggested using GitHub Pages to both design and host sites – it looks like an excellent system. He even wrote a set of instructions on his blog here on how to get started with GitHub Pages. Another good suggestion was something called Bootstrap, which has the promising tagline "By nerds, for nerds."

Lastly, a couple of packages for neuroimagers. I’ve just been made aware of a really good collaborative, open-source software project for the analysis of EEG/MEG data – called BrainStorm. Looks like a very capable suite of tools. I’ve also just come across the PyMVPA project, which does exactly what it says on the tin – Multivariate Pattern Analysis in Python. Nice.

All of these links, and many, many more can of course be found on my newly-updated Links page.

Toodle-oo.

Why every student needs a Google account


This post might seem a trifle umm… politically insensitive after recent revelations in the UK about exactly how much corporation tax Google pays (answer – basically none), but I’ve been planning it for a while, and unlike Starbucks (which should be boycotted at all costs, because their coffee sucks) Google is a little harder to avoid, and actually provides a whole slew of incredibly worthwhile, and mostly free, services. One of the first things you should do when you start an undergraduate course at a college/university is sign up for a Google account. Here’s why:

1. Gmail
You've probably already got an email address, but if you're not using Gmail then you need to switch. The interface is brilliantly usable and customisable, and you get a massive 10GB of storage for all your mail – more than you'll likely ever need. The most important benefit, though, is Gmail's ability to pull all your current and future email accounts together in one place. Gmail can be set up as a POP3 client (here's how), meaning it can pull email in from several different accounts and present it all in one inbox. You've probably got an account already, you'll definitely get an account on your university's servers, and when you leave and either go on to postgraduate study (maybe at a different university) or get a job, you'll almost certainly be given yet another account. Gmail can centralize everything, meaning you only have to check one inbox for all your accounts. You can even configure it to send mail through, say, your university account by default, so people you contact see your 'official' email address. I've currently got five email accounts configured to read through Gmail, and I honestly couldn't manage without it. Additionally, if you start using Gmail from day one, all your contacts and mail are saved in your Gmail account, and won't be lost when you complete your course and your university account inevitably gets cancelled/deleted. Another benefit of Gmail is its ease of use with various smartphone platforms: Android (obviously) and iOS devices are designed to sync up with Google accounts pretty much seamlessly.

So, set up a Gmail account, and assume it’ll be your email address for life. Be sensible. Don’t choose a username like sexyluvkitten69@gmail.com, or gangzta4life@gmail.com – choose something you’ll be happy to put on a CV when you leave college, i.e. something that pretty much consists of your name.

2. GDrive/Docs
In one sense, Google Drive is a simple online storage locker for any kind of file you like, a bit like Dropbox, or any of the other similar services which have proliferated recently. You get 5GB of free space, and it's easy to set up file sharing for specific other users, or to make your files available for download to anyone you send a link to. In another sense, it's a full-featured web-based alternative to Microsoft Office, with the ability to create/edit documents, spreadsheets or presentations online, collaborate on them simultaneously with other users, and download them in a variety of the usual formats. Use it just for backing important things up, or use the full 'Docs' features – it's up to you.

One other incredibly powerful feature of Google Docs is the forms tool. It can be used to create online forms – the best way I currently know of to create online questionnaires for research purposes. The data from the questionnaires all gets dumped into a Google Docs spreadsheet for easy analysis too – very cool. This page has some good tips.

3. Google Scholar
Google Scholar is pretty much my first port-of-call for literature searches these days, and is often the best way of looking up papers quickly and easily. Yes, for in-depth research on a particular topic you still need to look at more specialised databases, but as a first-pass tool it's fantastic. You can use it without being logged in with a Google account, but if you're a researcher, you can get a Google Scholar profile page – like this: Isaac Newton's Google Scholar profile page (only an h-index of 33 Isaac? Better get your thumb out of your arse for the REF, old boy). This is the best way to keep track of your publications and some simple citation metrics.

4. Google Calendar
Yes, you need to start using a calendar. Google Calendar can pull several calendars together into one, sync seamlessly with your 'phone, and send you alerts and emails to make sure you never miss a tutorial or lecture again. Or at least, that you never miss one because you just forgot about it.

5. Blogger
Blogger is owned by Google, so if you want to start a blog (and it's something you should definitely think about), all you need to do is go to Blogger and hit a few buttons – simples. That's the easy bit – then you actually have to write something, of course…

6. Google Sites
Probably the easiest way to create free websites – as for Blogger above, you can literally create a site with a few clicks. Lots of good free templates that you can use and customise.

7. Google+
Yes, I know you use Facebook, but Google+ is the future. Maybe. The video hangouts are cool, anyway.

8. Other things
Use your Google account to post videos to YouTube, save maps/locations/addresses in Google Maps, find like-minded weirdos who are into the same things as you on Google Groups, read RSS feeds using Google Reader, and oooh… lots of other things.

Honestly, the features of Gmail alone should be inducement enough for everyone to sign up for a Google account; the rest is just a bonus. Get to it, people – it's never too late to switch.

TTFN.

***UPDATE***

Following a couple of comments (below, and on Twitter) I feel it necessary to qualify somewhat my effusive recommendation of Google. Use of Google services inevitably involves surrendering personal information and other data to Google, which is a large corporation, and despite these services being free at the point of use, it should always be remembered that the business of corporations is to deliver profits. Locking oneself into a corporate system should be considered carefully, no matter how ‘convenient’ it might be. This article from Gizmodo is worth a read, as is this blog post from a former Google employee.

How to pilot an experiment

I got a serious question for you: What the fuck are you doing? This is not shit for you to be messin’ with. Are you ready to hear something? I want you to see if this sounds familiar: any time you try a decent crime, you got fifty ways you’re gonna fuck up. If you think of twenty-five of them, then you’re a genius… and you ain’t no genius.
Body Heat (1981, Lawrence Kasdan)

To consult the statistician after an experiment is finished is often merely to ask him to conduct a post-mortem examination. He can perhaps say what the experiment died of.
R.A. Fisher (1938)

Don’t crash and burn your experiment.

Doing a pilot run of a new psychology experiment is vital. No matter how well you think you've designed and programmed your task, there are (almost) always things you didn't think of. Going ahead and spending a lot of time and effort collecting a set of data without running a proper pilot is (potentially) a recipe for disaster. Several times I've seen data-sets where some subtle issue with the data logging, or the counter-balancing, or something else, meant that the results were at best compromised, and at worst completely useless.

All of the resultant suffering, agony, and sobbing could have been avoided by running a pilot study in the right way. It's not sufficient to run through the experimental program a couple of times; a comprehensive test of an experiment has to include a test of the analysis as well. This is particularly true of any experiment involving methods like fMRI/MEG/EEG, where a poor design can lead to a data-set that's essentially uninterpretable, or perhaps even un-analysable. You may think you've logged all the variables you'll need for the analysis, and that your design is a work of art, but you can't be absolutely sure unless you actually do a test of the analysis.

This might seem like overkill, or a waste of effort; however, you're going to have to design your analysis at some point anyway, so why not do it at the beginning? Analyse your pilot data in exactly the way you're planning on analysing your main data, save the details (using SPSS syntax, R code, SPM batch jobs – or whatever you're using), and when you have your 'proper' data set, all you'll (in theory) have to do is plug it into your existing analysis setup.
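As a toy illustration of that 'write it once, reuse it' idea, here's a minimal Python sketch; the file names, column names and the paired t-test are hypothetical stand-ins for whatever your real analysis is.

# Minimal sketch of a reusable analysis script, developed against pilot data.
# File names, column names and the paired t-test are hypothetical examples.
import pandas as pd
from scipy import stats

def run_analysis(datafile):
    df = pd.read_csv(datafile)
    # e.g. a within-subjects comparison of mean RT between two conditions
    wide = df.pivot_table(index="subject", columns="condition", values="rt")
    return stats.ttest_rel(wide["congruent"], wide["incongruent"])

# Develop and debug the pipeline on the pilot data now...
print(run_analysis("pilot_data.csv"))
# ...then later just point the same function at the real data set:
# print(run_analysis("main_study_data.csv"))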

These are the steps I normally go through when getting a new experiment up and running. Not all of them will be appropriate for every experiment, your mileage may vary, etc. etc.

1. Test the stimulus program. Run through it a couple of times yourself, and get a friend/colleague to do it once too, and ask for feedback. Make sure it looks like it’s doing what you think it should be doing.

2. Check the timing of the stimulus program. This is almost always essential for an fMRI experiment, but may not be desperately important for some kinds of behavioural studies. Run through it with a stopwatch (the stopwatch function on your 'phone is probably accurate enough). If you're doing any kind of experiment involving rapid presentation of stimuli (visual masking, RSVP paradigms) you'll want to do some more extensive testing to make sure your stimuli are being presented in the way that you think – this might involve plugging a light-sensitive diode into an oscilloscope, sticking it to your monitor with a bit of blu-tack, and measuring the waveforms produced by your stimuli. For fMRI experiments the timing is critical. Even though the Haemodynamic Response Function (HRF) is slow (and somewhat variable), you're almost always fighting to pull enough signal out of the noise, so why introduce more? A cumulative error of only a few tens of milliseconds per trial can mean that your experiment is a few seconds out by the end of a 10-minute scan – which means your model regressors will be way off, and your results will likely suck.*
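To put a (purely hypothetical) number on that last claim, here's the back-of-envelope arithmetic:

# Back-of-envelope check of how a small per-trial timing error accumulates.
# The trial count and per-trial error are hypothetical numbers.
n_trials = 100          # trials in a ~10-minute run (assumption)
error_per_trial = 0.03  # 30 ms of slippage per trial (assumption)
total_drift = n_trials * error_per_trial
print(f"Cumulative drift by the end of the run: {total_drift:.1f} s")  # 3.0 s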

3. Look at the behavioural data files. I don’t mean do the analysis (yet), I mean just look at the data. First make sure all the variables you want logged are actually there, then dump it into Excel and get busy with the sort function. For instance, if you have 40 trials and 20 stimuli (each presented twice) make sure that each one really is being presented twice, and not some of them once, and some of them three times; sorting by the stimulus ID should make it instantly clear what’s going on. Make sure the correct responses and any errors are being logged correctly. Make sure the counter-balancing is working correctly by sorting on appropriate variables.
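If you'd rather do this checking in code than in Excel, a few lines of pandas will do the same job; the file name and column names below are assumptions about your own logfiles.

# Quick sanity-checks of a pilot logfile; file and column names are assumptions.
import pandas as pd

df = pd.read_csv("pilot_log.csv")

# Are all the variables you wanted actually being logged?
print(df.columns.tolist())

# Was each stimulus really presented twice (and not once, or three times)?
print(df["stimulus_id"].value_counts())

# Is the counter-balancing doing what it should?
print(pd.crosstab(df["condition"], df["response_key"]))

# Are correct responses and errors being coded sensibly?
print(df["correct"].value_counts(dropna=False))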

4. Do the analysis. Really do it. You're obviously not looking for any significant results from the data; you're just trying to validate your analysis pipeline and make sure you have all the information you need to do the stats. For fMRI experiments, look at your design matrix to see that it makes sense and that you're not getting warnings about non-orthogonality of the regressors from the software. For fMRI data using visual stimuli, you could look at some basic effects (i.e. all stimuli > baseline) to make sure you get activity in the visual cortex. Button-pushing responses should also be visible as activity in the motor cortex in a single subject – these kinds of sanity checks can be a good indicator of data quality. If you really want to be punctilious, bang it through a quick ICA routine and see if you get a) component(s) that look stimulus-related, b) something that looks like the default-mode network, and c) any suspiciously nasty-looking noise components (a and b = good, c = bad, obviously).
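The post doesn't name a particular analysis package; as one possible sketch, here's how you might eyeball a design matrix and check for correlated regressors using nilearn, with the TR, run length and events file as illustrative assumptions.

# Sketch of inspecting an fMRI design matrix before collecting real data.
# TR, run length and the events file are illustrative assumptions.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from nilearn.glm.first_level import make_first_level_design_matrix
from nilearn.plotting import plot_design_matrix

tr = 2.0       # seconds (assumption)
n_scans = 300  # a 10-minute run at TR = 2 s (assumption)
frame_times = np.arange(n_scans) * tr

# events.csv needs 'onset', 'duration' and 'trial_type' columns
events = pd.read_csv("events.csv")
design = make_first_level_design_matrix(frame_times, events, hrf_model="glover")

# High correlations between task regressors mean the conditions are hard to
# tell apart and the parameter estimates will be unstable.
print(design.corr().round(2))
plot_design_matrix(design)
plt.show()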

5. After all that, the rest is easy. Collect your proper set of data, analyse it using the routines you developed in point 4 above, write it up, and then send it to Nature.

And that, ladeez and gennulmen, is how to do it. Doing a proper pilot can only save you time and stress in the long run, and you can go ahead with your experiment in the certain knowledge that you’ve done everything in your power to make sure your data is as good as it can possibly be. Of course, it still might be total and utter crap, but that’ll probably be your participants’ fault, not yours.

Happy piloting! TTFN.

*Making sure your responses are being logged with a reasonable level of accuracy is also pretty important for many experiments, although this is a little harder to verify objectively. Hopefully, if you're using a reasonably well-validated piece of software and a decent response device, you shouldn't have too many problems.

A scanner-shaped cake (MRI, maybe PET/CT?)

Made for the occasion of the first birthday of Imanova last night, and greedily consumed by slightly hungover scientists this morning.

How to Google yourself – an infographic

I previously posted a really helpful infographic on how to use Google effectively to find information, and the creators of that one have just put up a new one, titled 'The Google Yourself Challenge'. I'm sure we've all at one time or another guiltily probed the internet with our own name, but managing our online identity is something we should all take seriously these days, particularly if we're looking for a job or applying for post-graduate positions, and it can be really helpful to see what's out there about yourself. You can see the original page with the infographic here, but I've also reproduced it below (clicky for bigness):

Video analysis software – Tracker

I just came across a gosh-darn drop-dead cool (and free!) piece of software that I just had to write a quick post on. It’s called Tracker, it’s cross-platform, open-source and freely available here.

In a nutshell, it’s designed for analysis of videos, and can do various things, like track the motion of an object across frames (yielding position, velocity and acceleration data) and generate dynamic RGB colour profiles. Very cool. As an example of the kinds of things it can do, see this post on Wired.com where a physicist uses it to analyse the speed of blaster bolts in Star Wars: Episode IV. Super-geeky I know, but I love it.

An example of some motion analyses conducted using Tracker

Whenever I see a piece of software like this I immediately think about what I could use it for in psychology/neuroscience. In this case, I immediately thought about kinematic analysis – that is, tracking the position/velocity/acceleration of the hand as it performs movements or manipulates objects. Another great application would be the analysis of movie stimuli for use in fMRI experiments. Complex and dynamic movies could be analysed in terms of the movement (or colour) stimuli they contain, producing measures that represent motion over time. Sub-sampled versions of these measures could then be entered into an fMRI GLM analysis as parametric regressors to examine how the visual cortex responds; with careful selection of stimuli, this could be quite a neat and interesting experiment.
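Tracker itself is a GUI application, but as a rough sketch of the movie-regressor idea in Python, here's one way to compute a crude frame-to-frame motion-energy measure with OpenCV and down-sample it to one value per TR; the video file, the TR and the simple difference measure are all assumptions for illustration.

# Rough sketch: crude motion-energy time course from a movie, down-sampled to
# one value per TR for use as a parametric regressor. File name, TR and the
# frame-difference measure are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("stimulus_movie.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)

ok, prev = cap.read()
if not ok:
    raise RuntimeError("Could not read the video file")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

motion = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # mean absolute frame-to-frame difference as a simple motion index
    motion.append(np.mean(cv2.absdiff(gray, prev)))
    prev = gray
cap.release()

tr = 2.0  # seconds (assumption)
frames_per_tr = int(round(fps * tr))
motion = np.array(motion)
n_trs = len(motion) // frames_per_tr
regressor = motion[: n_trs * frames_per_tr].reshape(n_trs, frames_per_tr).mean(axis=1)
print(regressor[:10])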

Not sure I’ll ever actually need to use it in my own experiments, but it looks like a really neat piece of software which could be a good solution for somebody with a relevant problem.

TTFN.

Neuropolarbear on coding

Neuropolarbear just posted an interesting piece on his/her Giraffes, Elephants and Baboons blog, about the importance of learning some coding for graduate students in psychology/neuroscience. Needless to say, I couldn’t agree more and in fact the piece broadly echoes some of the points in my own previous post on the topic, as well as making some interesting and practical suggestions for teaching the right skills to future scientists – good stuff.

Nature Article on the Paperless Lab

Very quick post to point out an interesting article in Nature this week, on how some labs are going paperless for their record-keeping and management. The examples given go well beyond just using an iPad instead of a paper notebook though – well worth a read. You can find the article here.

E-textbooks – a tiny update.

The future - you can touch it.

I blogged the other day about e-textbooks and how they might change the way we study and consume information, and have just come across this page on the Nature site (via the never-less-than-excellent GrrlScientist). It's an online biology textbook, published by Nature and full of beautiful illustrations; you can read it anywhere you have web access, on any device, and it's constantly updated, so it never goes out of date. The future – it's here!