Blog Archives

3D printed response box. The future. It’s here.

This may not look like much, but it’s actually pretty cool. It’s a new five-button response box being built for our MRI scanner by one of the technicians where I work. The cool thing is that the main chassis has been custom-designed and then fabricated using a polyethylene-extruding 3D printer. The micro-switches for the four fingers and thumb, and the wired connections for each, have obviously been added afterwards.

3D printing in plastic is a great way of creating hardware for use in the MRI environment, as, well… it’s plastic, and it can be used to create almost any kind of structure you can think of. We sometimes need to build custom bits of hardware for some of our experiments, and previously we’d usually do this by cutting plastic sheets or blocks to the required shape and holding them together with plastic screws. Using a 3D printer means we can produce solid objects which are much stronger and more robust, and make them much more quickly and easily too. I love living in the future.


High-tech version of the hollow-face illusion

I just accidentally made a kind-of version of the classic hollow-face illusion, using an anatomical MRI scan of my own head and Osirix. I exported a movie in Osirix, saved the movie as an image sequence using Quicktime, and then assembled it into an animated GIF using GIMP.

[Animated GIF: rotating 3D maximum intensity projection of my head]

Click the image for a much bigger (11mb!) version.
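For anyone who'd rather script the last two steps than go through QuickTime and GIMP, something like the sketch below would do the same job in Python. It assumes the frames have already been exported from Osirix as a numbered image sequence; the file names and frame timing are just placeholders.

```python
# Hedged sketch: assemble an exported image sequence into an animated GIF.
# Assumes frames were saved as frames/frame_000.png, frame_001.png, ...
import glob
import imageio

frames = [imageio.imread(f) for f in sorted(glob.glob("frames/frame_*.png"))]
# Frame duration units depend on the imageio/Pillow version in use.
imageio.mimsave("rotating_mip.gif", frames, duration=0.05, loop=0)
```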

This is a maximum intensity projection, and because of the way MIPs work, it appears that the head is rotating 180 degrees to the left, and then the direction switches and it rotates back 180 degrees to the right. In actual fact, the image is rotating constantly in one direction, as can be seen by looking at the cube on the top left which cycles through L (Left), P (Posterior), R (Right), and A (Anterior). It’s not really a hollow-face illusion as the effect is pretty much an artefact of the MIP, but still, I thought it was cool.
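The direction-ambiguity falls straight out of how a MIP is computed: each pixel in the projection is just the brightest voxel along the corresponding ray through the volume, so depth order is thrown away and the projection looks the same whether the head is viewed from the front or the back. A minimal NumPy illustration (with a random array standing in for a real scan):

```python
import numpy as np

# Placeholder volume; a real anatomical scan could be loaded with e.g. nibabel.
volume = np.random.rand(128, 128, 128)

# Maximum intensity projection along one axis: keep only the brightest voxel
# on each ray. Taking the max ignores depth order, so the projection is
# identical for the volume and its front-to-back mirror image - hence the
# ambiguous rotation direction.
mip = volume.max(axis=2)
assert np.allclose(mip, volume[:, :, ::-1].max(axis=2))
```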

Comment on the Button et al. (2013) neuroscience ‘power-failure’ article in NRN

Statistical Spidey knows the score.

An article was published in Nature Reviews Neuroscience yesterday which caused a bit of a stir among neuroscientists (or at least among neuroscientists on Twitter, anyway). The authors cleverly used meta-analytic papers to estimate the ‘true’ size of an effect, and then (using the G*Power software) calculated the power of each individual study that made up the meta-analysis, based on its sample size. Their conclusions are pretty damning for the field as a whole: an overall power of 21%, dropping to 8% in some sub-fields. This means that out of 100 studies conducted on a genuine effect, only about 21 would be expected to detect it.
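For readers who want to see what that per-study calculation looks like, here is a rough Python equivalent of the kind of computation the authors ran in G*Power, assuming a simple two-sample t-test; the effect size and sample size below are invented purely for illustration.

```python
from statsmodels.stats.power import TTestIndPower

true_d = 0.5        # hypothetical 'true' effect size taken from a meta-analysis
n_per_group = 15    # hypothetical sample size of one primary study

power = TTestIndPower().power(effect_size=true_d, nobs1=n_per_group,
                              alpha=0.05, ratio=1.0)
print(f"Estimated power: {power:.2f}")  # roughly 0.25 with these numbers
```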

The article has been discussed and summarised at length by Ed Yong, Christian Jarrett, and by Kate Button (the study’s first author) on Suzy Gage’s Guardian blog, so I’m not going to re-hash it any more here. The original paper is actually very accessible and well-written, and I encourage interested readers to start there. It’s definitely an important contribution to the debate; however (as always) there are alternative perspectives.

I generally have a problem with over-reliance on power analyses (they’re often required for grant applications and other project proposals). Prospective power analyses (i.e. those conducted before a piece of research is carried out, in order to tell you how many subjects you need) use an estimate of the effect size you expect to achieve, usually derived from previous work that has examined a (broadly) similar problem using (broadly) similar methods. This estimate is essentially a wild shot in the dark (especially because of some of the issues and biases discussed by Button et al., which are likely to operate in the literature), and the resulting power analysis therefore tells you (in my opinion) nothing very useful. Button et al. get around this issue by using the effect size from meta-analyses to estimate the ‘true’ effect size in a given literature area – a neat trick.
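To see why the guessed effect size matters so much, here is a quick sketch (again assuming a two-sample t-test, and using statsmodels rather than G*Power) of how the number of subjects a prospective power analysis tells you to test swings with that guess; the d values are arbitrary.

```python
from statsmodels.stats.power import TTestIndPower

for guessed_d in (0.3, 0.5, 0.8):
    n = TTestIndPower().solve_power(effect_size=guessed_d, power=0.8, alpha=0.05)
    print(f"d = {guessed_d}: ~{n:.0f} subjects per group for 80% power")
# Roughly 175, 64 and 26 subjects per group respectively.
```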

The remainder of this post deals with power-issues in fMRI, since it’s my area of expertise, and necessarily gets a bit technical. Readers who don’t have a somewhat nerdy interest in fMRI-methods are advised to check out some of the more accessible summaries linked to above. Braver readers – press on!

An alternative approach used in the fMRI field, and one that I’ve been following when planning projects for years, is a more empirical method. Murphy and Garavan (2004) took a large sample of 58 subjects who had completed a Go/No-Go task and analysed sub-sets of different sizes to see how the reproducibility of the results varies with sample size. They showed that reproducibility (assessed by correlating the statistical maps with the ‘gold standard’ of the entire dataset; Fig. 4) reaches 80% at about 24 or 25 subjects. By this criterion, many fMRI studies are underpowered.
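In spirit, that resampling analysis looks something like the sketch below. It is heavily simplified – the data, map size and subset sizes are all made up, and the published analysis obviously used real voxelwise group statistics rather than random numbers – but it shows the logic of correlating subset maps against the full-sample gold standard.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 58, 10_000
signal = 0.3 * rng.standard_normal(n_voxels)                          # shared activation pattern
subject_maps = signal + rng.standard_normal((n_subjects, n_voxels))   # plus per-subject noise
gold_standard = subject_maps.mean(axis=0)                             # full-sample 'gold standard'

for subset_size in (8, 16, 24, 32):
    rs = []
    for _ in range(100):                                              # 100 random subsets per size
        idx = rng.choice(n_subjects, size=subset_size, replace=False)
        rs.append(np.corrcoef(subject_maps[idx].mean(axis=0), gold_standard)[0, 1])
    print(f"n = {subset_size}: mean map correlation r = {np.mean(rs):.2f}")
```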

While I like this empirical approach to the issue, there are of course caveats and other things to consider. fMRI is a complex, highly technical research area, heavily influenced by the advance of technology. MRI scanners have improved significantly in the last ten years, with 32- or even 64-channel head coils becoming common, faster gradient switching, shorter TRs, higher field strengths, and better field/data stability all meaning that the signal-to-noise ratio has improved considerably. This cuts down one source of noise in fMRI data – intra-subject variance. The inter-subject variance of course remains what it always was, but that’s something that can’t really be mitigated, and may even be of interest in some (between-group) studies. On the analysis side, new multivariate methods are much more sensitive at detecting differences than the standard mass-univariate approach. This improvement in effective SNR means that the Murphy and Garavan (2004) estimate of 25 subjects for 80% reproducibility may be somewhat inflated, and with modern techniques one could perhaps get away with fewer.

The other issue with the Murphy and Garavan (2004) approach is that it’s not very generalisable. The Go/No-Go task is widely used and is a ‘standard’ cognitive/attentional task that activates a well-described brain network, but other tasks may produce more or less activation, in different brain regions. Signal-to-noise varies widely across the brain and across task paradigms, with simple visual or motor experiments producing very large signal changes and complex cognitive tasks smaller ones. Yet other factors are the experimental design (blocked or event-related), the overall number of trials/stimuli presented, and the total scanning time for each subject, all of which can vary widely.

The upshot is that there are no easy answers, and this is something I try to impress upon people at every opportunity, particularly the statisticians who read my project proposals and object to me not including power analyses. I think prospective power analyses are not only uninformative, but give a false sense of security, and for that reason should be treated with caution. Ultimately, the decision about how many subjects to test is generally highly influenced by other factors anyway (most notably time and money). You should test as many subjects as you reasonably can, and regard power analysis results as, at best, a rough guide.

Warhol brains.

Here is a pretty Warhol-esque picture I made using a) my own head, b) a Siemens Verio MRI scanner, c) Osirix and d) GIMP.

(Clicky for bigness)

Free, interactive MRI courses from Imaios.com (plus lots of other medical/anatomy material too)

A very quick post to point you towards a really fantastic set of online, interactive courses on MRI from a website called Imaios.com – very nice, very slick material. The MRI courses are all free, but you’ll need to register to see the animations. There are lots of other medical/anatomy-related courses on the site as well – some free, some ‘premium’ – plus some nice-looking mobile apps.

A scanner-shaped cake (MRI, maybe PET/CT?)

Made for the occasion of the first birthday of Imanova last night, and greedily consumed by slightly hungover scientists this morning.

Two like, *totes* awesome websites: ViperLib and mindhive

I’ve come across a couple more web links which I thought were important enough to share with you straight away rather than saving them up for a massive splurge of links.

The first is ViperLib, a site which focusses (geddit?) on visual perception and is run by Peter Thompson and Rob Stone of the University of York, with additional input (apparently) from Barry the snake. This is essentially a library of images and movies related to vision science, and currently contains a total of 1850 images – illusions, brain scans, anatomical diagrams, and much more. Registration is required to view the images, but it’s free and easily done, and I would encourage anyone to spend an hour or so of their time poking around amongst the treasures there. I shall be digging through my old hard drives when I get a chance and contributing some optic-flow stimuli from my old vision work to the database.

The second is for the (f)MRI types out there; a fantastic ‘Imaging Knowledge Base’ from the McGovern Institute for Brain Research at MIT. The page has a huge range of great information about fMRI design and analysis, from the basics of Matlab, to how to perform ROI analyses, and all presented in a very friendly, introductory format. If you’re just getting started with neuroimaging, this is one of the best resources I’ve seen for beginners.

fMRI Software (FSL, SPM, BrainVoyager) for beginners – how to choose?

Functional Magnetic Resonance Imaging (fMRI) has now become a pretty mainstream activity for researchers interested in the workings of the human brain, and since its inception in the early ’90s a whole load of software has been developed which can enable even the most clueless or Unix-averse researcher to (reasonably) easily perform complex analyses on fMRI datasets. I wrote a brief earlier post about fMRI software based on a presentation, and thought I’d expand on it a little more in a future series. There’s obviously a great deal to say about these pieces of software in terms of advanced features, UI etc., and I’ll get to all that at some point. This post will focus on the very basic aspects of three popular choices for fMRI analysis: BrainVoyager, FSL and SPM*; what platforms they support, and the basic features of each.

2D maps of brain connectivity

Just a quickie – found this site, which has some awesome Google Maps-style interfaces for Diffusion Tensor Imaging (DTI) data, showing neural connectivity in the brain. Very nice. There’s also a downloadable application which looks good too. Worth checking out.

Best iPhone/iPad/iPod Touch apps for psychology students

The iPhone is much more than just a phone – it’s a powerful mobile computing platform which has completely changed the way we interact with our mobile devices. If you’re a student who has one (or an iPod touch, or even an iPad, you lucky, lucky thing), there are many ways you can use it to make your life easier.

Mendeley. If you use Mendeley (and if you’re any kind of student and you don’t use it, or something like it, then you’re basically nuts) then a download of their free app is a must. The app connects to your online library of references and allows you full access to any PDFs you’ve synced to their servers for download and reading. You can sync papers to your library using the desktop version and read them later on your iPhone or iPad. Sweet. And it’s free!