Category Archives: Software

Stimulus timing accuracy in PsychoPy – an update, and an example of open science in action

A few days ago I reported on a new paper that tested the timing accuracy of experimental software packages. The paper suggested that PsychoPy had some issues with presenting very brief stimuli that lasted only one (or a few) screen refreshes.

However, the creator of PsychoPy (Jon Peirce, of Nottingham University) has already responded to the data in the paper. The authors of the paper made all the data and (importantly) the program scripts used to generate it available on the Open Science Framework site. As a result it was possible for Jon to examine the scripts and see what the issue was. It turns out the authors used an older version of PsychoPy, where the ability to set a stimulus duration in frames (or screen refreshes) wasn’t available. Up-to-date versions have this feature, and as a result are now much more reliable. Those who need the full story should read Jon’s comment (and the response by the authors) on the PLoS comments page for the article.
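
For anyone curious, frame-based timing in a current version of PsychoPy looks something like the sketch below (my own illustration, not code from the paper or from Jon’s scripts), assuming a standard 60Hz monitor:

from psychopy import visual, core

# open a window; frame-based timing relies on win.flip() blocking
# until the monitor's next vertical blank
win = visual.Window(size=(1024, 768), color='black', units='pix')
stim = visual.TextStim(win, text='X')

n_frames = 6  # 6 refreshes = ~100ms on a 60Hz display

# draw the stimulus for exactly n_frames screen refreshes
for frame in range(n_frames):
    stim.draw()
    win.flip()

win.flip()   # clear the screen on the following refresh
core.wait(0.5)
win.close()

Because the duration is specified in refreshes rather than milliseconds, the stimulus can never end part-way through a frame – exactly the kind of mismatch that millisecond-based durations are prone to.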

So, great news if, like me, you’re a fan of PsychoPy. However, there’s a wider picture that’s revealed by this little episode, and a very interesting one I think. As a result of the authors posting their code up to the OSF website, and the fact that PLoS One allows comments to be posted on its articles, the issue could be readily identified and clarified, in a completely open and public manner, and within a matter of days. This is one of the best examples I’ve seen of the power of an open approach to science, and the ability of post-publication review to have a real impact.

Interesting times, fo’ shiz.

A new paper on timing errors in experimental software

A new paper just out in PLoS One (thanks to Neuroskeptic for pointing it out on Twitter) shows the results of some tests conducted on three common pieces of software used in psychology and neuroscience research: DMDX, E-Prime, and PsychoPy. The paper is open-access, and you can read it here. The aim was to test the timing accuracy of the software when presenting simple visual stimuli (alternating black and white screens). As I’ve written about before, accurate timing in experiments can often be of great importance and is by no means guaranteed, so it’s good to see some objective work on this, conducted using modern hardware and software.

The authors followed a pretty standard procedure for this kind of thing, which is to use a piece of external recording equipment (in this case the Black Box Tool Kit) connected to a photodiode. The photodiode is placed on the screen of the computer being tested and detects every black-to-white or white-to-black transition, and the data is logged by the BBTK device. This provides objective information about when exactly the visual stimulus was displayed on the screen (as opposed to when the software running the display thinks it was displayed on the screen, which is not necessarily the same thing). In this paper the authors tested this flickering black/white stimulus at various speeds, from 1000ms down to 16.6667ms (one screen refresh, or ‘tick’, on a standard 60Hz monitor).
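
For a feel of what the software side of such a test involves, here’s a rough PsychoPy sketch of my own (not the authors’ script); note that it only logs when the software thinks each flip happened, which is precisely the thing the external BBTK hardware is there to check:

from psychopy import visual, core

win = visual.Window(fullscr=True)
rect = visual.Rect(win, width=2, height=2, units='norm')
clock = core.Clock()
flip_times = []

# alternate full-screen white and black, one refresh each,
# logging the time of every flip
for i in range(120):
    rect.fillColor = 'white' if i % 2 == 0 else 'black'
    rect.draw()
    win.flip()
    flip_times.append(clock.getTime())

win.close()

# intervals much longer than one refresh (~16.7ms at 60Hz) suggest dropped frames
intervals = [b - a for a, b in zip(flip_times, flip_times[1:])]
print(max(intervals))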

It’s a nice little paper and I would urge anyone interested to read it in full; the introduction has a really good review of some of the issues involved, particularly about different display hardware. At first glance the results are a little disappointing though, particularly if (like me) you’re a fan of PsychoPy. All three bits of software were highly accurate at the slower black/white switching times, but as the stimulus got faster (and therefore more demanding on the hardware) errors began to creep in. At durations of less than 100ms, errors (in the form of dropped or added frames) appear in E-Prime and PsychoPy; DMDX on the other hand just keeps on truckin’, and is highly accurate even in the fastest-switching conditions. PsychoPy is particularly poor under the most demanding conditions, with only about a third of ‘trials’ being presented ‘correctly’ in the fastest condition.

Why does this happen? The authors suggest that DMDX is so accurate because it uses Microsoft’s DirectX graphics libraries, which are highly optimised for accurate performance on Windows. Likewise, E-Prime uses other features of Windows to optimise its timing. PsychoPy on the other hand is platform-agnostic (it will run natively on Windows, Mac OS X, and various flavours of Linux) and therefore uses a fairly high-level language (Python). In simple terms, PsychoPy can’t get quite as close to the hardware as the others, because it’s designed to work on any operating system; there are more layers of software abstraction between a PsychoPy script and the hardware.

Is this a problem? Yes, and no. Because of the way the PsychoPy ‘coder’ interface is designed, advanced users who require highly accurate timing have the opportunity to optimise their code, based on the hardware they happen to be using. There’s no reason why a Python script couldn’t take advantage of the timing features in Windows that make E-Prime accurate too – they’re just not included in PsychoPy by default, because it’s designed to work on Unix-based systems as well. For most applications, dropping/adding a couple of frames in a 100ms stimulus presentation is nothing in particular to worry about anyway; certainly not for the applications I mostly use it for (e.g. fMRI experiments where, of course, timing is important in many ways, but the variability in the haemodynamic response function tends to render a lot of experimental precision somewhat moot). The authors of this paper agree, and conclude that all three systems are suitable for the majority of experimental paradigms used in cognitive research. For me, the benefits of PsychoPy (cross-platform, free licensing, user-friendly interface) far outweigh the (potential) compromise in accuracy. I haven’t noticed any timing issues with PsychoPy under general usage, but I’ve never had a need to push it as hard as these authors did for their testing purposes. It’s also worth noting that all the testing in this paper was done on a single hardware platform; other hardware could well give very different results.

Those who are into doing very accurate experiments with very short display times (e.g. research on subconscious priming, or visual psychophysics) tend to use pretty specialised and highly-optimised hardware and software anyway. If I ever had a need for such accuracy, I’d definitely undertake some extensive testing of the kind that these authors performed, no matter what hardware/software I ended up using. As always, the really important thing is to be aware of the potential issues with your experimental set-up, and to do the required testing before collecting your data. Never take anything for granted; careful testing, piloting, examination of log files etc. are potential life-savers.

TTFN.

 

Towards open-source psychology research

A couple of interesting things have come along recently which have got me thinking about the ways in which research is conducted, and how software is used in psychology research.

The first is some recent publicity around the Many Labs replication project – a fantastic effort to try and perform replications of some key psychological effects, with large samples, and in labs spread around the world. Ed Yong has written a really great piece on it here for those who are interested. The Many Labs project is part of the Open Science Framework – a free service for archiving and sharing research materials (data, experimental designs, papers, whatever).

The second was a recent paper by Tom Stafford and Mike Dewar in Psychological Science. This is a really impressive piece of research from a very large sample of participants (854,064!) who played an online game. Data from the game was analysed to provide metrics of perception, attention and motor skills, and to see how these improved with training (i.e. more time spent playing the game). The original paper is here (paywalled, unfortunately), but Tom has also written about it on the Mind Hacks site and on his academic blog. The latter piece is interesting (for me anyway) as Tom says that he found his normal approach to analysis just wouldn’t work with this large a dataset and he was obliged to learn Python in order to analyse the data. Python FTW!

Anyway, the other really nice thing about this piece of work is that the authors have made all the data, and the code used to analyse it, publicly available in a GitHub repository here. This is a great thing to do, particularly for a large, probably very rich dataset like this – potentially there are a lot of other analyses that could be run on these data, and making it available enables other researchers to use it.

These two things crystallised an important realisation for me: it’s now possible, and even, I would argue, preferable, for the majority of not-particularly-technically-minded psychology researchers to perform their research in a completely open manner. Solid, free, user-friendly, cross-platform software now exists to facilitate pretty much every stage of the research process, from conception to analysis.

Some examples: PsychoPy is (in my opinion) one of the best pieces of experiment-building software around at the moment, and it’s completely free, cross-platform, and open-source. The R language for statistical computing is getting to be extremely popular, and is likewise free, cross-platform, etc. For analysis of neuroimaging studies, there are several open-source options, including FSL and NiPype. It’s not hard to envision a scenario where researchers who use these kinds of tools could upload all their experimental files (experimental stimulus programs, resulting data files, and analysis code) to GitHub or a similar service. This would enable anyone else in the world who had suitable (now utterly ubiquitous) hardware to perform a near-as-dammit exact replication of the experiment, or (more likely) tweak the experiment in an interesting way (with minimal effort) in order to run their own version. This could potentially really help accelerate the pace of research, and the issue of poorly-described and ambiguous methods in papers would become a thing of the past, as anyone who was interested could simply download and demo the experiment themselves in order to understand what was done. There are some issues with uploading very large datasets (e.g. fMRI or MEG data) but initiatives are springing up, and the problem seems like it should be a very tractable one.

The benefit for researchers should hopefully be greater visibility and awareness of their work (indexed in whatever manner; citations, downloads, page-views etc.). Clearly some researchers (like the authors of the above-mentioned paper) have taken the initiative and are already doing this kind of thing. They should be applauded for taking the lead, but they’ll likely remain a minority unless researchers can be persuaded that this is a good idea. One obvious prod would be if journals started encouraging this kind of open sharing of data and code in order to accept papers for publication.

One of the general tenets of the open-source movement (that open software benefits everyone, including the developers) is doubly true of open science. I look forward to a time when the majority of research code, data, and results are made public in this way and the research community as a whole can benefit from it.

How to hack conditional branching in the PsychoPy builder

Regular readers will know that I’m a big fan of PsychoPy, which (for non-regular readers; *tsk*) is a piece of free, open-source software for designing and programming experiments, built on the Python language. I’ve been using it a lot recently, and I’m happy to report my initial ardour for it is still lambently undimmed.

The PsychoPy ‘builder’ interface (a generally brilliant, friendly, GUI front-end) does have one pretty substantial drawback though; it doesn’t support conditional branching. In programming logic, a ‘branch’ is a point in a program which causes the computer to start executing a different set of instructions. A ‘conditional branch’ is where the computer decides what to do out of two or more alternatives (i.e. which branch to follow) based on some value or ‘condition’. Essentially the program says ‘if A is true: do X; otherwise (or ‘else’ in programming jargon), if B is true: do Y’. One common use of conditional branching in psychology experiments is to repeat trials that the subject got incorrect; for instance, one might want one’s subjects to achieve 90% correct on a block of trials before they continue to the next one, so the program would have something in it which said ‘if (correct trials ≥ 90%): continue to the next block; else: repeat the incorrect trials’.
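
In code, the repeat-to-criterion pattern is just a loop wrapped around a conditional. Here’s a bare-bones, self-contained Python sketch – present_trial() is a made-up stand-in that simulates running a trial, not a PsychoPy function:

import random

CRITERION = 0.9
trials = list(range(20))               # stand-in list of trials

def present_trial(trial):
    # stand-in for actually presenting a trial; True = correct response
    return random.random() < 0.8

while True:
    results = [present_trial(t) for t in trials]
    if sum(results) / len(results) >= CRITERION:
        break                          # criterion met: continue to the next block
    # otherwise, repeat only the trials the subject got wrong
    trials = [t for t, ok in zip(trials, results) if not ok]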

At the bottom of the PsychoPy builder is a time-line graphic (the ‘flow panel’) which shows the parts of the experiment:

[Screenshot: the PsychoPy builder interface, with the flow panel at the bottom]

The experiment proceeds from left to right, and each part of the flow panel is executed in turn. The loops around parts of the flow panel indicate that the bits inside them are run multiple times (i.e. they’re the trial blocks). This is an extremely powerful interface, but there’s no option to ‘skip’ part of the flow diagram – everything is run in the order in which it appears, from left to right.

This is a slight issue for programming fMRI experiments that use block designs. In block-design experiments, typically two (or more) block types of about 15-20 seconds each are alternated. They might be a ‘rest’ block (no stimuli) alternated with a visual stimulus, or two different kinds of stimuli, say, household objects vs. faces. For a simple two-condition-alternating experiment one could just produce routines for the two conditions, and throw a loop around them for as many repeats as needed. The problem arises when there are more than two block types in your experiment, and you want to randomise them (i.e. have a sequence which goes ABCBCACAB… etc.). There’s no easy way of doing this in the builder. In an experiment lasting 10 minutes one might have 40 15-second blocks, and the only way to produce the (pseudo-random) sequence you want would be with 40 separate elements in the flow panel, all executing one-by-one (with no loops). Building such a task would be very tedious, and more importantly, crashingly inelegant. Furthermore, you probably wouldn’t want to use the same sequence for every participant, so you’d have to laboriously build different versions with different sequences of blocks. There’s a good reason why this kind of functionality hasn’t been implemented: it would make the builder interface much more complicated, and the PsychoPy developers are (rightly) concerned with keeping the builder as clean and simple as possible.

Fortunately, there’s an easy little hack, which was actually suggested by Jon Peirce (and others) on the PsychoPy users forum. You can in fact get PsychoPy to ‘skip’ routines in the flow panel by the use of loops and a tiny bit of coding magic. I thought it was worth elaborating on the solution here, and I’ve even created a simple little demo program which you can download and peruse/modify.

So, this is how it works. I’ve set up my flow panel like this:

[Screenshot: the flow panel for the demo experiment]

So, there are two blocks, each of which has its own loop, a ‘blockSelect’ routine, and a ‘blockSelectLoop’ enclosing the whole thing. The two blocks can contain any kind of (different) stimulus element; one could have pictures, and one could have sounds, for instance – I’ve just put some simple text in each one for demo purposes. The two block-level loops have no conditions files associated with them, but in the ‘nReps’ field of their properties boxes I’ve put a variable: ‘nRepsblock1’ for block1 and ‘nRepsblock2’ for block2. This tells the program how many times to go around that loop. The values of these variables are set by the blockSelect routine, which contains a code element that looks like this:

[Screenshot: the code element in the blockSelect routine]

The full code in the ‘Begin Routine’ box above is this:

# set the repetition counts for the two block-level loops;
# a loop whose nReps is 0 gets skipped entirely
if selectBlock == 1:
    nRepsblock1 = 1
    nRepsblock2 = 0
elif selectBlock == 2:
    nRepsblock1 = 0
    nRepsblock2 = 1

This is a conditional branching statement which says ‘if selectBlock == 1, do X; else if selectBlock == 2, do Y’. The variable ‘selectBlock’ is derived from the conditions file (an Excel workbook) for the blockSelectLoop, which is very simple and looks like this:

[Screenshot: the conditions file – a single ‘selectBlock’ column containing the values 1, 2, 1, 2]

So, at the beginning of the experiment I define the two variables holding the number of repetitions for the two blocks; then, on every pass around the big blockSelectLoop, the code in the blockSelect routine sets the repetition count of one of the small block-level loops to 0, and the other to 1. Setting the number of repetitions for a loop to 0 basically means ‘skip that loop’, so one is always skipped and the other is always executed. The blockSelectLoop sequentially executes the conditions in the Excel file, so the upshot is that this program runs block1, then block2, then block1 again, then block2 again. Now all I have to do if I want to create a different sequence of blocks is edit the column in the conditions file, to produce any kind of sequence I want.
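
The sequence doesn’t have to be built by hand, either. PsychoPy loops accept .csv conditions files as well as Excel ones, so a few lines of Python can generate a fresh pseudo-random block order for each participant – a sketch of my own (the file name is arbitrary; the only thing that matters is that the column header matches the variable used in the code element):

import csv
import random

# 20 repeats each of block types 1 and 2, shuffled into a pseudo-random order
sequence = [1] * 20 + [2] * 20
random.shuffle(sequence)

with open('blockOrder.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['selectBlock'])   # must match the code element's variable
    writer.writerows([b] for b in sequence)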

Hopefully it should be clear how to extend this very simple example to use three (or more) block/trial types. I’ve actually used this technique to program a rapid event-related experiment based on this paper, which includes about 10 different trial types, randomly presented, and it works well. I also hope this little program is a good example of what can be achieved by using code snippets in the builder interface; this is a tremendously powerful feature, and really extends the capabilities of the builder well beyond what’s achievable through the GUI. It’s also a really good halfway step between relying completely on the builder GUI and the scariness of working with ‘raw’ Python code in the coder interface.
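
If you do extend it, the if/elif chain in the code element gets repetitive; for, say, three block types, an equivalent and slightly tidier version would be:

# zero every loop's repetition count except the one matching selectBlock
nRepsblock1 = 1 if selectBlock == 1 else 0
nRepsblock2 = 1 if selectBlock == 2 else 0
nRepsblock3 = 1 if selectBlock == 3 else 0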

If you want to download this code, run it yourself and poke it with a stick a little bit, I’ve made it available to download as a zip file with everything you need here. Annoyingly, WordPress doesn’t allow the upload of zip files to its blogs, so I had to change the file extension to .pdf; just download (right-click the link and ‘Save link as…’) and then rename the .pdf bit to .zip and it should work fine. Of course, you’ll also need to have PsychoPy installed as well. Your mileage may vary, any disasters that occur as a result of you using this program are your own fault, etc. etc. blah blah.

Happy coding! TTFN.

High-tech version of the hollow-face illusion

I just accidentally made a kind-of version of the classic hollow-face illusion, using an anatomical MRI scan of my own head and OsiriX. I exported a movie from OsiriX, saved the movie as an image sequence using QuickTime, and then assembled it into an animated GIF using GIMP.

[Animated GIF: maximum intensity projection of the author’s head, rotating]

Click the image for a much bigger (11mb!) version.

This is a maximum intensity projection (MIP), and because of the way MIPs work – each pixel simply shows the brightest voxel along its line of sight, so the image contains no depth cues – it appears that the head rotates 180 degrees to the left, then the direction switches and it rotates back 180 degrees to the right. In actual fact the image is rotating constantly in one direction, as can be seen by looking at the cube on the top left, which cycles through L (Left), P (Posterior), R (Right), and A (Anterior). It’s not really a hollow-face illusion, as the effect is pretty much an artefact of the MIP, but still, I thought it was cool.

Some Python resources


Since I’ve switched to using PsychoPy for programming my behavioural and fMRI experiments (and if you spend time coding experiments, I strongly suggest you check it out too, it’s brilliant) I’ve been slowly getting up to speed with the Python programming language and syntax. Even though the PsychoPy GUI ‘Builder’ interface is very powerful and user-friendly, one inevitably needs to start learning to use a bit of code in order to get the best out of the system.

Before, when people occasionally asked me questions like “What programming language should I learn?” I used to give a somewhat vague answer, and say that it depended largely on what exactly they wanted to achieve. Nowadays, I’m happy to recommend that people learn Python, for practically any purpose. There are an incredible number of libraries available that enable you to do almost anything with it, and it’s flexible and powerful enough to fit a wide variety of use cases. Many people are now using it as a free alternative to Matlab, and even using it for ‘standard’ statistical analyses too. The syntax is incredibly straightforward and sensible; even if an individual then goes on to use a different language, I think Python is a great place for a novice to start with programming. Python seems to have been rapidly adopted by scientists, and there are some terrific resources out there for learning Python in general, and its scientific applications in particular.

For those getting started there are a number of good introductory resources. This ‘Crash Course in Python for Scientists’ is a great and fairly brief introduction which starts from first principles and doesn’t assume any prior knowledge. ‘A Non-Programmer’s Tutorial for Python 2.6’ is similarly introductory, but covers a bit more material. ‘Learn Python the Hard Way’ is also a well-regarded introductory course which is free to view online, but has a paid option ($29.95) which gives you access to additional PDFs and video material. The ‘official’ Python documentation is also pretty useful and very comprehensive, and starts off at a basic level. Yet another good option is Google’s Python Class.

For those who prefer a more interactive experience, Codecademy has a fantastic set of interactive tutorials which guide you from the complete beginning up to fairly advanced topics. PythonMonk and TryPython.org have similar systems, and all three are completely free to access – well worth checking out.

For neuroimagers, there are some interesting Python tools out there, or currently under development. The NIPY (Neuroimaging in Python) community site is well worth a browse. Most interesting (to me, anyway) is the nipype package, a tool that provides a standard interface and workflow for several fMRI analysis packages (FSL, SPM and FreeSurfer) and facilitates interaction between them – very cool. fMRI people might also be very interested in the PyMVPA project, which has implemented various multivariate pattern analysis algorithms.
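
To give a flavour of the nipype approach, here’s a minimal sketch of its Node/Workflow pattern (assuming FSL is installed; the input file name is made up for illustration, and the exact interface may vary between nipype versions):

from nipype.pipeline.engine import Node, Workflow
from nipype.interfaces import fsl

# two processing steps wrapped as nodes: skull-stripping, then smoothing
skullstrip = Node(fsl.BET(in_file='struct.nii.gz'), name='skullstrip')
smooth = Node(fsl.IsotropicSmooth(fwhm=4), name='smooth')

# feed BET's output image into the smoother's input, then run the pipeline
wf = Workflow(name='preproc', base_dir='.')
wf.connect(skullstrip, 'out_file', smooth, 'in_file')
wf.run()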

People who want to do some 3D programming for game-like interfaces or experimental tasks will also want to check out VPython (“3D Programming for ordinary mortals”!).

Finally, those readers who are invested in the Apple ecosystem and own an iPhone/iPad will definitely want to check out Pythonista – a full-featured Python development environment for iOS, with a lot of cool features, including exporting directly to Xcode (thanks to @aechase for pointing this one out on Twitter). There looks to be a similar app called QPython for Android, though it’s probably not as full-featured; if you’re an Android user, you’re probably fairly used to dealing with that kind of disappointment though. ;o)

Anything I’ve missed? Let me know in the comments and I’ll update the post.

Toodles.

Some notes on the use of voice keys in reaction time experiments

[Image: the Cedrus SV-1 voice key device]

Somebody asked me about using a voice key device the other day, and I realised it’s not something I’d ever addressed on here. A voice key is often used in experiments where you need to obtain a vocal response time, for instance in a vocal Stroop experiment, or a picture-naming task.

There are broadly two ways of doing this. The first is easy, but expensive, and not very good. The second is time-consuming, but cheap and very reliable.

The first method involves using a bit of dedicated hardware, essentially a microphone pre-amp, which detects the onset of a vocal response and sends out a signal when it occurs. The Cedrus SV-1 device pictured above is a good example. This is easy, because you have all your vocal reaction times logged for you, but not totally reliable, because you have to pre-set a loudness threshold for the box, and it might miss some responses if the person talks quietly or there’s some unexpected background noise. It should be relatively simple to get whatever stimulus software you’re running to recognise the input from the device and log it as a response.

The other way is very simple to set up, in that you just plug a microphone into the sound card of your stimulus computer and record the vocal responses on each trial as .wav files. Stimulus software like PsychoPy can do this very easily. The downside to this is that you then have to take those sound files and examine them in some way in order to get the reaction time data out – this could mean literally examining the waveforms for each trial in a sound editor (such as Audacity), putting markers on the start of the speech manually, and calculating vocal RTs relative to the start of the file/trial. This is very reliable and precise, but obviously reasonably time-consuming. Manually putting markers on sound files is still the ‘gold standard’ for voice-onset reaction times. Ideally, you should get someone else to do this for you, so they’ll be ‘blind’ to which trials are which, and unbiased in calculating the reaction times. You can also possibly automate the process using a bit of software called SayWhen (paper here).
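
If you fancy rolling your own semi-automated version, a crude first pass is just an amplitude threshold applied to each .wav file – something like this numpy/scipy sketch of my own (fine for clean recordings, but it will be fooled by breaths and lip-smacks, which is why manual marking remains the gold standard):

import numpy as np
from scipy.io import wavfile

def voice_onset(path, threshold=0.1, min_samples=50):
    # crude onset estimate: the first point where the rectified, normalised
    # signal stays above threshold for min_samples consecutive samples
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)           # mix stereo down to mono
    env = np.abs(data) / np.abs(data).max()
    above = env > threshold
    runs = np.convolve(above, np.ones(min_samples), mode='valid') == min_samples
    if not runs.any():
        return None                        # no voice detected on this trial
    return np.argmax(runs) / rate          # onset time in seconds

# e.g. voice_onset('trial_001.wav') might return something like 0.432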

[Image: example of a speech waveform, viewed in Audacity]

Which method is best depends largely on the number of trials you have in your experiment. The second method is definitely superior (and cheaper, easier to set up) but if you have eleventy-billion trials in your experiment, manually examining them all post hoc may not be very practical, and a more automatic solution might be worthwhile. If you were really clever you could try and do both at once – have two computers set up, the first running the stimulus program, and the second recording the voice responses, but also running a bit of code that signals the first computer when it detects a voice onset. Might be tricky to set up and get working, but once it was, you’d have all your RTs logged automatically on the first computer, plus the .wav files recorded on the second for post hoc analysis/data-cleaning/error-checking etc. if necessary.

Happy vocalising!

***UPDATE***

Two researchers have pointed out in the comments that a system for automatically generating response times from sound files already exists, called CheckVocal. It seems to be designed to work with the DMDX experimental programming system (free software that uses Microsoft’s DirectX system to present stimuli). Not sure if it’ll work with other systems or not, but worth looking at… I’ve also added the information to my Links page.

Back to school special

[Image: funny back-to-school sign]

Unimatrix-0 High School has really excellent attendance and discipline statistics

So, another academic year is about to heave into view over the horizon, and what better time to take stock of your situation, make sure your gear is fit for purpose, and think about levelling-up your geek skills to cope with the rigours of the next year of academic life. If you need any hardware, Engadget’s Back to School review guides are a great place to start, with reviews of all kinds of things from smartphones to gaming systems, all arranged helpfully into several price categories.

If you really want to be ahead of the game this year though, you’ll need to put in a bit of extra time and effort, and learn some new skills. Here are my recommendations for what computing skills psychology students should be learning, for each year of a standard UK BSc in Psychology.*

If you’re starting your 1st year…

A big part of the first year is about learning basic skills like academic writing, synthesising information, referencing etc. Take a look at my computer skills checklist for psychology students and see how you measure up. Then, the first thing you need to do, on day one, is start using a reference manager. This is an application that will help you organise journal articles and other important sources for your whole degree, and will even do your essay referencing for you. I like Mendeley, but Zotero is really good as well. Both are totally free. Download one of them right now. This is honestly the best bit of advice I can possibly give to any student. Do it. I just can’t emphasise this enough. Really. OK. Moving on.

Next you need to register for a Google account, if you don’t have one already. Here’s why. Then use your new Google username to sign up for Feedly and start following some psychology and neuroscience blogs. Here and here are some good lists to get you started. If you’re a real social-media fiend, sign up for Twitter and start following some of these people.

You may want to use the 5GB of free storage you get with Google Drive as a cloud back-up space for important documents, or you may want to sign up for a Dropbox account as well. Use one or the other, or preferably both, because none of your data is safe. Ever.

You’ll want to start getting to know how to use online literature databases. Google Scholar is a good start, but you’ll also need to get familiar with PubMed, PsycInfo and Web of Knowledge too.

If you’re really keen and want to learn some general skills that will likely help you out in the future, learn how to create a website with WordPress or GitHub Pages. Or maybe download GIMP and get busy learning some picture editing.

If you’re starting your 2nd year…

This is when things get more serious and you probably can’t expect to turn up to tutorials with an epically massive hangover and still understand everything that’s going on. Similarly, you need to step it up a level with the geekery as well.

You probably learned some SPSS in your statistics course in the first year. That’s fine, but you probably don’t have a licence that allows you to play with it on your own computer. PSPP is the answer – it’s a free application that’s made to look and work just like SPSS – it even runs SPSS syntax code. Awesomes. Speaking of which, if you’re doing everything in SPSS through the GUI and not using its syntax capabilities, you’re doing it wrong.

If you really want to impress, you’ll start using R for your lab reports. The seriously hardcore will just use the base R package, but don’t feel bad if you want to use R-Commander or Deducer to make life a bit easier. Start with the tutorials here.

If you’re starting your 3rd year…

This is the year when you’ll probably have to do either a dissertation, a research project, or maybe both. If you’re not using a reference manager already, trying to do a dissertation without one is utter lunacy – start now.

For your research project, try and do as much of it as you can yourself. If you’re doing some kind of survey project, think about doing it online using Google Forms or LimeSurvey. If you’re doing a computer-based task, then try and program it yourself using PsychoPy. Nothing will impress your project supervisor more than if you volunteer to do the task/survey set-up yourself. Then of course you can analyse the data using the mad statz skillz you learned in your second year. Make some pretty-looking figures for your final report using the free, open-source Veusz.

Learning this stuff might all sound like a lot to ask when you also have essays to write, tutorials to prepare for, and parties to attend. However, all these things are really valuable, CV-boosting skills which might serve you well after you graduate. If you want to continue studying at Masters or PhD level, potential supervisors will be looking for applicants with these kinds of skills, and solid computer knowledge can also help distinguish you from all the other psychology graduates when applying for ‘normal’ jobs too. It really is the best thing you can learn, aside from your course material, naturally.

Have I missed anything important? Let me know in the comments!

Good luck!

* I realise US colleges and other countries have a different structure, but I think these recommendations will still broadly apply.

MakeHuman – free, open-source software for 3D-modelling of humans

It’s always exciting when you find a new piece of cool software to play with, and even more so when what you’ve found is totally free, open-source, and available on all platforms. So it is with MakeHuman – an utterly awesome bit of kit. I wrote a piece before about FaceGen, which is also pretty cool, but MakeHuman takes it to the next level by modelling all kinds of body characteristics as well as faces and, of course, doing it all for free.

I’ve just downloaded it and played with it for a few minutes, but I’m already impressed by the range of options available. Through a very simple slider- and radio-button-based interface you have very fine control over all kinds of variables, including gender, weight, age, height, and many more, with endless fine tweaking of body and face possible if you dig through the options. There are also basic libraries of clothes and poses included. Here’s a, well, a human I made in just a couple of minutes:

[Screenshot: a figure modelled in MakeHuman]

And here’s a close-up of the face, after I added some hair and gave him a nasty expression:

[Screenshot: close-up of the figure’s face, with hair added and a nasty expression]

Pretty cool indeed. This could potentially be a massively useful tool for people interested in face/body perception – using this, one could very easily and quickly generate a large number of highly-controlled experimental stimuli that differ in just one aspect (say, weight, or race… whatever). Download it and have a play around!

Dr MacLove, or How I Learned to Stop Worrying and Love Apple

Back in the 90s it was easy; if you were a graphic designer, or some kind of proto-hipster with a trust fund, you used a Mac. Everyone else used a PC. Then in the 2000s Apple started making iThings, everyone started going absolutely batshit crazy over them, and suddenly Macs were everywhere as well.

I’ve used both in parallel since about 2003 – I started off with a G5 Power Mac as a desktop complemented by a Windows laptop, but that’s now reversed, with a Windows 7 PC on my desk at work and a MacBook Air. This shift is significant – the desktop is what’s provided to me by my job; the laptop is my personal computer, what I choose to buy for myself. Despite using OS X since 2003, I only really started liking it when I got my first Apple laptop – a 2009 MacBook Pro. This was also around the time that I got an iPhone 3G, which seemed like some incredibly advanced artefact from the future compared to the chunky ‘smartphone’ I was using before, which ran Windows Mobile 6.5; an unbelievably awful OS which I could never get to work as it should. I’ve since swapped the Pro for a 2012 MacBook Air, bought an iPad mini, and am on my third iPhone, so my conversion is pretty much complete. I’ve looked at Android phones and tablets, honestly, I have. Some of them are very nice, but the OS just always seems too… busy. Maybe it’s my age, but I just want something I can pick up and use without a massive learning curve. I’m happy to stand up and say I’m an Apple guy, and it took a while, but I’m finally actually OK with that.

It took a while, but I’ve now found Mac versions or fairly close equivalents for all the software I used on my PC. At first I sometimes used to boot into Windows using bootcamp to use a couple of applications, but I deleted the partition a while ago – I just wasn’t using it anymore. I probably won’t be spending money on any Windows machines for the foreseeable future. I know that Mac vs. Windows is one of the most hackneyed, pointless and bitter debates on the entire internet, but I just couldn’t resist setting my own bit of troll-bait out. Here, then, are the major reasons I became a Mac convert – your mileage may vary, personal opinions only, blah blah.

The MacBook Air
The Air is the machine that kicked off the ultrabook trend and, to my mind, PC manufacturers have yet to equal the Air’s amazing combination of power, usability and portability. My 2012 model is greased-lightning-off-a-shovel fast – it chews through a set of fMRI pre-processing routines twice as fast as my old MacBook Pro, and that was no slouch either. The 2013 models are even faster, with better graphics and a frankly ridiculous 12-hour battery life. If you can live with a relatively small (128/256GB) amount of storage, it’s a peach of a machine. Plus, I can carry it around all day and barely even notice the weight. For my money, the Air is the best-value computer out there – I don’t think the step up in performance you get with the Pro is worth the price, personally.

The Apple Trackpad
Using the trackpad on a Windows laptop feels like going back to the stone age once you’re used to the fantastic set of multi-touch gestures on an Apple laptop. I’ve never found one on a PC that even comes close.

Migration Assistant
Remember the excitement of getting a new computer, and then the agony of re-installing all your applications and tweaking the system to get it the way you like it? That pain doesn’t exist for me anymore. Apple’s Migration Assistant lets you make a Time Machine backup of your old computer on an external drive, plug that into the new one, and everything is reproduced: your applications, desktop, OS settings, bookmarks, everything. It’s awesome.

Exposé/Spaces
OS X’s system of virtual desktops is brilliant, and essential for me, now that I’ve got used to it; flipping between desktops with ctrl+left/right arrow keys is fast and smooth, and means you can really extend the limits of what can be done on a 13″ laptop screen. I have no idea why Windows doesn’t implement virtual desktops.

Unix
In the last couple of years I’ve switched to using FSL as my main fMRI analysis platform. FSL is developed on Macs and runs well on other Unix systems, but needs some kind of Unix emulation to run on Windows. Urgh – forget it. I do like being able to open up a terminal and make little tweaks to the OS and applications as well. Of course, Matlab/SPM and BrainVoyager also run beautifully on OS X.

Installing/Uninstalling
To install an application on OS X you drag it to the Applications folder. To uninstall it you drag it to the Trash. That’s it.

Mac-only software
OsiriX is, without any shadow of a doubt, the best free DICOM image viewer available, and it’s Mac-only. I’d really miss other things like Automator too, plus of course Apple’s super-fast and comprehensive Spotlight search is awesome.

No crapware
You know all that shit you have to uninstall as soon as you get a new PC? Free trials of anti-virus software, media players, desktop icons that link to shitty Yahoo services you have no intention of ever using? Doesn’t exist in OS X.

 

Having said all that, of course there are annoying things that drive me crazy about OS X too. No system is perfect after all…

No Cut/Paste
You can copy and paste files between two file locations, but you can’t cut and then paste. Seriously, Apple – would this really be so hard to add?

Annoying behaviour of the green button
The green button at the top of the window – the one I still think of as the ‘maximise’ button – is annoying. It seems to re-size the window pretty much randomly. I hate it.

iTunes
For the love of all that is holy, Apple, will you please do something about the benighted clusterfuck that is iTunes? It’s utterly heinous.

Feel free to disagree with me in the comments. If you think Windows 8 is the greatest OS ever devised, please say so. Personally, I think it’s a botched compromise that tries to bring touch functionality to laptops and laptop functionality to tablets, and does neither well, but hey, that’s just my opinion. Windows is like Star Trek movies – every other one in the series is decent, which means Windows 9 should actually be pretty usable.

Anyway – flame on!
