Blog Archives

Digital audio and video basics – two excellent videos

I’ve just come across two outstanding tutorial videos over on xiph.org – an open-source organisation dedicated to developing multimedia protocols and tools. The first one covers the fundamental principles of digital sampling for audio and video, and discusses sampling rates, bit depth and lots of other fun stuff – if you’ve ever wondered what a 16-bit, 128 kbps MP3 actually is, this is for you.

The second one focusses on audio and moves on to some more advanced topics about how audio behaves in the real world.

They’re both fairly long (30 mins and 23 mins respectively) but well worth watching. If you’re just getting started with digital audio and/or video editing and production, these could be really useful.

TTFN.


Video analysis software – Tracker

I just came across a gosh-darn drop-dead cool (and free!) piece of software that I just had to write a quick post on. It’s called Tracker, it’s cross-platform, open-source and freely available here.

In a nutshell, it’s designed for analysis of videos, and can do various things, like track the motion of an object across frames (yielding position, velocity and acceleration data) and generate dynamic RGB colour profiles. Very cool. As an example of the kinds of things it can do, see this post on Wired.com where a physicist uses it to analyse the speed of blaster bolts in Star Wars: Episode IV. Super-geeky I know, but I love it.

An example of some motion analyses conducted using Tracker

Whenever I see a piece of software like this I immediately think about what I could use it for in psychology/neuroscience. In this case, I immediately thought about using it for kinematic analysis – that is, tracking the position/velocity/acceleration of the hand as it performs movements or manipulates objects. Another great application would be the analysis of movie stimuli for use in fMRI experiments. Complex and dynamic movies could be analysed in terms of the movement (or colour) stimuli they contain, and measures produced that represent movement over time. Sub-sampled versions of these measures could then be entered into an fMRI GLM analysis as parametric regressors to examine how the visual cortex responds; with careful selection of stimuli, this could be quite a neat and interesting experiment.
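To make the idea concrete, here’s a minimal sketch of the second half of that pipeline in Python. It assumes you’ve already exported per-frame x/y positions from Tracker (here I just fake a toy trajectory), derives speed and acceleration by finite differences, and averages the speed within each fMRI volume (TR) to get one parametric-regressor value per volume. The frame rate, TR, and variable names are all illustrative assumptions, not anything Tracker itself prescribes.

```python
# Sketch: per-frame positions -> speed/acceleration -> one regressor value per TR.
# The trajectory, frame rate and TR below are made-up stand-ins.
import numpy as np

fps = 25.0                        # assumed video frame rate (frames/s)
t = np.arange(0, 10, 1 / fps)     # 10 s of frame timestamps
# Toy circular trajectory standing in for x/y data exported from Tracker
x = np.sin(t)
y = np.cos(t)

# Finite-difference derivatives (central differences in the interior)
vx, vy = np.gradient(x, t), np.gradient(y, t)
speed = np.hypot(vx, vy)          # magnitude of velocity
accel = np.gradient(speed, t)     # rate of change of speed

TR = 2.0                          # assumed fMRI repetition time (s)
samples_per_tr = int(round(TR * fps))
n_vols = len(speed) // samples_per_tr
# Mean speed within each TR window -> one parametric value per volume
regressor = speed[: n_vols * samples_per_tr].reshape(n_vols, -1).mean(axis=1)
print(regressor.shape)            # one value per acquired volume: (5,)
```

The resulting vector (after mean-centring and convolution with a haemodynamic response function, which standard fMRI packages handle) could then be entered as a parametric modulator in the GLM design matrix.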

Not sure I’ll ever actually need to use it in my own experiments, but it looks like a really neat piece of software which could be a good solution for somebody with a relevant problem.

TTFN.