I want to be a Robopsychologist
I was a bit of a weird kid. While my schoolmates were running around the playground playing football and mindlessly inflicting minor injuries on each other, I used to sit on a bench reading 1960s science fiction books.* One of my favourite authors at the time was Isaac Asimov, and I was particularly captivated by his Robot stories and the character of Susan Calvin, with her neologistic job description of ‘robopsychologist’. What Asimov astutely recognised back in the 60s was that if artificial intelligence of even a relatively simple and tightly bounded nature were ever to be created, its behaviour would be complex, often unpredictable, and even occasionally aberrant. Susan Calvin’s job, therefore, is to interpret and understand robot behaviour, and to manage and study robots’ interactions with humans.
Fifty years on from the publication of Asimov’s initial robot books (and, coincidentally, roughly in the time period in which Asimov set his stories), robopsychology might just be beginning to emerge as a discipline. While the robots we are currently developing are clunky simpletons compared to Asimov’s capable, graceful, positronic-brained creations, the behaviour they exhibit is arguably starting to reach a level where some serious questions need to be asked and investigated. A Google Scholar search for the term ‘robopsychology’ turns up 22 results, most of which are spurious, but three in particular stand out. The first two are reviews by Alexander and Elena Libin, published in 2004 (PDF) and 2005 (PDF), which deal with similar themes. These authors seek to establish a set of principles by which person-robot interaction might be studied, and also present some findings derived from their use of a robotic cat with various cross-cultural and clinical populations (incidentally defining the term ‘robotherapy’ for apparently the first time). The third source found by Google Scholar is an unpublished MSc thesis from Diego J. Mejias-Sanabria at UCL (PDF), which details some theoretical and experimental studies of the impact of different physical features on human-robot interaction, and particularly on the strength and type of relationship that is produced.
While clearly of great interest, this preliminary work is focussed almost entirely on the human side of the equation, with little said about the psychology of the robot itself. This is understandable, as most robots now commonly in use, while sophisticated in many ways, are capable of only a relatively simple set of pre-programmed behaviours. As such, there is relatively little to investigate from a psychological perspective. However, all that may be about to change. The inspiration for this blog post actually came from some stunning work performed by a company named TheCorpora, using the open-source, Linux-distro-powered Qbo robot. In the video below, Qbo learns to recognise itself in a mirror:
And in this second video, Qbo learns to distinguish a view of itself in a mirror from another Qbo robot, using a flashing light pattern on its nose. Once the other robot is recognised as a different entity, a short conversation between the two is carried out:
Needless to say, this is some seriously impressive stuff. Astute students of behaviour will recognise the setup in the videos as an example of the Mirror Test, first devised by Gordon Gallup in the 1970s, and used as one way of gauging the level of various animal species’ self-awareness. The nose-flashes that Qbo uses to recognise itself are even analogous to the methods used in an elaboration of the mirror test in which odourless dye-spots are painted on animals in order to get a clearer behavioural indication of whether they respond to the mirror as an image of themselves.
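TheCorpora hasn’t published the details of Qbo’s recognition algorithm, but the blink-based self/other discrimination described above can be sketched in a few lines: the robot emits a random blink sequence on its nose LED, watches the blinks coming back from the mirror, and decides ‘that’s me’ only if the observed sequence closely tracks the one it emitted. The function names and threshold below are illustrative assumptions of mine, not Qbo’s actual API:

```python
import random

def blink_pattern(length, rng=None):
    """Emit a random on/off sequence for the nose LED (1 = lit)."""
    rng = rng or random.Random()
    return [rng.randint(0, 1) for _ in range(length)]

def is_self(emitted, observed, threshold=0.9):
    """If the blinks seen in the mirror track the blinks we emitted,
    the reflection is (probably) us. An independent robot's own random
    pattern should agree only about half the time by chance, so it
    falls well below the threshold for any reasonably long sequence."""
    matches = sum(e == o for e, o in zip(emitted, observed))
    return matches / len(emitted) >= threshold

emitted = blink_pattern(16)
print(is_self(emitted, emitted))                    # own reflection -> True
print(is_self(emitted, [1 - b for b in emitted]))   # different pattern -> False
```

Of course, even if something like this is what Qbo is doing, the interesting question raised below still stands: is matching a signature pattern ‘self-recognition’ in any meaningful sense, or just signal correlation?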
This work done with the Qbo robot clearly raises a whole host of questions, principal among them being (to my mind at least): just what exactly is happening here? The exact interpretation of success or failure by a particular species or individual at the mirror test is a matter of open debate, and the arguments about what the results might mean quickly spin off into the realms of unfettered philosophising. Some have argued that the mirror test is unsuitable for assessing species which use odour or auditory cues more extensively than humans do; if this is the case, how suitable is it for assessing the self-awareness of a robot? It could be that Qbo is merely performing a relatively mechanical set of pre-programmed responses when confronted with the mirror; however, it could just be possible that something a lot deeper, and a lot more interesting, is going on.
The point I’m hoping to demonstrate here is that an understanding of what’s going on in these videos may require an understanding not just of the mechanics, programming and behaviour of Qbo, but fundamentally of its psychology. We need to know, essentially, what’s going on in its head when it recognises itself in the mirror. Such an understanding would doubtless be hugely valuable in driving further research and development of artificial intelligence, but it could also conceivably shed light on the development of consciousness and self-reflective abilities in humans and other species. For this, we obviously need robopsychologists, and with the pace of development of robot abilities and their increasing penetration into society, it’s not unlikely that such needs will become pressing within the next 10-20 years. The field is currently so nascent as to be practically zygotic, and it may be some time before it produces a real equivalent of Susan Calvin; however, I have little doubt that, given time, it will.
And that is why I want to be a robopsychologist. I leave you with a quote from the Master himself:
“Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today — but the core of science fiction, its essence, the concept around which it revolves, has become crucial to our salvation if we are to be saved at all.”
“My Own View” in The Encyclopedia of Science Fiction (1978), edited by Robert Holdstock; later published in Asimov on Science Fiction (1981).
* The word ‘nerd’ hadn’t been invented yet, or at least wasn’t in common usage where I grew up, so I was a ‘boffin’ to my contemporaries, the heartless little cretins.
Posted on January 2, 2012, in Commentary, Cool new tech, Hardware, Software and tagged Asimov, behaviour, consciousness, psychology, Qbo, robopsychology, robot, robots, self-awareness, Software, TheCorpora.