Why androids are not taking over the world yet

The new humanoid robots named "Otonaroid" and "Kodomoroid" are pictured during a press preview at the National Museum of Emerging Science and Innovation in Tokyo on June 24, 2014. Japanese scientists unveiled what they said was the world's first news-reading android, eerily lifelike and possessing a sense of humour to match her perfect language skills.

Japan is fascinated with, and is fast producing, very lifelike robots these days. Recently, a permanent exhibition called "Android: What is Human?" opened at the National Museum of Emerging Science and Innovation in Tokyo.

Among its exhibits are a female child (or "kodomo" in Japanese) android called Kodomoroid and a female adult (or "otona" in Japanese) robot called Otonaroid.

Endowed with expressive faces and complex human gestures, these androids can communicate with the viewer quite realistically.

But videos of these "a little too human yet not human enough" mannequins are more likely to creep you out than not.

When androids look somewhat human, but are obviously not, we may find them adorable. One example is C-3PO in Star Wars. But when they get too human-like and yet are not fully human, they evoke revulsion in many people.

This creepiness we feel around very lifelike androids is called "the uncanny valley", a phenomenon first described by Japanese roboticist Masahiro Mori in 1970.

The term "uncanny" comes from Das Unheimliche (The Uncanny), a 1919 essay by Sigmund Freud, and refers to something familiar yet foreign that is strange in a way that is difficult to grasp.

The term "valley" refers to the dip in a graph that plots how comfortable we might be against the degree of an android's human likeness. A 50 per cent lifelike android is cute. A 95 per cent lifelike one may be incredible.

But any more lifelike than that, and it becomes eerie; the graph dips sharply.
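The dip Mori described can be sketched as a toy function of comfort against human likeness. The thresholds and numbers below are purely illustrative assumptions, not Mori's own data:

```python
def comfort(likeness):
    """Toy model of Mori's uncanny-valley curve.

    likeness: 0.0 (clearly a machine) to 1.0 (fully human).
    Returns an illustrative comfort score. All cut-offs and
    magnitudes here are hypothetical, chosen only to mirror
    the article's 50% / 95% examples.
    """
    if likeness <= 0.95:
        return likeness          # more lifelike -> more likeable
    if likeness < 1.0:
        return likeness - 0.8    # the valley: near-human is eerie
    return 1.0                   # fully human is fully accepted

# A 50% lifelike android is cute; a 97% lifelike one is eerier
# than both it and a 95% lifelike one.
for x in (0.5, 0.95, 0.97, 1.0):
    print(x, round(comfort(x), 2))
```

The sharp drop between 0.95 and 1.0 is the "valley" of the graph: comfort falls off a cliff precisely when the android gets close to, but not all the way to, full human likeness.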

Maybe its skin texture looks and even feels fully human to the touch. But its skin does not quite move as it should when the android gestures.

Mostly, however, it is the face. Maybe it is the eyes that matter the most, if the eyes are windows to the soul. Our eyes show the world that we are really living souls, not synthetic automatons.

The android's eyes may look a tad dead, despite the best emulation. And the instant they tell me that my interlocutor is a silicon-and-steel thing - so it can't really understand me - I shudder.

In its uncanny face, there is something unhealthy. Indeed, one postulate is that, with very lifelike androids, their looking and behaving just a tad short of 100 per cent human may be likened to a defect. But physical defects suggest disease, which can spook us in the way that very ill people - or corpses - evoke fear and repulsion.

Another hypothesis to explain the uncanny valley is that we may see a very lifelike android as a soulless being - the living dead, as it were - which triggers what has been called our "existential angst" or "death anxiety", whereupon our psychological defence mechanisms for coping with our mortality - mainly denial - kick in.

The uncanny valley has been studied empirically, since understanding it will matter as the use of androids increases over time.

In fact, the main driver for Japan's push in this area is the need in its rapidly ageing society for assistive technologies, including wearable exoskeleton technologies, but also androids.

An exoskeleton is an external shell of soft fabric equipped with smart motors and intelligent joints that can be strapped on to or slipped over the upper limbs to enable the elderly to handle heavier objects. Exoskeletons for the lower limbs may enable them to walk or climb stairs better.

These can help the elderly with creaky joints and weak muscles. While androids may be able to assist the elderly in similar ways, they have an additional advantage: they may also be able to provide companionship, especially for elderly people living on their own - if humans get over the valley.

In a 2011 study published in Social Cognitive And Affective Neuroscience, a team of University of California, San Diego scientists used scans to image the brain activity in volunteers shown videos of a very realistic android, the actual woman on whom the android was modelled and, finally, the android without its artificial skin, so it looked like just a robot.

The three were shown doing non-threatening acts like drinking water or wiping tables. The scans showed that the level of activity in the brain region which helps us understand the actions of others was much greater when looking at the actions of the android than those of the human or the robot.

The scientists deduced that it was not the android's appearance or its movements per se that mattered. What mattered was the brain's expectation that a human looked and moved like one, or a robot looked and moved like one. But the android looked human while its motions were robotic. This the brain found incongruent, triggering the uncanny valley.

That is, the brain looks for congruence. The scientists speculated that "as human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners".

Once re-tuned, the brain may no longer find an incongruence between an android's very human looks and its slightly less-than-human movements. If so, we could leap over the valley and accept androids as companions in some form. What that form might look like might also depend upon how human-like their artificial intelligence (AI) systems will be.

Realistically, within our lifetime, AI systems are unlikely to be able to autonomously tell the difference between a humanly meaningful problem and a humanly meaningless one, or between a humanly significant inference and a humanly trite one.

This is because to tell them apart requires one to be able to make value judgments, which depend on one's objectives. But human objectives depend on human values, which are inchoate and hence unprogrammable.

Indeed, it is because human values cannot be programmed into software that the AI field has sidestepped "values" and programs only straightforward goals and constraints, such as "avoid collisions with all objects" or "look for a power source".
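This design choice can be illustrated with a toy rule-based controller. The percept names and rules below are hypothetical, chosen only to echo the article's examples; the point is that everything is an explicit goal or constraint, with no "values" anywhere:

```python
def choose_action(percepts):
    """Pick an action from simple, explicitly programmed rules.

    percepts: a dict of sensor flags. Both the flag names and
    the action strings are illustrative assumptions, not a real
    robotics API.
    """
    # Hard constraint checked first: avoid collisions with all objects.
    if percepts.get("obstacle_ahead"):
        return "turn_away"
    # Then a straightforward goal: look for a power source when low.
    if percepts.get("battery_low"):
        return "seek_power_source"
    # Otherwise carry on with whatever task was assigned.
    return "continue_task"

print(choose_action({"obstacle_ahead": True, "battery_low": True}))
print(choose_action({"battery_low": True}))
```

Note that the controller has no notion of why a collision matters or whether the task is worth doing; it only matches conditions to actions, which is exactly the gap the article points to.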

But without human values, an android's AI system will always lack common sense and real-world understanding. If so, androids may never quite achieve real human intelligence in our lifetime.

No fears then of indestructible and superintelligent androids who are practically indistinguishable from flesh-and-blood humans taking over our world just yet.


This article was first published in The Straits Times on July 13, 2014.