Human brain keeps it real when faced with androids


According to research by scientists at the University of California, Berkeley, the human brain can determine in less than one second whether it is faced with an android or a real human being, despite much-vaunted advances in making androids look lifelike.

In the wake of a string of successful sci-fi films and TV shows featuring humanoid androids, such as Ex Machina, Humans and Westworld, the differences between man and machine have been portrayed as almost imperceptible.

However, the Berkeley team’s real-world research suggests that the human brain takes less than one second to tell the real from the simulated.

The findings, published in November in the journal Nature Communications, show that the human visual system is wired to absorb and process information very quickly and to make a snap judgment between the real and the fake.

The scientists discovered a visual mechanism they call "ensemble lifelikeness perception," which determines how we perceive groups of objects and people in real and virtual or artificial worlds.

"This unique visual mechanism allows us to perceive what's really alive and what's simulated in just 250 milliseconds," said Allison Yamanashi Leib, the study’s lead author and a postdoctoral scholar in psychology at UC Berkeley.

"It also guides us to determine the overall level of activity in a scene."

Vision scientists have long assumed that humans need to carefully consider multiple details before they can judge if a person or object is lifelike.

"Our study shows that participants made animacy decisions without conscious deliberation and that they agreed on what was lifelike and what was not," said David Whitney, a UC Berkeley psychology professor and senior author on the study.

"It is surprising that, even without talking about it or deliberating about it together, we immediately share in our impressions of lifelikeness."

Using ensemble perception, study participants could also make snap judgments about the liveliness of groups of objects, people or entire scenes, without focusing on all the individual details, Whitney said.

"In real life, tourists, shoppers and partiers all use visual cues processed through ensemble perception to gauge where the action is at," Leib said.

The study also suggests that if we did not possess the ability to rapidly determine lifelikeness, our world would become very confusing, with every person, animal or object we see appearing to be equally alive, Whitney said.

For the study, researchers conducted 12 separate experiments on a total of 68 healthy adults with normal vision. In the majority of trials, participants viewed up to a dozen images of random people, animals and objects including an ice cream sundae, a guinea pig wearing a shirt, a hockey player, a statue of a woolly mammoth, a toy car carrying toy passengers, a caterpillar and more.

Participants quickly viewed groups of images, then rated them on a scale of 1 to 10 according to their average lifelikeness. Participants accurately assessed the average lifelikeness of the groups, even those displayed for less than 250 milliseconds.

In another experiment designed to test participants' memory for details, researchers flashed groups of images and then showed participants individual images they had seen as well as ones they had not. The results indicated that while participants had forgotten many of the details, their "ensemble perception" of what had been lifelike remained sharp.

"This suggests that the visual system favours abstract global impressions such as lifelikeness at the expense of the fine details," Whitney said. "We perceive the forest and how alive it is, but not the trees."

The research findings come at a time when the deployment of artificial intelligence is becoming increasingly prevalent in our everyday lives.
