Is it possible to create a foolproof lie detector? The search is on for an accurate successor to the polygraph.
J J Newberry is a legend among US law enforcement agents for one remarkable quality: he can tell whether someone is lying with 100 per cent conviction every time.
He doesn't have to think about it, instinctively picking up the signs of trickery in a person's body language or speech, or in the fleeting expressions on their face. He is, in the words of psychologist Paul Ekman, a "wizard" of deception.
Ekman is emeritus professor at the University of California, San Francisco, and has been studying facial expressions and the emotions behind them for 40 years. He discovered Newberry in the mid-1980s while testing people across various professions for their skills at deception detection. Newberry is not quite one of a kind, but there are not many like him - fewer than one in 500 people have this kind of ability. Those you might expect to have it - judges, customs officials, lawyers, policemen, members of the FBI and CIA - are generally no better than average. "They could just as well flip a coin," says Ekman. The only professions in which you are more likely to find a wizard are the secret service and dispute arbitration. Newberry bucks the trend, but he is in the right job: he trains police officers and federal agents in interrogation techniques.
Little wonder, given how poor we are at spotting a liar, that there's a strong demand for technologies that can do the job for us. The best known of these is the polygraph, which has been used for decades by governments, law enforcement agencies and private companies across the world. It works by monitoring physiological processes such as breathing, blood pressure, heartbeat and the electrical resistance of the skin. The idea is that during questioning, a person will be more emotionally aroused and will respond more strongly by all these measures when telling a lie than when telling the truth. Everyone's arousal rate is different, so first the interviewer must ask the suspect some simple, non-challenging questions - "Is today Thursday?" - to determine their baseline level. If their responses to the critical questions are significantly stronger than their baseline level, they are likely to be lying. Or so the theory goes.
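The comparison at the heart of the polygraph can be sketched in a few lines of code. Everything here is an invented illustration - the readings, the 1.5x cut-off and the function name are hypothetical, not a real scoring protocol - but it captures the baseline-versus-critical logic described above:

```python
# Hypothetical sketch of polygraph-style scoring: compare responses to
# critical questions against a baseline taken during easy questions.
# The data and the threshold are illustrative assumptions only.

def flag_responses(baseline, critical, threshold=1.5):
    """Flag critical-question readings markedly stronger than the baseline.

    baseline:  arousal readings during non-challenging questions
    critical:  arousal readings during the questions that matter
    threshold: multiple of the baseline average treated as "significant"
    """
    baseline_level = sum(baseline) / len(baseline)
    return [reading > threshold * baseline_level for reading in critical]

# Arbitrary-unit readings (skin conductance, say)
baseline = [1.0, 1.1, 0.9, 1.0]   # "Is today Thursday?" and the like
critical = [1.2, 2.1, 0.95]       # the accusatory questions
print(flag_responses(baseline, critical))  # [False, True, False]
```

The sketch also makes the critics' point concrete: the code can only say a reading was unusually strong, not *why* it was strong - guilt, embarrassment and fear of disbelief all look the same to it.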
Yet, as many psychologists point out, the theory behind the polygraph is worryingly unscientific. There are many reasons why someone's blood pressure, breathing, heartbeat and perspiration rate might increase in response to a question; it is speculative to presume that guilt is the primary trigger. They might be embarrassed or angry, or fearful about being incorrectly accused. A report published in 2003 by the US National Research Council in Washington DC concluded that the evidence for the polygraph's efficacy was "scanty and scientifically weak". It pointed out that a whole range of psychological and physiological processes could affect a polygraph test, including some that the subject can consciously control. The few studies of polygraph accuracy that have been published in scientific journals suggest innocent suspects are scored as guilty in 47 per cent of cases on average.
Despite this, the device is still widely used by police interrogators in the US and by the US government to screen federal employees for security clearance. The supreme court of the state of New Mexico has agreed that polygraph evidence is generally admissible in its courts, and courts in other states often allow it when all parties agree. The UK Government is a recent convert: in 2007 it passed legislation allowing trials of compulsory polygraph tests for sex offenders to assess the likelihood of them re-offending. Stephen Fienberg, who chaired the committee that produced the NRC's 2003 report on the polygraph, says this is especially troubling since "there is no credible scientific study to validate the polygraph's use for this purpose. One can only hope that the enthusiasm for the polygraph will diminish over time as officials come to recognise the errors that they make in relying on it."
Too good to be true
Do more modern technologies fare any better? It may be too early to say, though there are plenty of contenders. The most promising is functional magnetic resonance imaging (fMRI), which monitors activity throughout the brain by tracking the amount of oxygen being used. One of the pioneers of this technique, Daniel Langleben at the University of Pennsylvania, has found that certain regions of the cerebral cortex are more active when someone is lying than when they are telling the truth. For example, he found more activity in the anterior cingulate cortex, a part of the brain linked with error-monitoring, and the pre-frontal cortex, which is associated with reasoning, implying that lying requires more cognitive effort. Langleben admits there is some way to go before fMRI could be reliably rolled out for widespread use as a lie detection technology. For one thing, there is considerable variation in brain response between people. "The technology is at a stage where it could be applied, but only with a very high level of expertise and experience, which at this time is limited to a few people around the world," he says. "The prospects are hanging in the balance between irresponsible and premature use, and deliberate research to clear all the potential confounds."
The University of Pennsylvania has licensed Langleben's technology to a California-based company called No Lie MRI, which has started marketing what it claims are "near-perfect" MRI lie detection tests to governments, corporations, lawyers and individuals - though it is not clear how many customers it has yet. It compares its MRI scans to DNA tests, suggesting a defendant could use them to validate their own statements to a court. Corporations, it says, could use the technology to test their employees, circumventing the federal law that prohibits lie-detection devices such as the polygraph that measure the autonomic nervous system. And for individuals it could offer "risk reduction in dating" and a way to solve "trust issues in interpersonal relationships".
Too good to be true, or too disconcerting to contemplate, depending on how you value your secrets? Quite possibly.
Stephen Kosslyn at Harvard University has found that different types of lies are associated with different patterns of brain activity. Spontaneous lies - where the liar has to make it up on the spot - engage different circuits in the cerebral cortex to rehearsed lies, while lies about oneself engage different circuits to lies about others. "There is no lie centre in the brain," says Kosslyn. He reckons that the nature of deception itself "is going to make it very difficult to use any method to find straightforward signatures of deception".
Presumably plenty of researchers disagree, given the range of technologies being explored to do just that. Aside from fMRI, three of them are attracting considerable interest.
One is the electroencephalogram (EEG), which uses sensors on the face and scalp to monitor a type of electrical activity in the brain called event related potential (ERP), produced in response to a stimulus such as a true-false question. The theory is that it takes the brain longer to respond when telling a lie than when telling the truth.
Another technology with lie-detection potential is the eye tracker, which picks up changes in the way someone's eyes scan a picture. Our eyes spend less time analysing a familiar scene, and because this depends on unconscious cognitive processes there is little we can do to control it.
The eye is also key to another detection technology based on physiology: periorbital thermography, a type of thermal imaging designed to spot changes in blood flow in the capillaries beneath the skin - an indicator of stress or fear. In a 2002 experiment, a team led by James Levine of the Mayo Clinic in Rochester, Minnesota, used thermal imaging to monitor the response of subjects involved in a mock crime. They detected increased blood flow around the eyes in 84 per cent of the "guilty" participants.
The rush to find a device that works is being funded mainly by US government agencies such as the Department of Defense Polygraph Institute, the Defense Advanced Research Projects Agency, and the Department of Homeland Security. The latter has already begun to develop a system called Project Hostile Intent that aims to use the latest technologies to spot potential terrorists at airports, ports and borders. Yet there is considerable scepticism about its chances of success, because the technologies are at such an early stage and the theory behind many of them may be flawed. Fienberg believes the enthusiasm for alternatives to the polygraph such as thermal imaging and fMRI is misplaced. "All the studies I have examined on the topic are deeply flawed," he says.
Hank Greely, director of Stanford University's Center for Law and the Biosciences in California, also believes those developing the new technologies have far to go. None of them is yet reliable enough to be used in court, he says.
Greely is concerned that false claims over their efficacy will convince people to use them in inappropriate situations, and he has recently put forward a proposal for a new regulatory scheme for licensing the commercial use of all lie detection technologies, based on the US Food and Drug Administration's system for controlling the use of new drugs. The developers would have to put their technology through large-scale human trials "equivalent to the clinical trials of medicine" to prove it was safe and effective.
The problem with developing an effective lie-test device is obvious to anyone who has studied behavioural clues to deception. Lying is a complex, multi-dimensional activity: there is no single facial expression, behavioural tic or cognitive marker that gives it away.
The technologies being developed may be excellent at telling whether someone has an excessive physiological response to a particular question or image, but what they cannot do is determine the reason for that response. A person can be aroused by the stress of lying, or simply by the stress of being asked a leading question, or by the fear of not being believed, or by any number of other possible triggers peculiar to themselves.
The "wizards of deception" whom Paul Ekman has been studying, along with Maureen O'Sullivan at the University of San Francisco, are effective because they have an extraordinary gift for observation, and they observe a whole range of behaviours and responses when assessing someone.
"They seem to have templates of people that they use to make sense of the behavioural deviations they observe," says O'Sullivan. "So it is not a set of disembodied cues, but embedded behaviours that are consistent with each other as well as with the kind of person exhibiting them."
No technology has come close to delivering the accuracy of the wizards. Such naturally talented human lie-detectors are rare, but we can all learn some of the skills involved.
Ekman, who has helped train thousands of police investigators and others to spot the key signs, says a person's face, speech and body movements may all "leak" subtle signs of deceit. For example, fleeting micro-expressions can suggest someone's words do not reflect what they are feeling; squelched or asymmetric expressions, false smiles, pupil dilation or blanching may indicate a person is concealing information; slips of the tongue, pauses in speech, or a raised pitch or quickened pace may point to negative emotion or excitement. None of these by itself is proof of deceit, but taken together they can help a trained observer make a reasonably reliable judgment.
This is not good enough for many researchers working on technological alternatives. Their dream is a device that can spot deception with 100 per cent reliability, and eventually one that can do so remotely.
For others, such a scenario would be more like a nightmare. Do we want a world of complete certainty, they ask, where not only police investigators but also your employers, teachers, work colleagues or even friends could tell what's going on in your mind? What is clear is that it would be a very different world to the one we live in.