Technology

APL has developed a futuristic system to help detect when someone is lying

The Mixed-Reality Social Prosthesis, based on the skills of so-called human lie detectors, magnifies microexpressions

Illustration of a pixelated person's face

Credit: Jing Tsong/The iSpot

In the TV drama Lie to Me, protagonist Cal Lightman has a very particular set of skills. When interviewing someone, typically a bad guy, this self-titled deception consultant can perceive and translate the smallest of involuntary reflexes on someone's face: the twitch of an eye, a tremble in the lip, a wrinkle of the nose. These "microexpressions," the character claims, can expose when a person is lying, even when he or she is trying hard to conceal it. The subtle movements also reveal a flicker of true emotion, be it scorn, happiness, shame, or contempt.

Lightman is based on clinical psychologist Paul Ekman, a leading expert on lie detection who has spent decades deciphering facial movements to identify our unspoken emotional tells. Ekman's work provided inspiration for a hardware- and software-based system developed by researchers at the Johns Hopkins Applied Physics Laboratory to enhance an interviewer's ability to detect social signals and emotional displays in real time. Dubbed the Mixed-Reality Social Prosthesis, the system is not quite an emotion detector but rather an emotional highlighter that amplifies the subtle nonverbal cues of the face and eyes displayed in the course of social interaction. (The system builds on the foundational knowledge of those at the National Center for Credibility Assessment, as well as scientists like Ekman, who was consulted for its development.)

Video credit: Johns Hopkins Applied Physics Lab

A range of sensors collects minute psychophysiological signals, such as pupil size, blink rate, gaze direction, and lip curl. The system also uses facial thermography to detect phenomena such as rapid heating and cooling around the nose, which can indicate deception. Wearing a HoloLens, Microsoft's mixed-reality headset, the interviewer sees color-coded data points overlaid on the subject's face: perhaps a series of red dots highlighting a nostril flare, or slanted white lines when the subject's gaze shifts to the left.
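
How such a pipeline might fit together can be sketched in a few lines of Python. Everything here is assumed for illustration: the signal names, the thresholds, and the Annotation type are invented stand-ins for whatever APL's sensors and the headset's renderer actually use.

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        region: str  # facial region to highlight in the headset view
        shape: str   # e.g., "dots" or "slanted_lines"
        color: str   # overlay color cue shown to the interviewer

    # Hypothetical per-frame readings from the sensor suite.
    frame = {
        "pupil_dilation": 0.42,    # normalized change from baseline
        "blink_rate_hz": 0.9,
        "gaze_direction": "left",
        "nose_temp_delta_c": 0.6,  # facial-thermography reading
    }

    def annotate(frame):
        """Translate raw sensor readings into visual cues on the face."""
        cues = []
        if frame["nose_temp_delta_c"] > 0.5:   # rapid heating around the nose
            cues.append(Annotation("nose", "dots", "red"))
        if frame["gaze_direction"] == "left":  # averted gaze
            cues.append(Annotation("eyes", "slanted_lines", "white"))
        if frame["pupil_dilation"] > 0.4:      # arousal indicator
            cues.append(Annotation("pupils", "ring", "amber"))
        return cues

    print(annotate(frame))

Note that the sketch only surfaces cues; in keeping with the system's design, it draws no conclusion about what they mean.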

"The system emphasizes whatever expressions occur for the user to interpret on their own, as opposed to declaring, 'This person is lying,' or, 'This person is afraid.'"
Ariel Greenberg
APL research scientist

The system's user is looking for these microexpressions, either a fleeting trace of emotion or what's called leakage, says Ariel Greenberg, an APL research scientist and the project's principal investigator. Leakage is what can happen when you consciously attempt to suppress an emotion, he explains. In that moment, perhaps only a split second, a twitch or tiny movement escapes. A paramilitary leader, for example, may exhibit a microexpression of contempt when welcoming U.S. Special Forces, and it's incumbent upon those team members to determine its meaning and significance.
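
The defining trait of a microexpression is its brevity; Ekman's work puts it at a fraction of a second. Here is a toy sketch of how brevity alone could flag leakage, assuming a per-frame expression classifier and an illustrative half-second cutoff, both invented for this example.

    from itertools import groupby

    FPS = 30             # assumed camera frame rate
    MAX_MICRO_SEC = 0.5  # runs shorter than this count as "micro"

    # Hypothetical per-frame labels from an expression classifier.
    labels = ["neutral"] * 20 + ["contempt"] * 6 + ["neutral"] * 20

    def find_microexpressions(labels, fps=FPS):
        """Return (expression, start_frame, duration_sec) for brief runs."""
        hits, frame = [], 0
        for label, run in groupby(labels):
            length = len(list(run))
            if label != "neutral" and length / fps < MAX_MICRO_SEC:
                hits.append((label, frame, length / fps))
            frame += length
        return hits

    print(find_microexpressions(labels))  # a 0.2 s flash of contempt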

"The system emphasizes whatever expressions occur for the user to interpret on their own, as opposed to declaring, 'This person is lying,' or, 'This person is afraid,'" he says. "That's why the human in the loop is so critical. This is not a system that stands on its own, like a metal detector for lies."

The Mixed-Reality Social Prosthesis was initially developed for intelligence interviewers and police officers, who could use it to detect deception and, as a training tool, to sharpen de-escalation and conflict-resolution skills by becoming more attuned to a subject's emotional state. The system can also screen multiple faces at once: someone outfitted with the device can pose a question to a crowd and then scan the sea of faces for leakage or any anomalous reaction.
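
One way to picture the crowd-screening mode is as simple outlier detection: score every face's reaction to the question, then flag whichever face departs sharply from the group. The scores and cutoff below are made up for illustration; with only a handful of faces a z-score stays small, hence the modest threshold.

    from statistics import mean, stdev

    # Hypothetical per-face reaction scores after a question is posed.
    reactions = {"face_01": 0.11, "face_02": 0.09, "face_03": 0.78,
                 "face_04": 0.14, "face_05": 0.10}

    def anomalous_faces(reactions, z_cutoff=1.5):
        """Flag faces whose reaction is an outlier relative to the crowd."""
        mu = mean(reactions.values())
        sigma = stdev(reactions.values())
        if sigma == 0:
            return []
        return [face for face, score in reactions.items()
                if (score - mu) / sigma > z_cutoff]

    print(anomalous_faces(reactions))  # ['face_03']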

There are also possible applications in health care. For example, the system could be used to identify the emotions of people with flat facial affect, such as stroke survivors or people with cerebral palsy. It could also potentially be used for therapeutic purposes, such as training people with autism to recognize facial cues.

Greenberg says the team hopes to bring in other sensor modalities and to better understand inconsistencies, cases where a twitch might suggest one thing and a facial thermal reading another. He cautions that any system, no matter what subtleties it brings to the fore, will still depend on the educated guesswork and interpretation of its human user.
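
The kind of cross-modal disagreement he describes could be surfaced rather than resolved, keeping the human in the loop. A sketch of that idea, with invented channel names and labels:

    # Hypothetical per-channel interpretations of the same moment.
    readings = {"micro_expression": "contempt",
                "facial_thermography": "calm",
                "gaze": "contempt"}

    def inconsistencies(readings):
        """Return pairs of channels whose interpretations disagree."""
        items = list(readings.items())
        return [(a, b) for i, (a, x) in enumerate(items)
                for (b, y) in items[i + 1:] if x != y]

    # Conflicts are flagged for the interviewer, not adjudicated.
    print(inconsistencies(readings))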

"The best liars think they are telling the truth," he says. "That is what makes them so good."