Emotion Recognition in the Criminal Justice System

A young man is on trial for murder. His defense says he’s being railroaded. The prosecution paints him as a deceiving psychopath. Is he lying when he says he wasn’t there that night? We don’t have to look into his eyes to figure that out; the Emotion Recognition System does that for us. With dozens of cameras; thermal, pulse, and respiration measurements; sophisticated facial expression recognition algorithms, with billions of hours of human observation as background data; and thousands of hours of specific observation of this one defendant (from his time in police custody, from his social media accounts, subpoenaed from his home smart devices), the ERS is able to determine with 99% accuracy that he is lying. 

This is speculation now, but it will be reality soon enough. Facial recognition technology is advancing quickly. Facebook’s DeepFace and Google’s FaceNet claim to have achieved near 100% recognition rates, outperforming human counterparts at the task of identifying faces that belong to the same person. Amazon’s cloud-based Rekognition API is already being used by law enforcement, notably the Orlando Police Department and the Washington County Sheriff’s Office in Oregon. There are obvious concerns about privacy and the reliability of this technology; a test of Rekognition conducted by the American Civil Liberties Union incorrectly matched 28 members of Congress to a database of mugshots.
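
To make concrete how such a test works mechanically, here is a minimal sketch in Python using Amazon’s boto3 SDK: a single photo searched against a pre-indexed face collection, roughly the shape of the ACLU’s experiment. The collection name, image file, region, and 80% threshold are illustrative assumptions, not details from the actual test.

```python
# Hypothetical sketch: search one photo against a pre-indexed Rekognition
# face collection. Collection ID, file name, region, and threshold are
# assumptions for illustration only.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

with open("portrait.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.search_faces_by_image(
    CollectionId="mugshot-collection",  # collection built earlier with index_faces
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=80,              # similarity cutoff, in percent
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"{face.get('ExternalImageId', face['FaceId'])}: "
          f"similarity {match['Similarity']:.1f}%")
```

A “match” here is nothing more than a similarity score crossing a threshold; set that threshold low enough and false positives like the ACLU’s 28 become far more likely.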

And these concerns get a lot more complicated when emotion comes into play. That’s the next frontier. On August 12, Amazon put out a press release to announce Rekognition’s “improved accuracy for emotion detection.” This “sentiment analysis” is meant to detect “emotions that appear to be expressed on one’s face,” and they include “Happy,” “Sad,” “Angry,” “Surprised,” “Disgusted,” “Calm,” “Confused” and “Fear.” Despite the expected objections from poets and psychotherapists that there are surely more than eight emotions, there is an even deeper worry that Amazon rightly warns against: “the API is only making a determination of the physical appearance of a person’s face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.”
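
To see what that “sentiment analysis” actually returns, here is a minimal sketch, again in Python with boto3, that runs Rekognition’s face analysis on a single image and prints the emotion labels. The file name is a placeholder; what matters is that the output is a list of appearance-based labels with confidence scores, precisely the determination of “physical appearance” that Amazon describes, not a reading of anyone’s inner state.

```python
# Minimal sketch: ask Rekognition to analyze a face and print the emotion
# labels it reports. The image file is a placeholder; the labels describe
# apparent expression only, per Amazon's own caveat.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

with open("interview_frame.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # request all attributes, including Emotions
)

for face in response["FaceDetails"]:
    # Emotions is a list like [{"Type": "CALM", "Confidence": 83.2}, ...]
    for emotion in sorted(face["Emotions"],
                          key=lambda e: e["Confidence"], reverse=True):
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")
```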

It’s tempting to read that last sentence and say, “duh.” But when you’ve been repeatedly told that your face reveals you to be a psychopath, it sounds like a much-needed caveat. 

From the moment I was arrested, people began analyzing my smallest gestures and facial expressions, scouring for evidence of guilt. It continues to this day. Like this bit of pseudoscience claiming to be able to tell I’m lying in a Diane Sawyer interview. Or this post diagnosing when I cry and laugh, and concluding that I’m a manipulative psychopath. There are countless videos that attempt to analyze my body language, as if that could provide any reliable indication of deception or guilt.

I’m certain that people will be running footage of my interviews through emotion recognition algorithms before you know it. And it won’t just be me. The potential criminal justice uses for technology that senses emotion are manifold. Is this parole candidate showing true remorse? Is that witness certain or confused? Ask the algorithm. 

But the holy grail in this realm is lie detection. Lying isn’t an emotion; it’s an action. But there’s good reason to believe it’s correlated with certain emotional states that manifest in measurable psychosomatic ways.

That was the idea behind the polygraph test, which has not fundamentally changed since 1939. The test measures a subject’s blood pressure, pulse, respiration, and galvanic skin response in relation to a series of questions. But can it tell if you’re lying? 

The potential benefits of true lie detection are so great that for decades we fooled ourselves into thinking we’d found a solution in a handful of crude biometric readings. But as the test gained in popularity, it increasingly came under scrutiny. The pivotal moment came in the 1998 Supreme Court case United States v. Scheffer, which upheld the exclusion of polygraph evidence from military courts on the grounds that there is no scientific consensus that polygraphs are reliable, citing studies that put their accuracy at “little better than could be obtained by the toss of a coin.”

Today there is wide consensus in the scientific community that the polygraph is a form of pseudoscience. This is primarily because it does not, and indeed cannot, measure lies. It measures physical arousal. It’s true that arousal can result from anxiety, and that anxiety can result from deception. But arousal can also result from hypoglycemia, alcohol withdrawal, nicotine consumption, psychosis, depression, and a host of other factors. And even if we could narrow it to anxiety alone, anxiety is no guarantee of deception. It can be caused by PTSD, fear, lack of sleep, and even worry about false positives induced by the polygraph itself. 

And yet, despite the consensus that the polygraph is unreliable, it is still routinely used by the FBI, the CIA, the NSA, and many police departments, including the LAPD, to interrogate suspects and screen employees. That fact alone makes me wary that emotion recognition technology will be misused despite its known limitations.

And indeed, new research has severely undercut the idea that we can reliably infer how people feel based on their facial expressions. A study led by Lisa Feldman Barrett at Northeastern University concludes that “emotional expressions are more variable and context-dependent than commonly assumed,” and that “tech companies may well be asking a question that is fundamentally wrong,” critiquing the very idea of trying to discern internal states from facial movements alone, without a deep reliance on context. The authors say these methods are “at best incomplete and at worst entirely lack validity, no matter how sophisticated the computational algorithms.”

In an interview with The Verge, Barrett notes that how we communicate anger and all other emotions “varies substantially across cultures, situations, and even across people within a single situation.” And that’s not even taking into account the neurodiverse individuals among us, such as those on the autism spectrum, who show greater variation still in how they express emotion.

Even so, if generations of people could be convinced that blood pressure and galvanic skin response were reliable indicators of deception, it’s not hard to imagine the quick and credulous adoption of the next generation of lie detection software based on emotion recognition technology. 

I don’t want to ignore the rebuttal the tech-optimists will offer: the power of big data and deep learning means that the emotion recognition algorithms that today attempt to classify eight crude emotion categories from facial expressions will, by 2030, be using a thousand metrics, comparing your expressions, movements, speech patterns, and possibly even pheromones to a massive database of human expression. They will likely also compare your current expressions against your own baseline behaviors, gathered from the surveillance state we all increasingly live inside. That’s the scenario I outlined at the start of this essay. And with that level of sophistication, it will be incredibly tempting to use this technology in a criminal justice context, whether in interrogations or in the courtroom.

I’m not even saying I’m against that, especially if the technology measures a person’s specific behavior against their own baseline, thus avoiding the polygraph’s pitfall of generalizing human emotional responses. If emotion recognition improves to the point where we can say with 99% confidence that a person’s physical movements and expressions correlate with their own individual profile of deception, that’s useful information. And if it had been available back in 2007 when I was in Italy, I might never have spent a day in prison.

But here’s the thing: this technology still wouldn’t provide specifics about a person’s internal belief states. It may say with 99% accuracy that our defendant is lying. But lying about what? The murder or the tangential drug use brought up by the prosecution? And why? To protect himself or to protect a friend? Or because he can’t face the trauma surrounding the event? Nor would it tell us if an honest statement is actually true (only that the speaker believes it to be true, and eyewitness testimony is far from reliable). 

If and when we use emotion recognition in the courtroom, it has to be more probative than prejudicial. We have to remember that at the end of the day, all this information is still handed over to an emotionally manipulable jury, and that the sheen and authority of new technology, however unreliable it is, can easily sway the course of justice. We have to remember that honesty is not the same as innocence, that deception is not the same as guilt, and that it’s possible to pinpoint lie after lie, and still be no closer to the truth.
