Julia Hirschberg: Lies and Linguistics—Machine Learning Gets Closer to the Truth

Julia Hirschberg in the Speech Lab, where her team performs laboratory studies on human speech production, analyzes speech, and builds speech technologies. Speech data is recorded in this double-walled soundproof booth. (Photo by Jeffrey Schifman)

There’s an art and a science to spotting deception. Until now, body language, writing style, and biometric measures (as recorded by polygraph machines) have been relied upon to indicate whether a person is telling the truth. But none of these has proven a reliable indicator of deception.

A key to better detecting the truth could reside in a person’s verbal cues, according to Julia Hirschberg, Percy K. and Vida L.W. Hudson Professor and Chair of Computer Science. She bases her research on what science already knows about why deception is detectable: an increase in cognitive load, coupled with the fear of detection, can lead to behavioral changes. When lying, a person may depart from normal behavior, raising or lowering pitch, speaking louder or softer, or looking directly at a conversational partner (or pointedly avoiding doing so). Such differences vary by individual, and until now it has been impossible to predict with real accuracy how a given person will act when lying. “Our first work was in American English, where we discovered what practitioners believed anecdotally—that there are significant individual differences in deceptive behavior within a single culture. We hypothesized that there must be a way to figure out the reason for these differences,” she explains.
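Cues like pitch and loudness are straightforward to quantify from a recording. Below is a minimal sketch of that kind of feature extraction, assuming the open-source librosa library; the file path and the particular summary statistics are illustrative, not a description of Hirschberg’s actual pipeline.

```python
# A minimal sketch of acoustic feature extraction, assuming the librosa
# library; the file path and feature set are illustrative only.
import numpy as np
import librosa

def acoustic_features(wav_path):
    """Summarize pitch and loudness cues for one utterance."""
    y, sr = librosa.load(wav_path, sr=None)  # keep the native sample rate
    # Fundamental frequency (pitch) track; NaN marks unvoiced frames
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    rms = librosa.feature.rms(y=y)[0]  # frame-level energy, a loudness proxy
    return {
        "pitch_mean": float(np.nanmean(f0)),
        "pitch_range": float(np.nanmax(f0) - np.nanmin(f0)),
        "energy_mean": float(rms.mean()),
        "energy_max": float(rms.max()),
    }
```

Per-utterance summaries like these, compared against a speaker’s truthful baseline, are the kind of raw material a deception model can work with.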

An example of the waveform of an utterance from the corpus, along with its spectrogram—a visual representation of the power at each frequency—and its phonetic and orthographic transcriptions. The final row shows the truth value of the utterance, as reported by its speaker. (Images courtesy of Sarah Ita Levitan)
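For readers curious how such a spectrogram is produced, here is a minimal sketch using SciPy; the file name is hypothetical, and a mono recording is assumed.

```python
# A minimal sketch of computing a spectrogram with SciPy; "utterance.wav"
# is a hypothetical file, and a mono recording is assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("utterance.wav")      # sample rate and waveform
x = x.astype(np.float64)                   # avoid integer overflow in the FFT
f, t, Sxx = spectrogram(x, fs=fs)          # power at each (frequency, time) bin
log_power = 10 * np.log10(Sxx + 1e-10)     # decibel scale, as typically plotted
print(f"{len(f)} frequency bins x {len(t)} time frames")
```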

Hirschberg and her colleagues at the University of Colorado and SRI International had already shown that scores on a simple, standard personality test correlated with human judges’ ability to detect deception. Raters who scored higher in traits such as openness to experience and agreeableness were significantly better at distinguishing truth from lie. The researchers wondered whether the same scores could also predict individual differences in behavior when lying. “We decided to see whether we could use personality scores to help predict variation in acoustic and prosodic features of deceptive versus nondeceptive speech,” she says.
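The statistical test behind such a finding is a simple correlation. The sketch below uses SciPy’s pearsonr with made-up numbers standing in for the real ratings; it shows only the shape of the analysis, not the study’s data.

```python
# A minimal sketch of the correlation test; the arrays are placeholders,
# not data from the study.
from scipy.stats import pearsonr

openness = [3.2, 4.1, 2.8, 4.5, 3.9]       # one personality score per rater
accuracy = [0.55, 0.68, 0.49, 0.72, 0.61]  # each rater's lie-detection accuracy

r, p = pearsonr(openness, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")  # a significant positive r supports the link
```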

This work has already led to machine learning classifiers that allow machines to recognize deceptive speech with reasonable accuracy. “Those algorithms, which we developed for the Department of Homeland Security and the National Science Foundation, were quite successful, enabling us to identify deceptive speech with 70 percent accuracy,” she states. Since human accuracy at the same task is below chance (in laboratory experiments, only criminals perform well), this classifier performance was impressive.
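A classifier of this kind can be trained with standard tools. The sketch below assumes scikit-learn and an off-the-shelf random forest, cross-validated over placeholder features and labels; the study’s actual model and feature set are not detailed here, so none of the specifics should be read as Hirschberg’s method.

```python
# A minimal sketch of a deception classifier, assuming scikit-learn; the
# random forest and the random placeholder data are illustrative, not the
# team's actual model. Random labels score near chance (~50%); real
# acoustic-prosodic features and truth labels are what lift accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # 200 utterances x 12 acoustic features
y = rng.integers(0, 2, size=200)  # 1 = deceptive, 0 = truthful

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```

With real labeled utterances in X and y, the mean cross-validated score is the kind of accuracy figure quoted above.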

Now, Hirschberg is not just helping to develop lie-detection technology that is more accurate than human intuition or the polygraph. She is also discovering cross-cultural differences and similarities in how people deceive. Funded by a grant from the Air Force Office of Scientific Research to examine deception in speech across cultures, she is currently comparing the deceptive behavior of native speakers of English and Chinese.

Her earlier research resulted in the largest collection of cleanly recorded deceptive/nondeceptive American English speech, which Hirschberg has made available for study by the research community. The corpus she is collecting now is much larger and has produced some interesting results so far. “We are now finding that the ability to detect deception is correlated significantly with the ability to deceive. This makes sense, but no one had demonstrated that before,” she says.

While Hirschberg and her colleagues are still analyzing the new data, she is confident the study will yield fresh insights into how people from different cultural backgrounds deceive and detect deception. In the future, machines that help humans identify deception could find practical application across society: strengthening law enforcement and security, delivering justice and social services more fairly, improving employee relations, and increasing credibility in politics. While Hirschberg and her collaborators are getting closer to that reality, she does not discount the human factor.

“Tools created through artificial intelligence along with human perception could be the most powerful way to get closer to the truth,” she says.