Faculty Tech Talk: Debugging Bias

Mar 14 2019 | By Jesse Adams | Chang and Hirschberg Photo Credit: Jeffrey Schifman | McKeown photo courtesy of Kathy McKeown

“Garbage in, garbage out,” the saying goes: flawed information leads to flawed computation. Perhaps nowhere are the consequences of that more concerning than when artificial intelligence (AI) trained on tainted data unwittingly recreates social inequities.

At Columbia Engineering, researchers prioritize programming for brighter outcomes, from identifying opportunities to de-escalate gang violence to searching for better ways to sort job applicants and assess creditworthiness. On February 18, three of the School’s esteemed computer scientists and engineers joined Dean Mary C. Boyce and a crowd of students for a faculty tech talk on how artificial intelligence can help foster social justice.

Each of the professors brings a unique expertise to the table. Shih-Fu Chang, senior executive vice dean and a professor of electrical engineering and computer science, specializes in computer vision, machine learning, and multimedia information retrieval. Professor Julia Hirschberg focuses on computational linguistics and speech. And Professor Kathy McKeown, founding director of the Data Science Institute, explores natural language processing, generation, and summarization.

Together, they tackled fake news, shifting context, and population bias. We’ve excerpted a few edited highlights of the conversation below.

Pictured: Shih-Fu Chang, Julia Hirschberg, and Kathy McKeown

Q: There are many different ways that engineering confronts these challenges. From a computer science perspective, how is each of you approaching social justice issues?

Hirschberg: Back in 2012, a student and I tried to work on the identification of hate speech, but it’s a very difficult topic: the same remark may be viewed as offensive when one person says it to another, yet not when it’s said among friends. So it’s difficult to identify in social media. We did have a certain amount of success at the time, and it’s something that more companies today are really interested in. I’ve also started investigating what can be done about the less good things that AI does. One thing that has been really concerning to me is how the deep learning systems that are all the rage today perpetuate the biases of the data they’re trained on; this is true in machine translation, job search, and face recognition. AI software is currently being used to make very serious decisions such as loan approval, job candidate selection, parole determination, criminal punishment, and educator evaluation, often without user awareness of its limitations.
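
For readers curious what “perpetuating the biases of the data” can look like in practice, below is a minimal, illustrative Python sketch (not anything the panelists built) of checking whether word embeddings associate occupations with gendered words. The tiny hand-made vectors are placeholders standing in for real pretrained embeddings such as word2vec or GloVe.

```python
# Illustrative sketch (not the panelists' method): measuring how a word
# embedding space can encode social bias, in the spirit of association
# tests. The vectors below are tiny hand-made stand-ins; a real check
# would load pretrained embeddings (e.g., word2vec or GloVe).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-d embeddings chosen only to show the mechanics.
emb = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.2, 0.8, 0.4]),
    "he":       np.array([1.0, 0.0, 0.2]),
    "she":      np.array([0.1, 1.0, 0.2]),
}

def gender_lean(word):
    """Positive -> closer to 'he', negative -> closer to 'she'."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

for w in ("engineer", "nurse"):
    print(f"{w:>9s}: gender lean = {gender_lean(w):+.3f}")
# If occupations lean consistently toward one gender vector, a model
# trained on these embeddings can reproduce that association downstream.
```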

McKeown: One project that my group is currently working on—which happens to be my favorite right now because it’s challenging, interdisciplinary, and actually all of the work has been done by undergraduates—is with Desmond Patton at the Columbia School of Social Work. Four years ago, he came to me and said he was looking for some people in natural language processing to join a project of his. I knew I was really supposed to connect him with others, but after hearing a little bit I said, “I want to work on that project with you.” He was studying gang-involved youth in Chicago, and one of the things he’d noticed is that they were often posting on social media, and these posts could escalate and lead to real-world violence. We began our work by focusing on aggression and loss, because he had found that youths who experienced a lot of trauma often posted first on social media about that loss, and subsequently that could turn into posts about aggression. Patton’s group found that context was critical, but they only had a relatively small labeled dataset. So the challenge was developing a system that could automatically detect these posts without using standard language processing tools, because those tools had been trained on very different language from the New York Times and the Wall Street Journal. Another challenge was adding features that captured information from context, in particular the emotional content of what people said before a particular tweet. We got close to 70% accuracy.
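
As a rough illustration of the kind of pipeline described here (a toy sketch, not the Columbia system; the posts, labels, and lexicon below are all invented), one could combine a post’s own words with a simple feature summarizing the emotional tone of the author’s earlier posts:

```python
# Minimal sketch, not the Columbia system: classifying a post as
# "aggression", "loss", or "other" using the post's own words plus a
# crude feature summarizing the emotional tone of the posts that came
# before it. Data, labels, and the lexicon are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

NEGATIVE_WORDS = {"miss", "gone", "rip", "hurt", "pain"}   # toy lexicon

def context_negativity(prior_posts):
    """Fraction of words in preceding posts that are 'negative' -- a toy
    proxy for the emotional-context features described in the talk."""
    words = " ".join(prior_posts).lower().split()
    return sum(w in NEGATIVE_WORDS for w in words) / max(len(words), 1)

# (post, prior posts by the same author, label) -- fabricated examples
data = [
    ("rip big bro miss you",    [],                        "loss"),
    ("they gonna pay for this", ["rip big bro miss you"],  "aggression"),
    ("good game last night",    ["see you at practice"],   "other"),
    ("cant believe you gone",   [],                        "loss"),
    ("pull up then",            ["cant believe you gone"], "aggression"),
    ("happy birthday lil sis",  ["good game last night"],  "other"),
]
texts, contexts, labels = zip(*data)

vec = TfidfVectorizer()
X_text = vec.fit_transform(texts)                           # word features
X_ctx = csr_matrix([[context_negativity(c)] for c in contexts])  # context feature
X = hstack([X_text, X_ctx])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))   # sanity check on the toy training set
```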

Chang: I work on computer vision and multimedia, and Professor Patton came to talk to me about his work with Professor McKeown on understanding communications potentially involved in gang violence. I didn’t think images and multimedia would help much, but he showed me data demonstrating that they are very important in these communications, both emojis and the very specialized kinds of images people use in these communities. When you work on computer vision and machine learning, one of the big challenges is how to make a robust recognition model quickly adapt to new data without too much of a training process. The topic was so compelling that a lot of graduate and undergraduate students from different schools have come together to work on the project of understanding information in social media and identifying the data that is important to understand. Another project we’ve been working on, supported by DARPA, is trying to understand the information network of the dark web as well as who’s involved and who are the potential victims of human trafficking. Given this huge amount of data, how do you help law enforcement and non-profit agencies quickly access information in this huge, complex space?
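
One common recipe for the adaptation problem Chang describes, sketched below purely as an illustration rather than his group’s method, is to take a model pretrained on a large dataset, freeze its backbone, and retrain only a small classification head on the new domain’s limited labels:

```python
# Sketch only (not Prof. Chang's system): adapting a pretrained
# recognition model to a small, specialized image set by freezing the
# backbone and retraining just the final classification layer, so very
# little data and compute are needed.
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 5          # e.g., specialized image types in the new domain

model = models.resnet18(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
for p in model.parameters():
    p.requires_grad = False                        # freeze existing features
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy stand-in for a small labeled batch from the new domain.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_NEW_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one adaptation step, loss = {loss.item():.3f}")
```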

Q: How is AI being used to detect the profusion of fake news?

Hirschberg: We’re actually working on this with people at the Journalism School, trying to identify trusted news, or rather, what kind of news people trust and what kind they don’t. We have to do this separately for different demographic groups and political proclivities. It’s really quite fascinating that you can tell there are certain kinds of words people use to gain trust from different political groups. They might not know they’re doing it, but people write for different in-groups.
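
A very simple way to surface the kind of group-specific wording Hirschberg mentions, shown here only as an illustrative sketch with invented placeholder text rather than the project’s actual data or method, is to compare smoothed word frequencies between texts trusted by two different groups:

```python
# Illustrative sketch only: surfacing words that appear disproportionately
# in text trusted by one group versus another, using a smoothed frequency
# ratio. The two "corpora" below are invented placeholders.
import math
from collections import Counter

group_a_text = "hardworking families community faith freedom jobs"
group_b_text = "science evidence community policy climate jobs"

def counts(text):
    return Counter(text.lower().split())

a, b = counts(group_a_text), counts(group_b_text)
vocab = set(a) | set(b)

def lean(word, alpha=0.5):
    """Smoothed log-ratio of word frequency: positive leans toward group A."""
    pa = (a[word] + alpha) / (sum(a.values()) + alpha * len(vocab))
    pb = (b[word] + alpha) / (sum(b.values()) + alpha * len(vocab))
    return math.log(pa / pb)

for w in sorted(vocab, key=lean, reverse=True):
    print(f"{w:>12s}  {lean(w):+.2f}")
```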

Q: As engineers, we often take algorithms and techniques that may have been developed for a specific environment and translate them to a very different field. How does this new context feed back into your work?

McKeown: There are different meanings to the word context. One has to do with moving between domains—as with personal narratives on social media, where the language is much more informal than what you see in, say, the Wall Street Journal. Another meaning is, if I’m looking at a single tweet, the context would be everything that was said up until that point in time. And it could be everything that was said by the person who posted that tweet, or it could be everything that was said by friends of the person who posted that tweet. But yet another kind of context is where we have words that refer to events, such as the Boston Marathon bombing. That refers to a particular event in the real world, and we read it, and we know what it refers to. With these kinds of messages, words that refer to real-world events are often not words in our regular vocabulary. How can we figure that out computationally? At this point I don’t know. And when I don’t know how we’re going to solve a problem and nobody has solved that problem before, for me that is what is fun.

Q: Can you talk a little about the impact of population bias, which can play a large role in determining how accurately a model performs?

Chang: If you just maximize utility or the commercial value of a system, maybe for 95% of the data it will get a good result, but we need to serve different subgroups of the population equally and fairly. That’s where the justice concept is really key: we should not just maximize the utility or capital of a system, we should really look at different things that can be improved and made more fair.
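
The bookkeeping behind Chang’s point is straightforward to sketch: overall accuracy can look strong while a smaller subgroup fares much worse. The numbers below are fabricated purely for illustration.

```python
# Sketch of the point being made: a model can score well overall while
# failing a smaller subgroup. Predictions, labels, and group tags below
# are fabricated to illustrate the bookkeeping, not real results.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "A", "A", "B", "B", "B"])

overall = (y_true == y_pred).mean()
print(f"overall accuracy: {overall:.0%}")          # 70% -- looks acceptable
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"  group {g}: {acc:.0%} on {mask.sum()} examples")
# Optimizing only the overall number lets errors concentrate in group B;
# fairness-aware evaluation also tracks the per-group numbers.
```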

