Humans Have the Power to Decode Bias in AI
During a Q&A with filmmaker Shalini Kantayya, faculty, students, and the community examined the bias in algorithms that impact us all.
Algorithms make decisions for humans every day. Some decide who gets the COVID-19 vaccine first, while others determine which candidate gets a job or which person gets undue police scrutiny.
But these same systems are often not vetted for bias or discrimination, nor held to any standard of accuracy. MIT Media Lab researcher Joy Buolamwini discovered that facial recognition technology does not see dark-skinned faces accurately.
That finding inspired Coded Bias, a 90-minute documentary created by director/producer Shalini Kantayya. The film, which premiered at the 2020 Sundance Film Festival to critical acclaim, explores how Buolamwini pushed for the first-ever US legislation to guard against bias in algorithms.
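Buolamwini's result was, at its core, a gap in per-subgroup accuracy. Here is a minimal sketch of that kind of audit; the `samples` data and `predict` function are hypothetical stand-ins, not her actual data or methodology.

```python
# Minimal sketch of a per-subgroup accuracy audit, in the spirit of
# Buolamwini's finding. `samples` and `predict` are hypothetical
# stand-ins, not her actual data or methodology.
from collections import defaultdict

def subgroup_accuracy(samples, predict):
    """samples: iterable of (image, true_label, subgroup) tuples.
    Returns classification accuracy per subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, true_label, subgroup in samples:
        total[subgroup] += 1
        if predict(image) == true_label:
            correct[subgroup] += 1
    return {group: correct[group] / total[group] for group in total}

# A system that looks accurate overall can still fail badly for one
# subgroup; comparing these per-group numbers is what exposes the gap.
```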

The virtual event was moderated by Northwestern Engineering’s Josiah Hester and Sarah Van Wart, along with PhD students Natalie Araujo Melo and Stephanie Jones, both part of the Computer Science and Learning Sciences program.
How did we reach a point where bias in AI and machine learning (ML) technologies goes unscrutinized? One possible answer: people from different backgrounds are not talking to one another.
“Science and technology are currently in the hands of the few. But in order to solve big problems, the public needs to be engaged,” said Kantayya. “Lay people have this intense fear of feeling like an imposter when they engage in conversations about these technologies. But the reality is 10-year-olds are using them. We need a shared language and safe space where voices of dissent can be heard.”

Kantayya said the fight for ethics in AI has been led by people of color, women, religious minorities, and LGBTQ folks.
“Three Black women scientists shined a light on bias in AI that the three biggest tech companies in the world missed,” Kantayya said. “That’s the reason it’s so important people like you [Hester] are in the room: people with identities that allowed them to see something that was missing.”
Even programmers with the best intentions can create bias in their code. When Amazon built an AI system to automate the search for top talent, Kantayya said, the model effectively erased women from the hiring pool.
“It’s striking because the first programmer was Ada Lovelace. One of the first compilers was built by Grace Hopper. It’s interesting how now, again, we rely on people like Joy to save us from ourselves,” Hester said.
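The Amazon anecdote can be made concrete with a toy example: a classifier trained on historically skewed hiring decisions learns gendered proxy terms. The resumes and labels below are invented for illustration, and this is not Amazon's actual system.

```python
# Toy illustration of the Amazon anecdote (not Amazon's actual system):
# a text classifier trained on historically skewed hiring outcomes
# learns gendered proxy terms. All resumes and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain",           # advanced in the skewed history
    "software engineering lead",    # advanced
    "women's chess club captain",   # rejected
    "women's coding society lead",  # rejected
]
advanced = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, advanced)

# The token "women" receives a negative weight: the model has encoded
# the historical bias as if it were a signal about merit.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative
```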

One conclusion: Oversight and regulation of AI and ML are needed, especially because of their scale.
“You have technology like facial recognition going from big tech companies in the US straight to law enforcement, ICE, and the FBI without an elected official in between to give oversight,” Kantayya said. “We need a vetting system like the FDA that enforces standards.”
Conversations between people from different industries — and different countries — are critical, too.
“A lot of times scientists are working in a space where they are not connected to the people who are most vulnerable to the potential harms and impact of their technologies,” Kantayya said. “The people who are most vulnerable need to have access to the science in an empowered way.”

“The larger question is how can technology be in service of a human-centered society?” Kantayya said. “By visiting different countries, I wanted to explore the ways we’re approaching data rights.”

Kantayya issued a call to action to engineers, educators, and lay people alike: Get educated about how bias can infiltrate everyday technologies. Resources are available on the Coded Bias website, including reading materials, discussion questions, an activist toolkit, and more.
“I made Coded Bias because I believe everyday people can make a difference, and the people in Coded Bias have shown that,” Kantayya said.