Scholars address solutions to AI biases in film screening and panel discussion

By Hannah Ziegler

The University of Maryland College of Computer, Mathematical and Natural Sciences hosted an expert panel discussion on AI biases, centered on a documentary about the topic, via Zoom Friday, March 26. 

The panel consisted of experts across the data science, engineering and sociology fields to provide a holistic view of AI dangers. Panelists used the 2020 documentary “Coded Bias,” which more than 500 students watched in advance through a CMNS partnership with SEE, to guide their discussion. 

“Coded Bias” explores the fallout from MIT Media Lab researcher Joy Buolamwini’s discovery of racial bias in facial recognition algorithms. It follows Buolamwini as she and her colleagues uncover flaws in AI that affect people’s rights to privacy, freedom of expression and freedom from discrimination. 

Panelist Deborah Raji, a fellow at Mozilla who was featured in the film as Buolamwini’s mentee, said that the conversation started by the film is more important now than ever. 

The majority of the discussion focused on the need for accountability and openness among AI developers to combat human rights violations.

“Individuals who are harmed by algorithms are starting to get justice through civil suits. This paves a path toward policymaking routes,” Raji said.

The prevalence of black boxes in AI makes transparent algorithmic decision-making increasingly hard to find, said Margrét Bjarnadóttir, an associate professor of management science in the Smith School of Business. Black box AI is any artificial intelligence system whose inputs and operations are not visible to the user or another interested party.

Because of black box technology, many panelists said that responsibility inherently falls on the developers of flawed algorithms to mitigate the dangers of discrimination.

“If a human doesn’t hire you because you’re Black, and you have a way to prove this, then you can prove someone intended to discriminate,” said Nicol Turner Lee, a university sociology lecturer who serves as a Brookings Institution senior fellow in governance studies and director of its Center for Technology Innovation. “But machines don’t have intent. They only know what they are taught. So where does that discrimination come from?”

Lee added that there needs to be more transparency about how algorithms perpetuate racism so experts can fix these issues. The first step in increasing transparency is understanding the systems in which the algorithms operate. 

“If we’re developing algorithms based on externally flawed historical systems, then we need to acknowledge that,” Lee said. “A criminal justice algorithm could be extremely accurate, but how trustworthy is it in a system that we know targets people of color?” 

Some companies started to change their views on AI discrimination in the aftermath of last summer’s Black Lives Matter protests, said Adam Wenchel, a UMD alum and CEO of Arthur.

Wenchel said that companies are more aware than ever that they will be held accountable for supporting AI that perpetuates racial stereotypes. Many companies have turned to Arthur to integrate algorithmic technology responsibly after the backlash against companies at the forefront of racially biased facial recognition software, such as Amazon and Google. 

Raji echoed this optimism, recalling that colleagues dismissed the first instances of racial bias in facial recognition she noticed. Now that a conversation about racial discrimination in algorithms has started, she said, regulation is no longer out of reach. 

Even though the issues in AI technology may seem like they won’t affect the average UMD student, AI already affects every aspect of a student’s daily life, according to Kate Atchinson, the event’s coordinator. 

“It doesn’t matter what field you’re in because the intersection of AI and technology is everywhere,” Atchinson said. “To be an engaged citizen, you have to understand the bias in these algorithms that you use every day.” 

In the closing moments of the panel, Lee echoed Atchinson’s sentiment. 

“Technology is here to stay. It’s not going anywhere. We’ve either got to get [AI] right in a multi-stakeholder conversation, or get [AI] right by putting a stop to it if it continues to harm people,” Lee said. “We need to make sure people maintain their dignity and respect while being able to benefit from technological advancements.”

Featured image: Scholars in technology and sociology discussed solutions to racial biases in AI on Zoom Friday, March 26. Photo by Hannah Ziegler