Coded Bias is a feature-length documentary that looks at how bias creeps into technology, specifically addressing bias in algorithms. The film, which premiered at Sundance 2020, is now playing virtually across the U.S. until Feb. 12, and will begin streaming in Canada via Hot Docs at Home on Feb. 11.

The documentary, directed by Shalini Kantayya, follows MIT computer scientist Joy Buolamwini as she investigates the bias in facial recognition technology. It opens with Buolamwini talking about her research and what piqued her interest in AI bias.

Buolamwini explains her interest in developing an “aspire mirror” for a project at MIT. The “aspire mirror” would use facial recognition technology to replace Buolamwini’s face with an image of someone who inspires her, such as Serena Williams.

Buolamwini says she put a camera on top of the mirror and got computer vision software to track her face. But the software “didn’t really work” until she put on a white mask, she says. Not knowing why the software didn’t recognize her, Buolamwini sets out to investigate, leading us through her research and conversations about algorithmic bias and how AI systems tend to favour light-skinned men.

“We [often] teach machines to see by providing training sets or examples of what we want it to learn,” she says, adding that those training sets are compiled from images that often reflect the biases of the people who assemble them.
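The mechanism she describes can be shown with a toy sketch. The Python example below (invented for illustration; it is not from the film and all scores and group names are hypothetical) trains a one-feature “face detector” on data skewed 9-to-1 toward one group, and the learned threshold ends up failing entirely on the underrepresented group:

```python
# Hypothetical illustration: a 1-D "face detector" that learns a single
# score threshold from its training set. Group A faces score ~0.8,
# group B faces score ~0.4, non-faces ~0.1. All numbers are invented.

def fit_threshold(face_scores, nonface_scores):
    """Midpoint between the mean face score and the mean non-face score."""
    mean_face = sum(face_scores) / len(face_scores)
    mean_nonface = sum(nonface_scores) / len(nonface_scores)
    return (mean_face + mean_nonface) / 2

# Skewed training set: nine group-A faces for every one group-B face.
train_faces = [0.8] * 9 + [0.4] * 1
train_nonfaces = [0.1] * 10

threshold = fit_threshold(train_faces, train_nonfaces)  # ~0.43

def detect(score):
    return score > threshold

# Evaluate on balanced test data: the detector finds every group-A face
# but misses every group-B face, whose scores fall below the threshold.
acc_group_a = sum(detect(s) for s in [0.8] * 10) / 10  # 1.0
acc_group_b = sum(detect(s) for s in [0.4] * 10) / 10  # 0.0
```

The point of the sketch is that nothing in the learning rule is malicious; the skew in the training set alone is enough to produce a detector that works for one group and not the other.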

The doc introduces a roster of like-minded data luminaries, including Cathy O’Neil, author of Weapons of Math Destruction; Dr. Safiya Umoja Noble, associate professor at UCLA, who specializes in algorithmic discrimination and technology bias; Silkie Carlo, director of Big Brother Watch, which monitors facial recognition technology being used by U.K. police; and Ravi Naik, a leading lawyer in the field of data protection and data rights.


Shalini Kantayya

Photo: Coded Bias

Coded Bias discusses key companies in the U.S. and China where AI is being developed, albeit along two different tracks.

While China has a more structured system when it comes to access to data, the U.S. has “no detailed point of view on AI,” says futurist Amy Webb. She notes that China uses the data to grant or deny people access to things (i.e., to maintain social order). The U.S., by contrast, she says, develops AI “not in the public interest but rather for commercial applications and to earn revenue.”

Another difference, Webb says, is that China is transparent about its practices.

The film also captures where surveillance is happening, taking us to the streets of London with Silkie Carlo of Big Brother Watch. It also delves into AI-driven decision-making: how apps and algorithms are quickly determining who is deemed credible, and how racism is becoming mechanized.

Carlo shows us an example of a person being misidentified, and she captures the reactions of those who must deal with being constantly monitored.

We hear a lot from Cathy O’Neil, who stresses how important it is to know ‘who owns the code’ and, in turn, who holds the power to make decisions about our lives. After all, the decisions AI makes are not ethical ones; they are mathematical.

There’s mention of Amazon’s decision to scrap its AI-driven recruiting tool after realizing it was biased against women, screening out candidates from women’s colleges and sports teams.

And the film re-introduces Tay, the AI chatbot released by Microsoft that was shut down 16 hours after launch because it started posting a slew of inflammatory tweets online.

The documentary makes sure to show how and when AI has gone wrong, with numerous examples that highlight injustices and the danger facial recognition technology poses. The story Coded Bias tells is playing out in real life, too: reports such as one from Northeastern University are further scrutinizing algorithmic fairness, soft biometrics (notions of identity not unique to an individual), and the racial categories used across datasets.
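Audits like the ones such reports describe often come down to simple group-wise comparisons. Below is a minimal Python sketch, with entirely invented numbers, of one common check: comparing false-positive rates between two demographic groups when a facial recognition system flags people against a watchlist. None of the figures come from the film or any real study.

```python
# Hypothetical fairness audit: compare false-positive rates across groups.
# A prediction of 1 means "flagged as a watchlist match"; all people below
# are true non-matches (label 0), so every flag is a false positive.

def false_positive_rate(predictions, labels):
    """Fraction of true negatives (label 0) the system wrongly flagged."""
    negatives = [p for p, y in zip(predictions, labels) if y == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Invented outputs for two demographic groups of eight non-matches each.
group_a = {"pred": [0, 0, 1, 0, 0, 0, 0, 0], "label": [0] * 8}
group_b = {"pred": [1, 0, 1, 1, 0, 0, 1, 0], "label": [0] * 8}

fpr_a = false_positive_rate(group_a["pred"], group_a["label"])  # 0.125
fpr_b = false_positive_rate(group_b["pred"], group_b["label"])  # 0.5
```

A gap like this, where one group is wrongly flagged four times as often as another, is exactly the kind of disparity the film documents with real systems on real streets.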

Coded Bias is a film that investigates how AI is shaping the way people are treated, and the algorithmic oppression people face as a result of the technology. It also presents statistics, such as the fact that fewer than 14 per cent of AI researchers are women, among other concerning findings.

But there’s hope. Buolamwini has helped usher in transparency around facial analysis technology on a global scale, with her work cited in over 230 published articles. She also founded the Algorithmic Justice League to push for better legislation and stronger protections for people’s privacy.

While highlighting AI’s connection to science fiction, Coded Bias stresses that we’re not living within a fun science fiction movie—and never have been.