7 min read

Unmasking AI: From MIT to the Frontlines of Justice

Joy Buolamwini, center, and Kyle Chayka speak with Regina G. Barber at the 2024 Library of Congress National Book Festival, August 24. Photo by Shawn Miller/Library of Congress.

Ring the alarm.

Dr. Joy Buolamwini repeatedly embeds SOS signals throughout her academic memoir, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, which takes us through her personal journey.

In the past, SOS meant "save our ship" or "save our souls" in an emergency, but today Buolamwini's calling on us to "save our humanity."

Since discovering what she calls the "coded gaze" as a graduate student at the Massachusetts Institute of Technology (MIT) in 2015, she has used every possible means to build public awareness of the known but insufficiently addressed biases corrupting artificial intelligence (AI) systems.

In its simplest terms, the coded gaze refers to the inability of computer vision software built using AI to consistently and accurately recognize individuals with darker skin, regardless of gender.

Half the world's population (close to 4 billion people) has a darker-skinned complexion, and those billions are likely to feel, or have already felt, the negative consequences of AI systems.

What are those consequences?

  • Police departments are using facial recognition systems with known high error rates, leading to the arrest of the wrong Black man or woman on too many occasions.
  • Medical imaging systems that use AI are relied upon more and more to detect early-stage skin cancer, yet they don't perform nearly as well on men and women with darker skin.

These examples are just a few of the numerous issues shared in Unmasking AI, and they remain under-addressed.

Buolamwini Advocates for Human-Centered Technological Progress

Academics and researchers alike often write above the knowledge level and experiences of a portion of their readers. Buolamwini takes a different tack.

Whether she's talking about algorithms, computer vision, methods of data collection, or large language models (LLMs), she eases you into the room and sits with you.

Then she explains things at a level where all can understand. Buolamwini views poverty, climate change, and a range of humanity's most pressing issues as human matters, not technical matters.

She believes higher levels of human empathy will have a far greater impact than more expansive and capable technology.

In the future, LLMs may offer innovative approaches and solutions to longstanding problems, such as universal access to healthcare.

The challenge.

Would the same political leaders who passed the 'Big Beautiful Bill,' which will lead to millions of poorer American citizens losing their Medicaid coverage over the next few years, have the desire and humanity to take innovative new approaches?

How the Past Lives in Company Data

Millions of us today will find just the right, and sometimes not-so-right, job we're ready to go after through LinkedIn, Indeed, or another popular job board.

We'll spend a couple of hours customizing our resume and drafting a cover letter. The LLM-savvy will craft the perfect prompt, iterate in ChatGPT, and shrink the time from two hours to thirty minutes.

In no time, you're pushing "submit" on the company's website, and off your resume goes into the black hole.

What's on the other end of the black hole?

Approximately 88% of companies deploy some form of AI or automation for initial candidate screening. (World Economic Forum)

Humans aren't screening resumes in the black hole. Honestly, you know that.

Active job hunters are told to optimize their resumes for applicant tracking system (ATS) software, and we're doing just that. But, that's where your control ends.

You have zero visibility into what company data might have been used to train the model that identifies and selects the best candidates to interview for the role.

Buolamwini breaks down how machine learning uses private company data to train a model, and how that process led Amazon's test model to "screen out" qualified female candidates for various roles. Even with awareness and testing, Amazon's engineers were unable to compensate for the gender bias inherent in their existing data.

Amazon never went live with this system. However, given the sheer prevalence of ATS software in today's hiring process, the public needs increased visibility and greater transparency into how these systems are trained.

Buolamwini has a name for these inherited biases: "power shadows," which she frames as "the past dwelling in our data."

Counting Faces, Searching for Belonging

If you're able to talk "off the record" with a Black colleague, because you personally know them well enough to have an off-the-record conversation, ask them what is one of the first things they check when they start a new job with a new employer.

Assuming we know each other like that, I'll tell you, "It's counting how many other Black faces surround me."

Candidly, it's encouraging to see four other Black professionals on the same team, even just the same floor.

During my professional career, I spent years on a team of approximately 70 as its only Black man, alongside two Black women.

Journey with me for a second. Imagine there's a new role for a social media manager on a team with the exact number of team members and the same composition as my old team.

A newly implemented ATS system has been rolled out to streamline the hiring process.

The model guiding the new system was trained on:

  • Three years of historical data labeled as "good hire" vs. "not hired" (rejected).
  • Data fields such as job title, education, school, skills, and years of experience for actual hires.
  • Two years of performance-review rankings identifying which hires have performed exceptionally since their start date.

The demographics for the 70 member team are:

  • 27 White males
  • 20 White females
  • 10 Asian males
  • 5 Asian females
  • 2 Hispanic males
  • 2 Hispanic females
  • 2 Black females
  • 1 Black male
  • 1 Native American male

Suppose I submit my resume to the black hole, which feeds into the ATS system, and I self-identify as a Black male.

What are the chances that the ATS system will tag me as an ideal candidate to interview for the role?
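As a rough, back-of-the-envelope sketch of the base rates involved, assume (purely for illustration) that this 70-person team is the model's entire pool of "good hire" examples:

```python
# Hypothetical team composition from the scenario above: if the ATS model's
# notion of a "good hire" is shaped by who was hired before, the training
# data contains exactly one Black male profile out of 70.
team = {
    ("White", "male"): 27, ("White", "female"): 20,
    ("Asian", "male"): 10, ("Asian", "female"): 5,
    ("Hispanic", "male"): 2, ("Hispanic", "female"): 2,
    ("Black", "female"): 2, ("Black", "male"): 1,
    ("Native American", "male"): 1,
}
total = sum(team.values())
share = team[("Black", "male")] / total
print(f"{share:.1%} of historical 'good hire' examples resemble this candidate")
# → 1.4%: the model has almost no positive examples resembling him, so any
# demographic proxies in the data push his ranking down.
```

The arithmetic isn't a prediction of any specific system's output, but it shows why a model trained this way has so little signal associating profiles like mine with "ideal candidate."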

The Allure of Safety: Trading Freedom for Surveillance

Buolamwini has staunchly opposed facial recognition systems for years, for a myriad of valid reasons. She has joined other academics, researchers, and privacy advocates in working to halt the broad-scale rollout of these systems, and they temporarily succeeded in having a moratorium imposed.

Fearmongers are seductively good at pitching and framing the need for additional surveillance under the guise of crime prevention.

Today, more than ever before, Americans are willing to surrender their rights to digital privacy and real-world privacy for the illusion of safety.

Buolamwini says facial recognition systems, even when used as intended by governments and companies, are destined to be employed for social control the moment an incident occurs that scares Americans.

The slippery slope we're on now began with the September 11th attacks.

Now, when these facial recognition systems aren't working as intended, an elaborate game of mistaken identity begins, too often leading to people of color being arrested for crimes they never committed, unjustly detained, and subjected to a host of other injustices.

In both cases, this ends up being a lose-lose proposition, with the only true winners being the technology companies that profit immensely from the systems they sell to governments, police departments, and businesses for millions.

Worth the Read

For the 95 percent of us who have never worked inside a technology company and have never held a technology-related role, Unmasking AI will broaden your understanding of the central issues being discussed inside and outside the technology sector.

Unmasking AI is vital for younger readers, particularly those between the ages of 13 and 18, who have aspirations for pursuing careers in technology.

Buolamwini has the rare gift of combining immense knowledge with the ability to explain complex concepts simply and clearly. ■


Visit the Algorithmic Justice League website and dig into the Library page. It will expand your knowledge of these issues and show you what the organization is doing now.


I am part of the affiliate program with Bookshop.org. If you decide this book is a must read for you, the link below will allow you to purchase Unmasking AI. I may earn a commission at no extra cost to you.

Unmasking AI: My Mission to Protect What Is Human in a World of Machines