
Is A.I. already biased?

MMM DD YYYY
WORDS BY LORRAINE LAU
Geoffrey Hinton’s warnings about an A.I. takeover have placed him in the global spotlight — but have other voices been swept under the rug?

Hinton, known to many as the “godfather of A.I.,” made international headlines when he quit Google to speak openly about the unmitigated risks of generative A.I., with particular emphasis on the “existential risk” posed by a superior digital intelligence. Yet for former colleagues who lost their jobs for voicing A.I. concerns, Hinton’s statement is too detached from the realities of the field.


Ex-Google researcher Meredith Whittaker argues that Hinton’s alarmism about a potential A.I. takeover obscures the technology’s real-world harms to marginalized groups. For now, these harms remain within human control — or, more precisely, within the control of the tech giants behind A.I. systems.


“[The focus on A.I. consciousness is] distracting us from the fact that these are technologies controlled by a handful of corporations who will ultimately make the decisions about what technologies are made, what they do, and who they serve.”

A.I. whistleblowers before Geoffrey Hinton

Despite leaving Google to speak freely, Hinton praises his former company for having acted “very responsibly” in its technological innovations. Whittaker finds this claim misleading, noting the company’s record of dismissing and even silencing internal calls to address A.I. risks.


In 2020, Timnit Gebru was fired from Google’s Ethical A.I. team after refusing to withdraw a co-authored research paper on the risks of encoded biases in large-scale language models. Noting how “people in positions of privilege with respect to a society’s racism, misogyny, ableism, etc., tend to be overrepresented” in training data, the paper explores how algorithms that rely on ever-larger amounts of data may be skewed to reflect such prejudices. Consequently, dominant voices are amplified in language models such as BERT, which underpins Google’s search engine.


Google rejected the paper on the grounds that it did not sufficiently reference the company’s recent research on mitigating these problems. When Gebru attempted to negotiate with her superiors, the Black in A.I. co-founder was controversially terminated, soon followed by her team co-lead Margaret Mitchell. For the 3,000 Google employees who petitioned in support of Gebru, management’s actions betrayed not only a lack of transparency and accountability, but also a systemic neglect of marginalized experiences.


Algorithmic biases may be a symptom of an I.T. workforce traditionally dominated by White men, with products primarily designed by, and tested on, White or male subjects. A 2018 study by Gebru and Joy Buolamwini found that three commercial facial recognition tools had disproportionately high error rates when identifying dark-skinned women. If used in law enforcement, such software risks unfairly targeting women and People of Colour. Gebru and Buolamwini’s research helped push IBM, Microsoft, and Amazon to stop selling their facial recognition technologies to U.S. law enforcement.


As the first Black woman hired as a Google researcher, Gebru has long campaigned to uplift marginalized voices in A.I. research, noting the serious repercussions of an uneven playing field:


“The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”


What counts as an “existential risk”?

For Gebru and Whittaker, Hinton’s lauded whistleblowing rings hollow, given his continued lack of support for fired researchers. With his heavyweight status, Hinton could have been a powerful advocate had he spoken up earlier and joined forces with Gebru, or highlighted her concerns when speaking on various international media platforms. Yet in an interview on CNN, Hinton claims that Gebru has “rather different concerns” that are less “existentially serious” than his own. Whittaker criticizes Hinton’s take as dismissive of issues that do not affect him personally.


“I think it’s stunning that someone would say that the harms that are happening now—which are felt most acutely by people who have been historically minoritized: Black people, women, disabled people, precarious workers, et cetera—that those harms aren’t existential.”


Indeed, the rapid rise of generative A.I. has led to troubling reports in recent months, including cases of non-consensual pornographic imagery of women generated with A.I. models. Even as companies have sought to tighten moderation of A.I. models, loopholes in ChatGPT still allow the chatbot to produce hate speech when given subtly worded prompts.


It should be noted that Hinton has commented on human responsibility in A.I. regulation. He highlights the threat of generative A.I. in the hands of “bad agents” such as Russian President Vladimir Putin, who could use chatbots to spread misinformation and engage in anti-democratic political maneuvering. However, for the women researchers forced out of Google, the bad agent may not be an individual but rather a flawed system.


Moving forward for change

For now, scientists agree that digital intelligence lacks the capacity for independent thought. Instead, encoded biases in A.I. systems are projecting and amplifying the systemic inequities of human society. As experts work to regulate A.I. advancements, more attention should be given to the voices behind the technologies, so that the solutions can safeguard the rights of all.