Timnit Gebru on the dangers of artificial general intelligence

Timnit Gebru ’08 M.S. ’10 Ph.D. ’17, a leader in the movement for diversity in technology and artificial intelligence, spoke on the dangers of AI at an event sponsored by the Symbolic Systems Program on Wednesday night. Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR) and co-founder of Black in AI, a non-profit organization supporting Black visibility and inclusion in the field.

In December 2020, Gebru was fired from her position as co-lead of Google’s Ethical AI research team after refusing to retract an unpublished paper on the dangers of large language models. These models, which Gebru argued carry risks of financial harm and bias, are trained on large datasets to translate and generate text.

She founded DAIR in 2021 with the aim of mitigating the harms of AI. According to its website, DAIR is “rooted in the belief that AI is not inevitable, its harms are avoidable, and when its production and deployment involve different perspectives and conscious processes, it can be beneficial.”

Gebru’s lecture focused on the risks of artificial general intelligence (AGI). Whereas conventional AI systems are designed to perform specific tasks, AGI refers to a system intended to perform whatever task it is given. “Why try to build an undefined system that sounds like a god?” she asked the audience of approximately 150 Stanford students and affiliates. “[Big tech] build systems as if they had a model for everything. They can’t, and even if they could, is that really what we want?”

Gebru has published articles on the exploitation of workers behind AI, as well as the potential for abuse of the burgeoning technology. She pointed to text-to-image models being used to run harassment campaigns, create “deep fakes” and overly sexualize women and girls.


“My question is, utopia for whom? Who gets this utopian life that [big tech] is promising [will come from AI]?” Gebru said.

Gebru drew parallels between AGI, eugenics and transhumanism (the latter referring to the movement to enhance human longevity and cognition). “[AGI] has roots in first-wave eugenics […] Second, both [AGI and eugenics] talk about utopia and apocalypse,” she said. Gebru has expressed concern that AGI will be promoted by “paradise engineers” as a means to end all human suffering, while others fear that humans will lose control of AGI and find themselves in an apocalyptic scenario. She argued that transhumanism is inherently discriminatory because it defines what an enhanced human looks like, creating a hierarchical conception of humanity that mimics first-wave eugenics.

According to Gebru, AI should consist of “well-outlined, well-defined systems.” She said the focus should be shifted away from AGI. “For me, trying to build AGI is an inherently unsafe practice […] We build what we want to build and we have to remember that,” Gebru said.

Audience members, including Carolyn Qu ’24, commented on Gebru’s influence and insight across the diverse fields of technology and AI. “As a Symbolic Systems student, having her here is really valuable because I feel like there’s this unique intersection of technology and humanism […] she [has] experience [with],” Qu said.

Tiffany Liu ’23 and a team of fellow Symbolic Systems students had been organizing the event since the start of their senior year. “We really felt that a lot of the work she’s doing, particularly at DAIR, aligns with our vision of what Symbolic Systems students could participate in,” Liu said.


Gebru stressed the importance of tackling unethical practices in big tech and AI, including worker exploitation. “In terms of these tools, if they have to be ethical, many of these organizations will decide it’s not worth it […] I think the first step is to compensate everyone appropriately,” Gebru said. “What we should fight is the creation of unsafe products, the exploitation of workers and the centralization of power.”