Big tech companies have shed staff from teams involved in evaluating ethical issues related to the use of artificial intelligence, raising concerns about the safety of the new technology given its widespread use in consumer products.
Microsoft, Meta, Google, Amazon and Twitter are among the companies that have eliminated members of their “responsible AI teams” that advise on the safety of consumer products that use artificial intelligence.
The affected headcount is in the dozens, a small fraction of the tens of thousands of tech workers laid off in response to a broader industry downturn.
The companies said they remained committed to launching secure AI products. But experts say the cuts remain a concern as potential abuses of the technology are uncovered while millions of people begin experimenting with AI tools.
Their concerns have grown following the success of ChatGPT, the chatbot launched by Microsoft-backed OpenAI, which prompted other tech companies to release rivals such as Google’s Bard and Anthropic’s Claude.
“It’s shocking how many members of responsible AI teams are being let go at a time when, arguably, these teams are needed more than ever,” said Andrew Strait, a former ethics and policy researcher at Alphabet-owned DeepMind and associate director of the research organization Ada Lovelace Institute.
Microsoft in January disbanded its entire Ethics and Society team, which led the company’s initial work in this area. The tech giant said the cuts totaled fewer than 10 roles and Microsoft still has hundreds of employees in its Responsible AI office.
“We’ve significantly expanded our responsible AI efforts and worked hard to institutionalize them across the organization,” said Natasha Crampton, Microsoft’s chief responsible AI officer.
Twitter has cut more than half of its headcount, including its small ethical AI team, under its new billionaire owner Elon Musk. The team’s previous work included fixing a bias in Twitter’s image-cropping algorithm that appeared to favor white faces when choosing how to crop photos on the social network. Twitter did not respond to a request for comment.
Twitch, Amazon’s streaming platform, cut its ethical AI team last week, making all teams working on AI products responsible for issues related to bias, according to a person familiar with the move. Twitch declined to comment.
In September, Meta dissolved its responsible innovation team of about 20 engineers and ethicists tasked with evaluating civil rights and ethics on Instagram and Facebook. Meta did not respond to a request for comment.
“Responsible AI teams are among the only internal bastions Big Tech has to make sure that the people and communities affected by AI systems are in the minds of the engineers who build them,” said Josh Simons, a former Facebook AI ethicist and author of Algorithms for the People.
“The speed at which they are being dismantled leaves Big Tech’s algorithms at the mercy of advertising imperatives, undermining the well-being of children, vulnerable people and our democracy.”
Another concern is that the large language models underlying chatbots such as ChatGPT are known to “hallucinate” – to make false statements as if they were facts – and can be used for nefarious purposes such as spreading disinformation and cheating on exams.
“What we’re starting to realize is that we can’t predict all the things that are going to happen with these new technologies, and it’s crucial that we pay some attention to them,” said Michael Luck, director of King’s College London’s Institute for Artificial Intelligence.
The role of internal AI ethics teams has come under scrutiny amid debate over whether human intervention in algorithms should be more transparent, with input from the public and regulators.
In 2020, Meta-owned photo app Instagram set up a team to examine “algorithmic justice” on its platform. The IG Equity team was formed in the wake of the police killing of George Floyd, out of a desire to adjust Instagram’s algorithm to encourage discussion of race and highlight the profiles of marginalized users.
Simons said: “They have the power to step in and change those systems and biases, [and] explore technological interventions that promote fairness . . . but engineers should not be deciding how society is shaped.”
Some employees tasked with ethical AI oversight at Google have been laid off as part of broader cuts at Alphabet of more than 12,000 employees, according to a person close to the company.
Google would not specify how many roles had been eliminated but said responsible AI remains “a top priority at the company and we continue to invest in these teams.”
Tensions between the pace of companies’ AI development and its impact and safety have surfaced at Google before. Two leaders of its AI ethics research, Timnit Gebru and Margaret Mitchell, left the company in 2020 and 2021, respectively, after high-profile disputes with it.
“It’s problematic when responsible AI practices are deprioritized in favor of competition or a push to market,” said Strait of the Ada Lovelace Institute. “And unfortunately, what I’m seeing now is exactly that happening.”
Additional reporting by Hannah Murphy in San Francisco