As Google unveils a $25 million project to focus its A.I. on United Nations sustainable development goals, the company’s new “technology and society” czar says the firing of an A.I. ethics researcher was ‘unfortunate’

James Manyika, a top Google executive who oversees the company’s efforts to think deeply about the impact of its technology on society, says the tech giant’s 2020 firing of Timnit Gebru, a prominent researcher working on the ethics of artificial intelligence and one of the few Black women in the company’s research department, was “unfortunate.”

“I find it unfortunate that Timnit Gebru left Google under these circumstances, you know, maybe it could have been handled differently,” he said.

Manyika was not working at Google when the company fired Gebru. Google hired him in January for a new role as senior vice president of technology and society, reporting directly to Alphabet’s chief executive officer, Sundar Pichai. Originally from Zimbabwe, Manyika is a distinguished computer scientist and roboticist. He spent more than two decades as a top partner at McKinsey & Co., advising Silicon Valley companies and serving as director of the company’s in-house think tank, the McKinsey Global Institute. He was a member of President Barack Obama’s Global Development Council and is currently Vice Chair of the US National AI Advisory Committee, which advises the Biden administration on AI policy issues.

Manyika spoke with Fortune ahead of Google’s announcement today of a $25 million pledge aimed at advancing the United Nations Sustainable Development Goals by giving non-governmental groups access to AI; organizations can apply for funding under the program.

Google made the announcement to coincide with the opening of the UN General Assembly this week in New York. The company said that in addition to the money, it would support the organizations it selects for grants with engineers and machine learning researchers from its charitable arm, who will work on the projects for up to six months.

The company started supporting NGOs working on the UN Sustainable Development Goals in 2018. Since then, it claims to have helped more than 50 organizations in almost every region of the world, helping groups monitor air quality, develop new antimicrobials, and work on ways to improve the mental health of LGBTQ+ youth.

Manyika’s hiring comes as Google has sought to improve its image, both among the general public and among its own employees, when it comes to the company’s commitment to technology ethics and racial diversity. Thousands of Google employees signed an open letter protesting Gebru’s firing, and Pichai apologized, saying the way the company handled the matter had “led some in our community to question their place at Google.” Nonetheless, months later the company also fired Gebru’s colleague and co-lead of the AI ethics group, Margaret Mitchell. At the time, it said it was restructuring its teams working on ethics and responsible AI. These teams now report to Marian Croak, a Google vice president of engineering, who in turn reports to Jeff Dean, Google’s head of research. Croak and Manyika are both Black.

Manyika says that since joining Google, he has been impressed by the seriousness with which the company takes its commitment to responsible AI research and deployment, and by its processes for addressing ethical concerns. “It strikes me how much care and discussion there is about using technology and trying to do it right,” he said. “I wish the outside world knew more about it.”

Manyika says that while it’s important to be vigilant about ethical concerns surrounding AI, it’s also risky to let fear of potential negative consequences blind people to the tremendous benefits AI could bring, especially to disadvantaged groups. He made clear that he is basically a techno-optimist. “There’s always been this asymmetry: we move past the amazing potential benefits very quickly, except maybe for a few people who keep talking about them, and we focus on all these concerns and downsides and complications,” he said. “Well, half of them are really complications of society itself, aren’t they? And yes, some of them arise because the technology doesn’t quite work as intended. But we very quickly focus on that side of things without asking: are we actually helping people? Are we providing useful systems? I think it will be extraordinary how helpful these systems will be to complement and augment what people are doing.”

He said a good example is ultra-large language models, a type of AI that has driven amazing advances in natural language processing in recent years. Gebru and a number of other ethics researchers have criticized these models, which Google has invested billions of dollars in creating and commercializing. Google’s refusal to allow her and her team to publish a research paper highlighting ethical concerns about these large language models sparked the incident that led to her dismissal.

Ultra-large language models are trained on huge amounts of written material from the Internet. The models can absorb racial, ethnic, and gender stereotypes from this material and then perpetuate those biases when used. They can trick people into thinking they are interacting with a person rather than a machine, increasing the risk of deception. They can be used to spread misinformation. And while some computer scientists see ultra-large language models as a path toward more human-like AI, long considered the holy grail of AI research, many others are skeptical. The models also require enormous computing power to train, and Gebru and others have criticized the associated carbon footprint. Faced with all of these concerns, one of Gebru’s collaborators on her large language model research, Emily Bender, a computational linguist at the University of Washington, has suggested that companies stop developing ultra-large language models.

Manyika said he was mindful of all of those risks but did not agree that work on the technology should stop. He said Google is taking many steps to limit the dangers of the software. For example, he said the company runs filters that check the output of large language models for toxic language and factual accuracy. These filters appear to be effective in tests so far: when interacting with Google’s most advanced chatbot, LaMDA, people flagged less than 0.01% of the chatbot’s responses for using toxic language. He also said Google has been careful not to publicly release its most advanced language models because the company is concerned about potential abuse. “If you want to build powerful things, do the research, do the work to try and understand how these systems work, rather than throwing them out into the world and seeing what happens,” he said.

But he said not working on the models would mean depriving people, including those most in need, of vital benefits. For example, such models have enabled, for the first time, automatic translation of “resource-poor” languages for which relatively little written material exists in digital form. (Some of these languages are only spoken, not written; others have a written form, but little material has been digitized.) They include Luganda, spoken in East Africa, and Quechua, spoken in South America. “These are languages spoken by many people, but they are languages with few resources,” Manyika said. “Before these large language models and their capabilities, it would have been extraordinarily difficult, if not impossible, to translate from these resource-poor languages.” Translation lets native speakers connect to the rest of the world via the internet and communicate globally in a way they never could before.

Manyika also highlighted many of the other ways Google is using AI to benefit society and global development. He pointed to the company’s work in Ghana to more accurately predict locust outbreaks. In Bangladesh and India, Google is working with governments to better predict floods and deliver advance warnings to people’s mobile phones, alerts that have already saved lives. He also pointed to DeepMind, the London-based AI research firm owned by Alphabet, which recently used AI to predict the structure of almost every known protein and published the results in an open-access database. He said such fundamental scientific advances would ultimately lead to a better understanding of diseases and better medicines, and could have a major impact on global health.

Sign up for the Fortune Features email list so you don’t miss our biggest features, exclusive interviews, and investigations.