Empower users and employees to fight AI threats, says Signal President

SAN JOSE, June 5 — Privacy laws and labor organizing offer the best chance to curb the growing power of big tech companies and to counter the key threats posed by artificial intelligence, said a leading AI researcher and executive.

Current efforts to regulate AI risk being overly influenced by the tech industry itself, said Meredith Whittaker, president of the Signal Foundation, ahead of RightsCon, a major digital rights conference in Costa Rica this week.

“If we have a chance for meaningful regulation, it will come from building power and making demands from the people who are most at risk,” she told the Thomson Reuters Foundation. “For me, these are the front lines.”

More than 350 top AI executives, including OpenAI CEO Sam Altman, joined pundits and professors last week in warning of a “risk of extinction from AI” that they said policymakers should treat on a par with the risks of pandemics and nuclear war.

But for Whittaker, these doomsday predictions distract from the harm that certain AI systems are already causing.

“Many, many researchers have carefully documented these risks and stacked up the evidence,” she said, citing work by AI researchers such as Timnit Gebru and Joy Buolamwini, who were among the first to document racial bias in AI-powered facial recognition systems more than five years ago.

A recent report on AI harms by the Electronic Privacy Information Center (EPIC) cites the labor abuses suffered by AI annotators in Kenya who help build predictive models, the environmental cost of the computing power needed to build AI systems, and the proliferation of AI-generated propaganda, among other concerns.

Curbing power

When Whittaker left her job as an AI researcher at Google in 2019, she wrote an internal memo warning about the direction AI development was taking.

“The use of AI for social control and repression is already emerging,” said Whittaker, who had clashed with Google over its AI contract with the US military and its handling of sexual harassment claims.

“We only have a short window of opportunity to act and build real guardrails for these systems before AI is integrated into our infrastructure and it’s too late.”

Google didn’t respond to a request for comment.

Whittaker sees the current AI boom as a derivative of the surveillance business model, which has monetized the web’s vast trove of user-generated information to build powerful predictive models for a small group of companies.

According to research by the Washington Post, popular generative AI tools like ChatGPT are trained on vast amounts of internet data, ranging from Wikipedia entries to patent databases and World of Warcraft gamer forums.

Social media companies and other tech firms also build AI and predictive systems by analyzing the behavior of their users.

Whittaker hopes the encrypted messaging app Signal and other projects that don’t collect or harvest their users’ data can help curb the concentration of power among a few powerful AI developers.

For Whittaker, the rise of powerful AI tools reflects a growing concentration of power in a small group of technology companies able to make the sizeable investments in data collection and computing power that such systems require.

“We have a handful of companies that…arguably have more power than many nation-states,” said Whittaker, who will be speaking about privacy-focused apps and encryption at RightsCon, hosted by digital rights group Access Now.

“We are giving away more and more decision-making authority, more and more power over our future – who benefits and who loses – to a small group of companies.”

Pushing back

Whittaker hopes for more regulatory oversight of AI but worries that regulators are being overly influenced by the industry itself.

In the US, a group of federal agencies announced in April that they would monitor the burgeoning AI space for bias in automated systems and misleading claims about AI systems’ capabilities.

In May, the EU agreed on a tougher draft law, known as the AI Act, that would classify certain types of AI as “high risk” and require companies to share data and risk assessments with regulators.

“I think everyone’s fighting,” said Whittaker, who was a senior advisor on AI at the US Federal Trade Commission before joining Signal in 2022.

She sees promise in privacy-focused regulation that limits the amount of data companies can collect, depriving AI models of the raw material they need to build ever more powerful systems.

Whittaker also cited the work of union organizers, including recent calls from the Writers Guild of America (WGA) and Screen Actors Guild (SAG) to restrict the use of generative AI technologies like ChatGPT in their workplaces. — Thomson Reuters Foundation