“Until meaningful government safeguards are put in place to protect the public from the harms of generative AI, we need a break.”
So concludes a report on the dangers of artificial intelligence (AI) published by Public Citizen on Tuesday. Titled "Sorry in Advance! Rapid Rush to Deploy Generative AI Risks a Wide Array of Automated Harms," the analysis by researchers Rick Claypool and Cheyenne Hunt aims to "reframe the conversation about generative AI to ensure the public and policymakers have a say in how these new technologies could upend our lives."
Following OpenAI’s release of ChatGPT in November, generative AI tools “have garnered a lot of buzz — particularly among big tech companies that are best positioned to benefit from them,” the report notes. “The most enthusiastic proponents say AI will change the world in ways that will make everyone rich — and some critics say it could kill us all. Aside from frightening threats that could arise as the technology advances, there are real-world harms that the rush to release and monetize these tools can cause — and, in many cases, is already causing.”
Claypool and Hunt categorized these damages into “five major problem areas”:
Damaging Democracy: Spambots spreading misinformation are nothing new, but generative AI tools enable bad actors to mass-produce deceptive political content. Increasingly powerful AI tools for audio and video manipulation are making it harder to distinguish authentic content from synthetic content.

Consumer Concerns: Companies seeking to maximize profits with generative AI are using these tools to devour user data, manipulate consumers, and concentrate advantages among the biggest corporations. Scammers are using them to engage in increasingly sophisticated rip-off schemes.

Worsening Inequality: Generative AI tools risk perpetuating systemic biases such as racism and sexism. They offer bullies and abusers new avenues for harming victims and, if their widespread use proves consequential, risk significantly accelerating economic inequality.

Undermining Worker Rights: The tools depend on workers hired abroad to filter out disturbing and offensive content. Automating media creation, as some AIs do, risks diminishing human-performed media production work and replacing those jobs with efficiency gains.

Environmental Concerns: Mass deployment is expected to require some of the biggest tech companies to increase their computing power — and therefore their carbon footprint — four- or five-fold.
In a statement, Public Citizen warned that “companies are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated.”
“History provides no reason to believe that corporations can self-regulate known risks — especially since many of those risks are as much a part of generative AI as corporate greed,” the statement continued. “Companies rushing to adopt these new technologies are gambling with people’s lives and livelihoods and arguably with the very foundations of a free society and a livable world.”
On Thursday, April 27, Public Citizen is hosting a hybrid in-person/Zoom conference in Washington, D.C., where U.S. Rep. Ted Lieu (D-Calif.) and 10 other panelists will discuss the threats posed by AI and how to rein in the fast-growing but virtually unregulated industry. Interested parties must register by this Friday.
Demands for regulation of AI are increasing. Last month, Geoffrey Hinton, dubbed the “godfather of artificial intelligence,” likened the potential impact of the rapidly advancing technology to “the industrial revolution, or electricity, or maybe the wheel.”
When asked by CBS News’ Brook Silva-Braga about the possibility of technology “wiping out humanity,” Hinton warned that “it’s not unthinkable.”
This frightening potential doesn’t necessarily lie in existing AI tools like ChatGPT, but rather in the so-called “Artificial General Intelligence” (AGI), through which computers develop and implement their own ideas.
“Until recently, I thought it would be 20 to 50 years before we had general purpose AI,” Hinton told CBS News. “Now I think it could be 20 years or less.” Finally, Hinton admitted he wouldn’t rule out the possibility of AGI arriving within five years — a big departure from a few years ago when he “would have said, ‘No way’.”
“We have to think carefully about how we can control this,” Hinton said. When asked by Silva-Braga if that was possible, Hinton said: “We don’t know, we haven’t been there but we can try.”
The AI pioneer is far from alone. In February, OpenAI CEO Sam Altman wrote in a company blog post, “The risks could be extraordinary. A misaligned super-intelligent AGI could do great harm to the world.”
More than 26,000 people have signed a recent open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, OpenAI’s latest chatbot, although Altman is not among them.
“Powerful AI systems should only be developed when we are sure that their impact is positive and their risks are manageable,” the letter says.
While AGI may still be a few years away, Public Citizen’s new report makes it clear that existing AI tools — including chatbots that spit out lies, face-swapping apps that generate fake videos, and cloned voices that commit fraud — are already causing or threatening to cause serious harm. These include deepening inequality, undermining democracy, displacing workers, exploiting consumers, and worsening the climate crisis.
These threats “are all very real and are highly likely to emerge if organizations are allowed to deploy generative AI without enforceable guard rails,” write Claypool and Hunt. “But there’s nothing inevitable about them.”
Government regulation can prevent companies from deploying the technologies too quickly (or block them altogether if they prove unsafe). It can set standards to protect people from the risks. It can impose obligations on companies using generative AI to avoid identifiable harms, respect the interests of communities and creators, pre-test their technologies, and accept responsibility and liability if things go wrong. It can demand that justice be built into the technologies. And it can insist that, if generative AI does increase productivity and displace workers, the economic benefits be shared with those affected rather than concentrated among a small circle of companies, executives, and investors.
Amid “growing regulatory interest” in an AI “accountability mechanism,” the Biden administration announced last week that it is seeking public input on measures that could be implemented to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”
According to Axios, Senate Majority Leader Chuck Schumer (D-N.Y.) is “taking early steps toward legislation to regulate artificial intelligence technology.”
In the words of Claypool and Hunt: “We need strong safeguards and government regulation — and we need them before companies deploy AI technology widely. Until then, we need a break.”