We need to create guidelines for AI

What if the only one you could truly trust was something or someone close enough to physically touch? This could be the world that AI is taking us into. A group of Harvard academics and artificial intelligence experts just published a report aimed at setting ethical guardrails around the development of potentially dystopian technologies like Microsoft-backed OpenAI’s seemingly sentient chatbot, a new and “improved” (depending on your point of view) version of which, GPT-4, debuted last week.

The group, which includes Glen Weyl, an economist and Microsoft researcher, Danielle Allen, a Harvard philosopher and director of the Safra Center for Ethics, and many other industry figures, is sounding the alarm about “the profusion of experimentation with decentralized social technologies”. This includes the development of “highly persuasive machine-generated content (e.g. ChatGPT)” that threatens to disrupt the fabric of our economy, politics and society.

They believe we have reached a “constitutional moment” of change that requires an entirely new regulatory framework for such technologies.

Some of the risks of AI, like a Terminator-style future where the machines decide the humans have had their day, are well-trodden terrain in sci-fi – which, it should be noted, has a pretty good track record of predicting where science would go over the past 100 years or so. But there are others that are less well understood. For example, if AI can now generate a completely untraceable fake ID, what use is the legal and regulatory framework that relies on such documents to enable us to drive, travel, or pay taxes?


One thing we already know is that AI could allow bad actors to impersonate anyone, anywhere, anytime. “One has to assume that in this new era, deception will become much cheaper and more common,” says Weyl, who co-authored an online book with Taiwan’s Digital Minister Audrey Tang. It lays out the risks that AI and other advanced information technologies pose to democracy, specifically that they put the problem of disinformation on steroids.

The potential impacts span all aspects of society and the economy. How do we know that digital transfers are safe or even authentic? Are online notaries and contracts reliable? Will fake news, already a huge problem, become essentially undetectable? And what about the political ramifications of the incalculable number of job disruptions, a topic that academics Daron Acemoglu and Simon Johnson will examine in a very important book later this year?

One can easily imagine a world where governments have difficulty keeping pace with these changes and, as the Harvard report puts it, “existing, grossly imperfect democratic processes are proving impotent . . . and are therefore abandoned by increasingly cynical citizens”.

We’ve already seen hints of this. The private Texas town being built by Elon Musk to house his SpaceX, Tesla and Boring Company employees is just the latest iteration of Silicon Valley’s libertarian fantasy, with the wealthy taking refuge in private facilities in New Zealand or moving their wealth and businesses to extra-state jurisdictions and “special economic zones”. Wellesley historian Quinn Slobodian looks at the rise of such zones in his new book, Crack-Up Capitalism.


In this scenario, tax revenues fall, the labor share erodes, and the resulting zero-sum world exacerbates an “exitocracy” of the privileged.

Of course, the future could also be a lot brighter. AI has incredible potential to increase productivity and innovation, and could even enable us to redistribute digital wealth in new ways. But it is already clear that companies will not stop developing cutting-edge Web3 technologies, from AI to blockchain, any time soon. They see themselves in an existential race for the future with each other and with China.

So they’re looking for ways to sell not only AI but also the security solutions for it. In a world where trust can’t be digitally authenticated, AI developers at Microsoft and other companies, for example, are pondering whether there might be a way to create more advanced versions of “shared secrets” (things that only you and another person close to you could know) digitally and at scale.

However, that sounds a bit like solving the problem of technology with more technology. In fact, the best solution to the AI problem, if there is one, might be analog.

“What we need is a framework for more prudent vigilance,” Allen says, citing the 2010 report by the Presidential Commission on Bioethics, issued in response to the rise of genomics. It created guidelines for responsible experimentation that allowed for safer technological development (although one could point to new information about a possible laboratory leak behind the Covid-19 pandemic and say that no framework is foolproof internationally).


For now, instead of either banning AI or waiting for a perfect method of regulation, we could start by forcing companies to disclose what experiments they are conducting, what worked, what didn’t, and where unintended consequences might be emerging. Transparency is the first step toward ensuring that AI does not get the better of its makers.

[email protected]