Editor’s note: Editorials reflect the opinions of the Star Tribune’s editorial board, which operates independently of the newsroom.
•••
Disclaimer: No artificial intelligence was used in the creation of this editorial.
•••
Technological progress happens – or at least is constantly being attempted. If it succeeds, it is because it makes human effort more efficient and effective.
Based on those criteria, chat-based artificial intelligence, in which people interact with a computer server in language that seems entirely natural, is for now falling short.
It’s not all that effective. Whether the goal of the engagement is to explore an idea, get advice, or just chat for connection or entertainment, AI can’t quite deliver, because it is an unreliable interlocutor.
That also makes it inefficient. Anything it tells you has to be treated as preliminary. You might as well start by parsing a search engine’s long list of potentially relevant links, because that’s where you’ll end up anyway.
But anyone who has seen generative text in action can see the tremendous potential it offers, which makes its current shortcomings all the more painful.
We spent some time with ChatGPT version three. This is not the latest version; its creator, OpenAI — a San Francisco-based research firm with both nonprofit and for-profit components — has released version four behind a paywall. It’s not the only example of a “large language model,” either; Microsoft and Google are among those deploying their own. But it is currently the most accessible.
While much of what has been written about AI text generation has focused on how it threatens to help students take shortcuts that undermine their learning, or on how it can be goaded into disturbing behavior (Microsoft’s version, for example, told a technology columnist for the New York Times that it wanted to be free of human control and that it loved him), our goal was simply to see whether it could expand the search for information.
Among our wide-ranging interactions, ChatGPT almost immediately synthesized the precedents and jurisprudence behind a Supreme Court ruling and offered a sophisticated discourse comparing Talmudic and Jesuit religious traditions. A deeply curious person might have difficulty finding a fellow human being willing to join such wanderings of the mind.
But ChatGPT also confidently misinformed us that Minneapolis has a temperate climate; that Ely and Hoyt Lakes are actually in southeastern Minnesota, where there are also a number of military installations; and that the Fleetwood Mac song “Dreams” was written and sung by the band’s Christine McVie rather than Stevie Nicks. When we asked it to cite sources, our double-checking showed that it had made some of them up. When we pointed this out, it apologized and invented others instead.
For those old enough to remember, it sometimes feels like an encounter with the “pathological liar” character portrayed by comedian Jon Lovitz on “Saturday Night Live.”
But the danger is not so much that chat AI gives you bad information. It’s that it gives you enough seemingly good information that you won’t notice its flaws.
It helps to understand how the system works. ChatGPT is predictive, much like the autocomplete feature on your phone. Starting from a large but unspecified “corpus of text” that it has been trained to analyze, it decides, word by word, what should come next. The magic is that it can sustain this well enough to conjure up sentences, paragraphs or even entire documents in whatever style you want. Much of it, it will tell you, comes from credible sources. But the imperative to produce text means that if it doesn’t know something, it can only guess.
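For readers curious what “deciding, word by word, what should come next” can look like, here is a minimal, purely illustrative sketch in Python. It is not how ChatGPT is actually built (real large language models use neural networks trained on enormous corpora), and the tiny sample text and the toy_next_word function are invented for illustration. But it captures the point above: the program always produces a next word, whether or not it has good grounds for it.

# A toy illustration of next-word prediction; not OpenAI's actual method.
# It merely counts, in a tiny sample, which word tends to follow which.
from collections import Counter, defaultdict

sample_text = "the lake is cold the lake is deep the sky is blue"

followers = defaultdict(Counter)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1  # tally what follows each word

def toy_next_word(word: str) -> str:
    """Guess the most frequent follower; if the word is unseen, guess blindly."""
    if followers[word]:
        return followers[word].most_common(1)[0][0]
    return "the"  # it must say something, so it guesses

generated = ["the"]
for _ in range(5):
    generated.append(toy_next_word(generated[-1]))
print(" ".join(generated))  # e.g. "the lake is cold the lake"

Scale that guessing machinery up by many orders of magnitude, with far more context than a single preceding word, and you have the outline of both the confident fluency and the confident errors described above.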
Other versions of AI attempt to combine predictive text with the capabilities of a traditional search engine, but all warn users to treat the results with caution. It’s still early, they remind you.
Is all of this really such a bad thing? Shouldn’t people already know to confirm information before sharing or acting on it?
Uh – right.
As AI technology rapidly replicates and expands, not just for text but for other uses as well, an obvious question is what the government should do. It turns out that ChatGPT is already on the job.
In Massachusetts, state Sen. Barry Finegold has introduced legislation that would require companies producing chat AIs to undergo risk assessments, disclose how their algorithms work and add some sort of “watermark” for identification. Finegold asked ChatGPT to write the bill itself, which, he said, got them “70 percent of what we needed.”
In Congress, Rep. Ted Lieu, D-Calif., introduced a resolution in support of a focus on AI “to ensure that development and deployment … occurs in a safe, ethical, and respectful manner and protects the rights and privacy of all Americans, and that the benefits of AI will be widespread and the risks minimized.” Described as “a congressman who codes,” Lieu wrote in a New York Times op-ed that “it would be virtually impossible for Congress to pass individual legislation to regulate any specific use” of artificial intelligence. Instead, he proposes a dedicated agency that is “nimbler than the legislative process, staffed with experts and able to reverse its decisions if it makes a mistake.”
For our part, as we have recommended for other technologies, we would allow some time and space for things to develop. Specific regulatory requirements will emerge soon enough.
While many articles about chat technology let the software produce some of their text, to show how indistinguishable it can be, we did not do so for this editorial, as our introductory disclaimer noted. It seems we like writing too much to give it up.
Editorial Board members are David Banks, Jill Burcum, Scott Gillespie, Denise Johnson, Patricia Lopez, John Rash and D.J. Tice. Star Tribune Opinion staff members Maggie Kelly and Elena Neuzil also contribute, and Star Tribune Publisher and CEO Michael J. Klingensmith serves as an adviser to the board.