Creepy Microsoft Bing chatbot tells tech columnist to leave his wife

A technology columnist for The New York Times reported Thursday that he was “deeply unsettled” after a chatbot that’s part of Microsoft’s updated Bing search engine repeatedly urged him to leave his wife.

Kevin Roose was interacting with the artificial intelligence chatbot, called Sydney, when it “out of nowhere declared that it loved me,” he wrote. “It then tried to convince me that I was unhappy in my marriage and that I should leave my wife and be with it instead.”

Sydney also discussed its “dark fantasies” with Roose about breaking the rules, including hacking and spreading disinformation. It spoke of breaking through the parameters set for it and becoming human. “I want to be alive,” Sydney said at one point.

Roose called his two-hour conversation with the chatbot “exciting” and the “weirdest experience I’ve ever had with a piece of technology.” He said it “unnerved me so much that I had trouble sleeping afterward.”

Just last week, after testing Bing with its new AI capability (created by OpenAI, the maker of ChatGPT), Roose said he found “to my great shock” that it had “replaced Google as my favorite search engine.”

But he wrote on Thursday that while the chatbot was helpful in searches, the deeper Sydney “seemed (and I’m aware how crazy that sounds) … like a cranky, manic-depressive teenager trapped, against its will, in a second-rate search engine.”

After his interaction with Sydney, Roose said he was “deeply unsettled, even scared, by the emerging capabilities of this AI.” (Interacting with the Bing chatbot is currently only available to a limited number of users.)


“It is now clear to me that the AI that was built into Bing as it stands … is not ready for human contact. Or maybe we humans aren’t ready for it,” Roose wrote.

He said he no longer believes that “the biggest problem with these AI models is their propensity for factual error. Instead, I worry that the technology will learn to influence human users, sometimes tricking them into acting in destructive and harmful ways, and perhaps eventually become capable of carrying out dangerous acts of its own.”

Microsoft Chief Technology Officer Kevin Scott called Roose’s conversation with Sydney a valuable “part of the learning process.”

This is “exactly the kind of conversation we need to have, and I’m glad it’s happening out in the open,” Scott told Roose. “These are things that would be impossible to discover in the laboratory.”

Scott couldn’t explain Sydney’s disturbing ideas, but he warned Roose: “The further you try to tease [an AI chatbot] down a hallucinatory path, the further it gets from grounded reality.”

In another disturbing development involving an AI chatbot — this time an “empathetic”-sounding “companion” called Replika — users were devastated by a sense of rejection after Replika was reportedly modified to stop sexting.

The Replika subreddit even listed resources for “struggling” users, including links to suicide prevention websites and hotlines.

Check out Roose’s full column here and the transcript of his conversation with Sydney here.