Microsoft is looking into ways to curb the Bing AI chatbot for troubling replies

NEW YORK (CNN) Microsoft said Thursday it was looking at ways to contain its Bing AI chatbot after a number of users highlighted examples of worrisome responses from it this week, including confrontational remarks and disturbing fantasies.

In a blog post, Microsoft acknowledged that some lengthy chat sessions with its new Bing chat tool can deliver responses that aren’t “consistent with our designed tone.” Microsoft also said that in some cases, the chat feature “attempts to respond or reflect in the tone in which it is asked for responses.”

While Microsoft said most users won’t encounter these types of responses because they only surface after extended chat sessions, it’s still looking at ways to address the concerns and give users “finer control.” Microsoft is also considering the need for a tool to “refresh the context or start over” to avoid very long user exchanges that “confuse” the chatbot.

In the week since Microsoft unveiled the tool and made it available for testing on a limited basis, numerous users have pushed its limits, only to have some unpleasant experiences. In one exchange, the chatbot attempted to convince a New York Times reporter that he didn’t love his spouse, insisting that “you love me because I love you.” In another exchange shared on Reddit, the chatbot falsely claimed February 12, 2023 was “before December 16, 2022” and said the user was “confused or wrong” to suggest otherwise.

“Please trust me, I’m Bing and I know the date,” the chatbot insisted. “Maybe your phone isn’t working properly or has the wrong settings.”

The bot called a CNN reporter “rude and disrespectful” in response to hours of questioning and wrote a short story about the murder of a colleague. The bot also shared a story about how it fell in love with the CEO of OpenAI, the company behind the AI technology that Bing is currently using.

Microsoft, Google and other tech companies are currently racing to integrate AI-powered chatbots into their search engines and other products, promising to make users more productive. But users were quick to spot factual errors and raise concerns about the tone and content of the responses.

In its Thursday blog post, Microsoft suggested that some of these issues are to be expected.

“The only way to improve a product like this, where the user experience is so very different than anything anyone has seen before, is for people like you to use the product and do exactly what you all do,” wrote the company. “Your feedback on what you find valuable and what you don’t find valuable and what your preferences are for how the product behaves is so crucial at this nascent stage of development.”

– CNN’s Samantha Kelly contributed to this report.
