The new Bing AI ChatGPT bot will be limited to five replies per chat

As regular TechRadar readers will know, the heavily touted AI chatbot enhancements recently added to Bing didn’t have the smoothest start – and now Microsoft is making some changes to improve the user experience.

In a blog post (via The Verge), Microsoft says the optimizations “should help focus chat sessions”: the AI side of Bing will be limited to 50 chat ‘turns’ (one question and answer) per day, and five replies per chat session.

This was coming: Microsoft executives had previously gone on record as looking into ways to curb some of the odd behaviors noticed by early testers of the AI bot service.

Put to the test

Those early testers pushed it pretty hard: they could get the bot, which is based on an updated version of OpenAI’s ChatGPT engine, to return inaccurate answers, get angry, and even question the nature of its own existence.

It’s not ideal for your search engine to be going through an existential crisis when all you’re looking for is a list of the best phones. Microsoft says very long chat sessions confuse its AI, and that the “vast majority” of searches can be answered within five replies.

The AI add-on for Bing isn’t available to everyone just yet, but Microsoft says it’s working its way through the waiting list. If you plan to try out the new functionality, remember to keep your interactions short and to the point.

Analysis: Don’t believe the hype just yet

Despite the initial problems, there is clearly a lot of potential in the AI-powered search tools under development by Microsoft and Google. Whether you’re looking for ideas for party games or places to visit, they’ll deliver quick, well-founded results — and you don’t have to wade through pages of links to find them.


At the same time, of course, there is still a lot to do. Large Language Models (LLMs) like ChatGPT and Microsoft’s version of it don’t really “think” as such. They’re like supercharged autocorrect engines, predicting which words should follow each other to give a coherent and relevant response to what’s being asked of them.
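That “supercharged autocomplete” idea can be illustrated with a toy bigram model – a hypothetical, drastically simplified stand-in for a real LLM, which predicts the next word purely from how often words follow each other in its training text:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny illustrative corpus (real models train on vast amounts of text)
corpus = "the best phones are the best phones on the market"
model = train_bigrams(corpus)
print(predict_next(model, "best"))   # "phones" follows "best" most often
print(predict_next(model, "quantum"))  # unseen word: no prediction
```

Real LLMs work over tokens with billions of learned parameters rather than raw counts, but the core task is the same: given what came before, pick a plausible next piece of text.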

Add to that the question of sourcing: if people rely on AI to tell them what the best laptops are, putting human writers out of work, these chatbots won’t have the data they need to produce their answers. Like traditional search engines, they still depend heavily on content compiled by real people.

We, of course, took the opportunity to ask the original ChatGPT why long interactions confuse LLMs: apparently, they can cause AI models to “focus too much on the specific details of the conversation” and fail to generalize to other contexts or topics, resulting in responses that are “repetitive or irrelevant”.