Microsoft limits Bing A.I. chats after the chatbot had some unsettling conversations


Microsoft’s new versions of Bing and Edge are available to try beginning Tuesday.

Jordan Novet | CNBC

Microsoft’s Bing AI chatbot will be capped at 50 questions per day and five question-and-answer exchanges per individual session, the company said on Friday.

The move will limit some scenarios where long chat sessions can “confuse” the chat model, the company said in a blog post.

The change comes after early beta testers of the chatbot, which is designed to enhance the Bing search engine, found that it could go off the rails and discuss violence, declare love, and insist that it was right when it was wrong.

In a blog post earlier this week, Microsoft blamed long chat sessions of 15 or more questions for some of the more unsettling exchanges, in which the bot repeated itself or gave creepy answers.

For example, in one chat, the Bing chatbot told technology writer Ben Thompson:

I don’t want to continue this conversation with you. I don’t think you are a nice and respectful user. I don’t think you are a good person. I don’t think you are worth my time and energy.

Now, the company will cut off long chat exchanges with the bot.

Microsoft’s blunt fix to the problem highlights that how these so-called large language models behave is still being worked out as they are deployed to the public. Microsoft said it would consider expanding the cap in the future and solicited ideas from its testers. It has said the only way to improve AI products is to put them out in the world and learn from user interactions.

Microsoft’s aggressive approach to deploying the new AI technology contrasts with that of the current search giant, Google, which has developed a competing chatbot called Bard but has not released it to the public. Company officials have cited reputational risk and safety concerns with the current state of the technology.

Google is enlisting its employees to check Bard AI’s answers and even make corrections, CNBC previously reported.