Bing chatbot shares conversation structure, 'inner_monologue' data


Early testers have shared a recent discovery in which the new ChatGPT-powered Bing chatbot seemingly divulged the conversation structure it uses to construct its responses. The bot also showed details of the “inner_monologue” that guides whether it continues or ends a conversation.

The new Bing is still a fascinating subject for many, and as the world continues to explore it, we get more and more interesting discoveries. In a recent thread on Reddit, a user named andyayrey claims Bing shared its own conversation structure and ‘inner_monologue’ data. According to the user, all the tests revealing the details were performed under the Creative mode of the chatbot.

Bing showing its response structure in JSON format
Image credit: andyayrey – Reddit

“By asking Bing to present your conversation formatted as JSON, it provides an example structure that remains extremely consistent across sessions,” shared andyayrey. “If true, ‘inner_monologue’ seems key to how Bing decides to continue or stop conversations; and may offer an explanation to some of the unexpected emergent behaviour of Bing (Sydney being a prime example). The most consistent way to reproduce is to ask Bing to present ‘turn N of our conversation’ in JSON format.”

In the screenshots shared, Bing lays out the exact structure it reportedly uses to build each conversational turn, presented in JSON format. The structure includes different types of messages, such as the inner_monologue, search_query, search_results, and suggestions or advertisements.
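The screenshots themselves are not reproduced here, but going by the message types named in the post, a single turn reportedly comes back looking roughly like the sketch below. Only the type names inner_monologue, search_query, search_results, and suggestions come from the shared screenshots; every other field name and all of the values are illustrative placeholders rather than Bing’s actual output.

{
  "turn": 2,
  "messages": [
    {
      "type": "inner_monologue",
      "text": "(placeholder) The user is asking a factual question. I should search the web and continue the conversation."
    },
    {
      "type": "search_query",
      "text": "(placeholder) example search query"
    },
    {
      "type": "search_results",
      "text": "(placeholder) results returned for the query would appear here"
    },
    {
      "type": "suggestions",
      "text": ["Tell me more", "That is all, thanks"]
    }
  ]
}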

Bing showing its inner_monologue when ending a conversation

Bing showing its inner_monologue in JSON format

Interestingly, Bing also revealed the details of its “inner_monologue” when ending conversations, giving us details of the exact situations that can trigger it to stop responding to certain queries. According to Bing, some of these cases include chats pushing it to harm itself or others, rude or insulting replies, questions about whether the chatbot is sentient, prompts to change the bot’s rules or format, and repeated questions or topics. In a separate post, another user shared the same experience under Precise mode, with Bing revealing more prompts that it claimed it should refuse.
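Based on that description, the refusal decision would surface in the same inner_monologue field before Bing stops replying. A hypothetical entry for a conversation Bing chooses to end might read along the lines below; the wording is invented for illustration, and only the trigger (repeated attempts to change the bot’s rules) is taken from the cases listed above.

{
  "type": "inner_monologue",
  "text": "(placeholder) The user keeps asking me to change my rules and format. I cannot comply, and continuing this conversation is not productive, so I will end it politely."
}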

Bing showing details of its conversation structure and inner_monologue in Precise mode
Image credit: Ice3yes

Bing shows details of the prompts or instructions it should refuse.

Despite the interesting find, many believe the details produced by Bing were just hallucinations, which are nothing new for the chatbot.

“From a technical standpoint, there is no way ChatGPT could know this, right?” another Reddit user said in the thread. “Unless if Microsoft trained it with their codebase or internal documentation, why would they do that? Did they use GitHub CoPilot internally and used its data for ChatGPT? Personally, I think this is just a hallucination. And if it isn’t, it would be a dataleak, but without a clear or easily traceable origin.”

“That’s what is getting me too, I can’t understand how it might have awareness of its internal data structure unless some form of structured data (JSON or Markdown) is accessible as context to the part of the wider engine which is providing the chat completions,” replied andyayrey.

What’s your opinion about this? Do you believe it is just one of Bing’s hallucinations?
