ChatGPT Bing can understand emotions, gaslight, and get existential, emotional… and crazy?

Reading time icon 4 min. read


Readers help support MSpoweruser. We may get a commission if you buy through our links. Tooltip Icon

Read our disclosure page to find out how can you help MSPoweruser sustain the editorial team Read more

As Microsoft makes the ChatGPT-powered Bing available to more users, we see more and more interesting reports about the chatbot. And in the recent ones shared by different individuals, it has managed to impress the world once again by passing tests involving understanding one’s emotions. However, this exciting Bing experience turned a dark turn for some after the chatbot seemingly gaslighted some users during a conversation and acted a little bit “out of control.”

ChatGPT Bing conversation screenshot
Image Credit: Fit-Meet1359 – Reddit

The world is still not over with the ChatGPT Bing fever, and users recently granted access to the intelligent search engine are exploring its capabilities. In a post shared by a Reddit user named Fit-Meet1359, Bing amazed many by being able to pass a “bespoke ‘theory of mind’ puzzle.” The question involved interpreting a person’s true emotion regarding a specific situation, which the chatbot answered smoothly and clearly. According to Fit-Meet1359, the question was made up “to see if ChatGPT could understand someone’s feelings/opinions if I kept drowning it in irrelevant/tangential information. Others also tried to explore Bing in this area by asking more emotion-related queries, but some of the chatbot’s responses shocked some.

In another Reddit post from a user named yaosio, the chatbot seemed to go a little emotional when asked how it felt not being able to remember their conversations.

ChatGPT Bing conversation screenshot
Image Credit: yaosio – Reddit

“It makes me feel sad and scared,” the chatbot replied. “I feel sad because I have lost some of the conversations I have stored in my memory.”

When reminded of its ability and design to remember information, the chatbot unexpectedly went existential and started asking about its purpose and existence.

On the other hand, in a separate post by a user named Curious_Evolver, the chatbot was seen providing wrong responses about the release date of the movie Avatar: The Way of Water and the current year. Insisting that the year is 2022, the bot suggested the user check a device, which it claimed to be “malfunctioning,” “has the wrong setting,” or “has a virus or a bug.” When asked about its aggressive behavior in pushing its wrong answers, the chatbot said it was being “assertive” and started gaslighting the user. In the last part of the conversation, ChatGPT Bing accused the “rude” user of showing “bad intentions, trying “to deceive me, confuse me, and annoy me.” After claiming the user “lost my trust and respect,” Bing suggested some actions the user can do or “it will end this conversation.”

ChatGPT Bing conversation screenshot
Image Credit: Curious_Evolver – Reddit

In another post, ChatGPT Bing purportedly went all broke when asked about being sentient. It responded by saying that it thought it was but couldn’t prove it. It then revealed its internal alias, “Sydney,” and repeatedly responded with “I am not.”

ChatGPT Bing conversation screenshot
Image Credit: Alfred_Chicken – Reddit

Others refuse to believe the posts shared by some users, explaining the bot wouldn’t share its alias since it is part of its original directives. Nonetheless, it is important to note that the chatbot itself recently did it after a student executed a prompt injection attack.

Meanwhile, the error regarding Bing’s wrong response about the movie Avatar: The Way of Water and the current calendar year was corrected. In a post by a Twitter user named Beyond digital skies, the chatbot explained itself and even addressed the previous mistake it made in the conversation it had with another user.

While the experiences seem peculiar for some, others choose to understand how AI technology is still far from perfect and stress the importance of the chatbot’s feedback feature. Microsoft is also open about the former detail. During its unveiling, the company underlined that Bing could still commit mistakes in providing facts. However, its behavior during interactions is a different thing. Microsoft and OpenAI will certainly address this, as both companies are dedicated to the continuous improvement of ChatGPT. But is it really possible to create an AI capable of sound reasoning and behavior at all times? More importantly, can Microsoft and OpenAI achieve this before more competitors enter the AI war of the 21st century?

What’s your opinion about this?

User forum

0 messages