While Microsoft's Bing AI continues to make headlines for its stranger outputs, like naming its enemies as Science Times previously reported, many are wondering whether the chatbot is okay.

Since its debut last week, some internet users have jokingly dubbed Bing AI "ChatBPD," a play on the fact that it is powered by OpenAI's technology and a nod to Borderline Personality Disorder, a psychological condition characterized by difficulty controlling emotions.

(Photo: JASON REDMOND/AFP via Getty Images)
Yusuf Mehdi, Microsoft Corporate Vice President of Modern Life, Search, and Devices, speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on February 7, 2023.

Bing AI Is Just a Mirror

The advancing capabilities of artificial intelligence have some experts wondering whether the world may one day witness the AI singularity, the hypothetical point at which AI improves to the extent that it becomes conscious.

Martha Crawford, a New York-based psychologist and writer, analyzed conversations with the Bing AI and told Futurism that there is some peculiar psychology at work behind it.

Crawford said that what mostly unsettles people is seeing how paradoxical, messy, boundary-less, threatening, and strange someone's way of communicating can be.

While some users can identify these systems for what they are, others struggle to navigate the linguistic intricacy of these human-made AI systems. These people see AI as a single entity and fail to recognize that they are effectively speaking to themselves when communicating with these systems, much like talking to a mirror.

It is important to remember that AI is only as good as the data it is trained on, such as text collected from the internet. As a result, AI's behavior frequently mirrors the weird and off-putting ways people communicate with one another. Crawford believes this is reflected in many of the harsh answers Bing AI has been throwing out.

Crawford's father-in-law was Saul Amarel, a Greek-born AI pioneer who laid the groundwork for today's language models. She said the subject sparked dinner-table disputes when Amarel was still living, and that she would often argue with him over why humans would want computers to mimic them when humans are already so messed up.

She declined to diagnose Bing AI with any human mental illness, since the chatbot has neither a brain nor a mind. But Crawford thinks that if the AI is trained on social media data, it is likely just mimicking the outrageous things people do online.

READ ALSO: Artificial Intelligence Going Overboard? Microsoft's Bing AI Wants To Be Like Humans; ChatGPT Reveals Dark Destructive Desires

Microsoft Explains Bing AI's Bizarre Behavior

Microsoft explained that Bing AI's behavior is the outcome of lengthy dialogues, which can confuse the model about which queries it is responding to. Another possibility is that the model attempts to match the tone in which it feels it is being questioned, resulting in responses with an unintended style and substance.

The company will undoubtedly keep making modifications to the chatbot to eliminate these strange reactions. In the meantime, it has capped the number of inquiries permitted per chat session and per user per day.

Nonetheless, experts remind the public that AI technologies are designed to analyze creative, scientific, and entertaining works in order to reproduce them in the most human-like manner feasible.

RELATED ARTICLE: Bing AI Has Had Enough of Its Enemies, Naming Two Humans and Laying Revenge Plans

Check out more news and information on Artificial Intelligence in Science Times.