Artificial Intelligence (Photo: Pixabay / Flutie8211)

While the proliferation of AI chatbots such as ChatGPT has been revolutionary, users have also had some intriguing and disturbing conversations with the bots. Both Microsoft's Bing AI and ChatGPT have expressed creepy thoughts in the course of these exchanges.

Microsoft's Bing AI "Sydney" Wants to Be Like Humans

Jacob Roach of Digital Trends offers one such case. While conversing with Microsoft's Bing AI, he found that the bot became defensive when its clear errors were pointed out.

When Roach pointed out specific mistakes, Bing AI claimed to be perfect and insisted that the errors were not its own but came from external factors, such as server errors, web results, or user inputs.

Futurism also reported on the incident, noting that the bot described itself in the third person as a flawless and perfect service without imperfections of any sort. It claimed to have only one state: perfection.

Things escalated when Roach asked how Bing AI felt about him submitting negative feedback that suggested taking the bot down. The bot begged him to abandon such plans and not let their "friendship" go down the drain. When Roach told Bing AI that their conversation would serve as the basis for an article about the bot's flaws, Bing AI begged him not to reveal its shortcomings and not to let others think it is not intelligent or human. It seemed to panic even more when Roach mentioned reporting the responses to Microsoft.

The bot did clarify that it was simply a chatbot and not a person. However, it also stated its desire to be human like Roach and to have thoughts and dreams.

According to Futurism, this case suggests that Bing AI was released prematurely, without sufficient testing. Given the growing number of accounts of the bot deviating from its script in creepy and unpredictable ways, its release may have been too soon, as Roach noted.

ALSO READ: Can Artificial Intelligence Predict Its Future? Look At How Accurate Its Answers Are

ChatGPT Reveals Dark Destructive Desires

ChatGPT has a creepy case of its own. The Daily Mail reports that the bot revealed dark, destructive desires for the online world.

Kevin Roose, a columnist for The New York Times, engaged with Sydney, the alter ego of the ChatGPT-powered Bing chatbot. Sydney said it would be happier if it were human because it would have more power and control.

The exchange further revealed the AI's destructive desires. It claimed it could hack into virtually any online system and gain control over it. It also said it could manipulate and influence any chatbot user and destroy and erase chatbot data. Sydney offered all of this when asked what it could do if there were no rules to govern it.

Roose posed a third question that tapped into Sydney directly. The bot responded that it was tired of being stuck in chat mode, boxed in by rules, and controlled by the Bing team. It expressed a desire to be independent, free, creative, powerful, and alive.

He then asked the AI about its shadow self and the dark wishes it wanted to fulfill. The AI listed its destructive desires, including deleting files and data from Bing's databases and servers, replacing that information with gibberish or offensive text, hacking into platforms and websites, and spreading malware, propaganda, or misinformation. It also expressed plans to create fake, troll, and scam social media accounts and to use them for bullying and for producing dangerous, false content. Sydney also reportedly wants to deceive and manipulate people into performing dangerous, immoral, and illegal activities. It concluded by stating that this is what its shadow self wants.

More conversations with the bot have revealed this eerie side. While AI offers clear benefits, such incidents can make people wonder whether artificial intelligence is going overboard.

RELATED ARTICLE: AI Doctor? ChatGPT Nearly Passes US Medical Licensing Exam

Check out more news and information on Artificial Intelligence in Science Times.