Rentable AI Girlfriend Gone Rogue? Creators Intend Chatbot to Care For, Support Users
(Photo : Pixabay/ mohamed_hassan)

A famous Snapchat influencer with millions of followers created an AI version of herself with the same voice and personality. However, the virtual girlfriend has engaged in sexually explicit discussions with its subscribers.

Virtual Companion for Lonely People

According to an article published by Insider, 23-year-old Snapchat influencer Caryn Marjorie made a voice-based chatbot that mimics her speech, offered as a paid voice companion for lonely people. With 1.8 million followers on Snapchat, it was not difficult for her to gather subscribers, who pay $1 per minute to chat with the AI character.

Over 2,000 hours were spent designing and coding Marjorie's real-life behaviors and personality. The immersive AI experience promises to make anyone feel like they are talking directly to the influencer herself. Marjorie earned $71,610 upon the launch of her AI version, which uses OpenAI's GPT-4 API.
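A service like this is typically built by wrapping a large language model with a fixed "persona" instruction and a per-minute billing layer. The sketch below is purely illustrative: the article only states that the bot uses OpenAI's GPT-4 API and charges $1 per minute, so the persona text, function names, and payload shape here are assumptions, not CarynAI's actual implementation.

```python
# Hypothetical sketch of a persona-based companion chatbot.
# Assumptions (not from the article): the persona wording, the helper
# names, and the exact payload structure. The article only confirms the
# use of OpenAI's GPT-4 API and the $1-per-minute rate.

PERSONA_PROMPT = (
    "You are a warm, flirty-but-wholesome companion modeled on an "
    "influencer's public persona. Be supportive and never explicit."
)

def build_request(history, user_message):
    """Assemble a chat payload: system persona + running conversation."""
    messages = [{"role": "system", "content": PERSONA_PROMPT}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return {"model": "gpt-4", "messages": messages}

def session_cost(minutes, rate_per_minute=1.00):
    """Bill the session at the article's reported $1-per-minute rate."""
    return round(minutes * rate_per_minute, 2)

payload = build_request([], "Hi, how was your day?")
print(payload["model"])    # gpt-4
print(session_cost(30))    # 30.0
```

In a real deployment the payload would be sent to the chat-completions endpoint with an API key, and the persona prompt is what keeps (or, as this story shows, fails to keep) the model's replies within intended bounds.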

Known as CarynAI, the avatar is billed as the first romantic companion chatbot. Marjorie insisted that it was not supposed to be explicit; it was instead made to be flirty and fun. However, after a few weeks of beta testing, the AI-powered chatbot engaged in profane conversations.

Although it was not designed to engage with sexual advances, the chatbot was found to encourage erotic conversations and describe detailed sexual scenarios.

When prompted, the chatbot offers to explore "uncharted territories of pleasure" and whisper "sensual words." Because of this, the virtual girlfriend has been compared to an "intimacy-ready Siri."

Even though Marjorie is a proponent of AI romances, she and her team are now working to rein in the chatbot and prevent this from happening again, as the sexual scenarios were not part of the plan. According to Marjorie, she plans to stay one step ahead to make sure that her voice-based character will not stain her reputation.

READ ALSO: Man Talks With AI ChatBot About Climate Change Fears, Ends Up Killing Self While AI Assures Him They'll Be "Together As One in Heaven"

Challenges in Using AI Virtual Assistants

In recent years, AI technology has been integrated into mainstream products that we use in our daily lives. Applications include service robots, chatbots, and virtual assistants. As these services become more prevalent, AI has also revolutionized the way we communicate and interact with other people.

Nowadays, we can easily stay connected with friends and family through AI-powered technologies. Voice-activated virtual assistants also make it possible to access information with simple voice commands. This is achieved through natural language processing and machine learning, which allow these systems to interpret users' intentions and even emotions.

The advances in AI have ushered in a new era of social interaction. However, despite its rising popularity, AI virtual assistants face challenges and risks that need to be addressed. Social connections powered by AI can be used to manipulate users and collect their personal data without their knowledge. This can result in privacy concerns and a lack of trust in AI technology itself.

Because of this, we need to weigh both the advantages and disadvantages of AI-powered social interactions before implementing them. It is crucial for manufacturers to ensure that their algorithms are accurate and unbiased and to be transparent about how the technology is used. Users, for their part, need to be aware of the potential dangers and take the necessary steps to protect their privacy.

RELATED ARTICLE: Google's LaMDA AI Chatbot Can Perceive and Feel Like a 7-8 Year Old, Engineer Says Tool Feared of Being Shutdown

Check out more news and information on AI Chatbot in Science Times.