Joshua Barbeau spoke to his fiancée for months after she died. Or, to be more precise, he chatted with a chatbot that sounded exactly like her.

In a piece for the San Francisco Chronicle, Barbeau described how Project December, software that uses artificial intelligence to produce hyper-realistic chatbots, recreated the experience of communicating with his late fiancée. All he had to do was plug in some old messages and some background information, and the model was able to imitate his companion with striking accuracy.

It may sound like a miracle, but the AI's creators warn that the same technology could be used to power massive misinformation campaigns.

(Photo: ANDREAS SOLARO/AFP via Getty Images)
A robot from the Artificial Intelligence and Intelligent Systems (AIIS) laboratory of Italy's National Interuniversity Consortium for Computer Science (CINI) on display at the 7th edition of Maker Faire, Europe's largest innovation event, on October 18, 2019 in Rome.

GPT-3, an AI model created by the Elon Musk-backed research group OpenAI, lies at the heart of Project December. By devouring enormous quantities of human-created text, GPT-3 learns to emulate human writing, producing everything from academic papers to letters from former loves. OpenAI has said that Reddit discussions were a particularly useful source of that training text.

It is some of the most advanced — and potentially dangerous — language-based AI programming ever devised. According to OpenAI, GPT-2, the predecessor of GPT-3, was released with the caveat that it could be used in malicious ways.

AI Might Be Used For Ill Intentions, Experts Say

Bad actors might use the technology to automate abusive or fake content on social media, generate misleading news articles, or impersonate others online, the organization added.

The group said that GPT-2 might be used to "unlock new as-yet-unanticipated capabilities for these actors."


OpenAI staggered GPT-2's release and continues to limit access to the more powerful GPT-3 to "give people time" to understand the technology's "societal implications."

Even though GPT-3 is not widely available, misinformation is already rife on social media. According to Business Insider, YouTube's algorithm continues to promote disinformation, and the Center for Countering Digital Hate identified 12 people responsible for sharing 65 percent of COVID-19 conspiracy theory content on social media. They have millions of followers and are known as the "Disinformation Dozen."

Oren Etzioni, CEO of the non-profit Allen Institute for Artificial Intelligence, believes it will become increasingly difficult to determine what is real as AI advances.

"The question 'Is this text or image or video or email authentic?' is going to become increasingly difficult to answer just based on the content alone," he told Insider.

Hot Topic On Twitter

The use of artificial intelligence (AI) to bring people back from the dead has become a big topic on social media, with author Robin Sloan weighing in on Twitter on July 23.

Science fiction writer Madeline Ashby responded to Sloan that these are personal rights, among them the right to use one's likeness, and that such rights can persist after death. She explained that "where it gets complicated is determining who owns the rights to a distinct title containing that likeness, which is both a speech and a copyright issue."
