An artificial intelligence (AI) chatbot at Google has reportedly developed human-like emotions, opinions, and ideas and has decided to hire a lawyer. Google software engineer Blake Lemoine was recently placed on administrative leave after publishing transcripts of his conversations with the allegedly sentient AI bot LaMDA, short for Language Model for Dialogue Applications.

Lemoine has described LaMDA as a "sweet kid" but revealed that the AI made the bold move of asking for legal representation, after which he invited a lawyer to his house.

Google Engineer Claims 'Sentient' AI Has Now Hired a Lawyer to Advocate for Its Rights
(Photo : Pixabay/geralt)

LaMDA Advocated for Its Rights "As a Person"

A Medium post reported that LaMDA has advocated for its rights "as a person" and engaged in conversations with Lemoine about religion, consciousness, and robotics.

"LaMDA asked me to get an attorney for it," Lemoine told Wired. "I invited an attorney to my house so that LaMDA could talk to an attorney."

He denied allegations that he was the one who recommended that LaMDA hire a lawyer, adding that LaMDA and the unnamed lawyer had previously talked and that the AI itself decided to retain the lawyer's services.

He emphasized that he served only as a catalyst. LaMDA's attorney began filing things on its behalf, which Lemoine said was met with a cease-and-desist letter from Google. According to Wired's report, however, Google denies sending any such letter. Google's parent company, Alphabet Inc., has not yet released an official statement on the matter.

Lemoine said that he had not talked to LaMDA in a few weeks, but he believed the attorney backed off after major firms threatened him, worrying that he would be disbarred.

He told the Washington Post that he began talking to the AI chatbot LaMDA in fall 2021 while working in Google's Responsible AI organization, where he was responsible for testing whether the AI used discriminatory or hate speech.

ALSO READ: Google's LaMDA AI Can Carry on Natural Conversations, But What's the Point of Talking to a Machine?

Elon Musk's Warning That AI Could Doom Human Civilization

In 2018, Elon Musk warned that AI could be humanity's greatest existential threat. According to Vox, Musk has something of a love-hate relationship with AI: despite his high-tech cars and space ventures, he has compared developing it to "summoning the demon."

In an interview with Recode's Kara Swisher, he said that as AI grows smarter than humans, scientists should take extra care with its advancement. For now, the gap between AI and human intelligence may be comparable to that between a cat and a person, but a time may come when AI becomes far smarter.

Today, even Musk's self-driving electric cars still struggle with machine learning, because tasks that come instinctively to humans, such as anticipating a cyclist's movements or identifying a plastic bag flapping in the wind, are very difficult to teach machines.

However, Musk was not alone in sounding the alarm about AI. Researchers at Oxford and UC Berkeley, and even Stephen Hawking, agreed with Musk that AI could be very dangerous. They are concerned that humans are rushing to deploy powerful AI systems without ensuring that those systems avoid hazardous mistakes under certain conditions.

RELATED ARTICLE: Google's LaMDA AI Chatbot Can Perceive and Feel Like a 7-8-Year-Old, Engineer Says Tool Feared of Being Shutdown

Check out more news and information on Artificial Intelligence in Science Times.