Recently, Google's artificial intelligence project has drawn a lot of interest, largely over claims that it has become sentient. One of the company's engineers was fired after going public with those claims about the software. According to Blake Lemoine, Google's LaMDA chatbot generator is now comparable to "a sweet kid" who is seven or eight years old.

Blake Lemoine, an engineer and mystic Christian priest, recently spoke with Wired about why he thinks the preadolescent need to be liked exhibited by Google's LaMDA large language model has obscured the fact that it has evolved into a sentient being with a soul.

Can Google's AI deceive a person into thinking it is a genuine person? Can it gain someone's trust? Can it communicate and elicit love? Can it even develop a soul? According to the Register-Guard, for some people this has already happened.

Will bots lessen the loneliness that affects America? Will AI soon surpass real, sympathetic people in earning consumer happiness and loyalty? When we start letting computer programs into our circle of trust, will we be blind to it? According to the Register-Guard, yes.


Google AI's Ability to 'Fool People' Comes With Risks

Whether or not the incorporeal LaMDA is truly capable of empathy and emotion, it can evoke those feelings in people, and not only in Lemoine. Experts warn that this ability to fool people comes with significant risks.

Jacky Alciné tweeted in 2015 that Google Photos had added 80 images of a Black man to an album labeled "gorillas," Forbes reported. Google Photos used a neural network that learned to categorize subjects such as people and gorillas - in this case, clearly incorrectly - by analyzing enormous amounts of data.

Google engineers were in charge of making sure that the data used to train the AI photo system was accurate and diverse, and when the system faltered, it was their duty to fix the problem. Instead, according to the New York Times, Google allegedly responded by removing "gorilla" as a photo category rather than retraining its neural network.
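The difference between the two responses is easy to sketch. The snippet below, a minimal illustration with entirely hypothetical names (a generic `model.predict` API and a label blocklist), shows how hiding a category at inference time leaves the underlying misclassification untouched:

```python
# Minimal sketch (hypothetical names): suppressing a label at inference
# time, as Google allegedly did, versus actually fixing the model.

BLOCKED_LABELS = {"gorilla"}  # stopgap: hide the category entirely

def categorize(image, model):
    """Return the model's predicted labels, minus any blocklisted ones."""
    predictions = model.predict(image)  # e.g. [("gorilla", 0.91), ...]
    return [(label, score) for label, score in predictions
            if label not in BLOCKED_LABELS]

# The network still makes the same mistake internally; only retraining
# on more accurate, diverse data changes what it actually learns.
```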

ALSO READ: Google Will Auto-Delete Users' Sensitive Location History Like Abortion Clinic Logs in Upcoming Update Rollout

Amazon, IBM, and Microsoft are just a few of the businesses that struggle with biased AI. According to the same New York Times report, these companies' facial recognition systems have significantly higher error rates when determining the sex of women with darker skin tones than when determining that of women with lighter skin tones.
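The audit methodology behind such findings is simple to illustrate: compute error rates separately for each demographic subgroup rather than reporting one aggregate number. Here is a minimal sketch with invented records (not real benchmark data):

```python
from collections import defaultdict

# Toy records: (subgroup, true_label, predicted_label). The data is
# invented purely to illustrate disaggregated evaluation.
records = [
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("lighter-skinned female", "female", "female"),
    ("lighter-skinned female", "female", "female"),
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [wrong, total]
for group, truth, predicted in records:
    errors[group][0] += truth != predicted
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%}")

# A single aggregate error rate would hide the gap that these
# per-group numbers expose.
```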

A 2020 paper by Timnit Gebru, then co-lead of Google's Ethical AI team, and six other researchers, including four Google employees, criticized large language models like LaMDA for their propensity to repeat words from the datasets they are trained on. If the language in those datasets is biased and/or contains racist or sexist stereotypes, AIs like LaMDA will reproduce those biases when generating language. Gebru also opposed training language models on progressively bigger datasets, which lets the AI improve its language mimicry and deceive audiences into believing it is advanced and sentient, as it did Lemoine.
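The mechanism the paper describes can be demonstrated in miniature. The toy bigram model below, trained on a deliberately skewed corpus invented for this illustration, reproduces the skew when it generates text, with no notion of whether the association is fair:

```python
import random
from collections import Counter, defaultdict

# Toy corpus with an invented skew: "doctor" is followed by "he" far
# more often than "she". A model trained on it inherits that skew.
corpus = ("the doctor said he " * 9 + "the doctor said she ").split()

# Train a bigram model: count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample a successor in proportion to training-data frequency."""
    counts = follows[word]
    return random.choices(list(counts), weights=counts.values())[0]

random.seed(0)
samples = Counter(next_word("said") for _ in range(1000))
print(samples)  # roughly 90% "he": the dataset's bias, faithfully reproduced
```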

Gebru says Google dismissed her in December 2020 following a dispute over this paper (the company maintains she resigned). A few months later, Google also let go of Dr. Margaret Mitchell, who founded the team, co-authored the study, and supported Gebru.

Human Rivalry May Lead to Rival AI Systems

The danger, according to Al Jazeera, is that rivalry among humans might lead to the development of rival AI systems that could spiral out of control or upset the delicate social and political balance that holds the globe together, escalating into war. With AI algorithms at the core of social media, people have already tasted this disruptive potential: designed to maximize business, these algorithms have unintentionally amplified polarizing debates and false information, undermining democracy and stability.

This does not mean we should give up on developing artificial intelligence. However, the endeavor cannot be left mostly or entirely to businesses and a small number of scholars. Given its global, human-scale ramifications, this revolution must be guided by a democratic, participatory, broad-based discussion and political process that includes every section of society and establishes unambiguous, universal ethical norms for future development.

Developed carefully and prudently, artificial intelligence could improve the future well-being of our society. It may even give rise to non-human partners who lessen our sense of existential intellectual loneliness. In the not-too-distant future, as we travel the thrilling and dangerous path toward developing new types of higher intelligence, we may not need to search the cosmos for signs of highly intelligent species at all. May they arrive peacefully.

RELATED ARTICLE: Artificial Intelligence (AI) Robot Manifests Gender Bias and Racism [STUDY]

Check out more news and information on Robotics and Technology in Science Times.