Artificial intelligence is the bread and butter of Google's business. So this year, the company showed off a next-generation AI that can carry on natural conversations about any topic, even from the point of view of an object. The new technology is called Language Model for Dialogue Applications (LaMDA).

Google said LaMDA could one day supercharge its conversational AI assistants, allowing people to have natural, open-ended conversations with Google's AI about any topic.

"It's really impressive to see how LaMDA can carry on a conversation about any topic," said Google CEO Sundar Pichai according to The Verge. "It's amazing how sensible and interesting the conversation is. But it's still early research, so it doesn't get everything right."

LaMDA is still in the research phase, but this "breakthrough conversation technology" is expected to have huge implications for existing Google products like its search engine and Assistant.

LaMDA AI-Language Demonstration

In a blog post, Google wrote that LaMDA's conversational skills have been years in the making. It builds on earlier Google research from 2020, which showed that Transformer-based language models trained on dialogue could learn to talk about virtually any topic.

Since then, Google has trained LaMDA to significantly improve the sensibleness and specificity of its responses.
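LaMDA itself is not publicly available, but open models give a feel for what a "Transformer-based language model trained on dialogue" looks like in practice. The following is a minimal Python sketch, assuming the Hugging Face transformers library and using Microsoft's open DialoGPT model purely as a stand-in for illustration:

```python
# Minimal sketch of chatting with an open Transformer-based dialogue model.
# LaMDA is not public; DialoGPT stands in here purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode one user turn, terminated by the model's end-of-sequence token.
prompt = "Tell me something interesting about Pluto."
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Generate a reply; sampling keeps responses from being too repetitive.
output_ids = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# The reply is everything the model generated after the prompt tokens.
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

Models like this predict a conversation one token at a time, which is why quality hinges on the dialogue data they are trained on; Google's stated focus for LaMDA was making those predicted responses more sensible and specific.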

To demonstrate its conversational ability, Google showed LaMDA off during its I/O conference with two short conversations conducted from the point of view of the dwarf planet Pluto and of a paper airplane.

Google CEO Sundar Pichai noted that the model was able to refer to real facts and events throughout its conversation, such as the New Horizons probe's visit to Pluto in 2015.

ALSO READ: AI-Powered Albert Einstein Answers Questions From Fans


Google's MUM AI and Improvements in Google Maps

Another highlight of the I/O conference, aside from LaMDA, is Google's Multitask Unified Model (MUM), an AI that improves search by better understanding human questions.

According to Vox, MUM's job is to simplify online search by understanding implicit comparisons in a query and providing the most appropriate answer.

It can process visual information in addition to text, and it can also find answers to questions posed in other languages.

Google announced that it is looking into the ways bias can be built into MUM and will be trying to reduce the model's carbon footprint.

Aside from LaMDA and MUM, Google also presented new ways AI is being used to improve detail and routing in Google Maps, with plans for over 100 AI-driven improvements to the app's features.

Why Talking to an AI Is Important

The demos were quite impressive for a technology still in the testing stage: LaMDA was able to carry on conversations while pretending to be two different objects.

Pushing the boundaries of language understanding, Google has spearheaded the use of Transformers, a machine learning architecture that is exceptional at handling language and that underpins systems like OpenAI's GPT-3.
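As a rough illustration of the core idea behind Transformers, the scaled dot-product self-attention operation lets every word in a sentence weigh its relationship to every other word. The toy NumPy sketch below is not LaMDA's or GPT-3's actual code; the dimensions are arbitrary and chosen only for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: every token attends to every other token."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings (arbitrary sizes).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q, K, V from the same tokens
print(out.shape)  # (4, 8): one context-aware vector per token
```

Stacking many layers of this attention, with learned projections for Q, K, and V, is what lets Transformer models like GPT-3 and LaMDA capture context across an entire conversation.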

With much of Google's AI work revolving around retrieving information, whether it is translating languages or understanding what users are searching for, LaMDA serves a crucial purpose in improving its products.

According to The Verge, LaMDA could turn searching on a phone into a natural, flowing conversation.

RELATED ARTICLE: Microsoft Acquires Exclusive License for OpenAI's GPT-3 Language Model

Check out more news and information on Artificial Intelligence on Science Times.