In the last decade, the world has encountered a wave of innovations powered by artificial intelligence (AI). Technologies such as voice recognition and facial identification systems are already woven into our daily lives. Yet despite their impressive capabilities, many experts still believe these programs and machines do not solve problems or understand the world the way people do.

Artificial Intelligence vs. Human Brain

How Far Is AI From Thinking and Learning the Way Humans Do?
(Photo: Pavel Danilyuk from Pexels)

Scientists suggest that, by now, it should be clear that AI systems do not match how humans actually think and learn, and that if they ever will, there is still a long way to go.

GPT-3 is one example of these AI programs. Given a short written prompt, the system can produce text that is often difficult to distinguish from writing by an actual human. A separate model called PaLM can even generate explanations of complex jokes it has never seen before.

In recent years, a model called Gato was developed to perform hundreds of different tasks with a single system, including answering questions, captioning images, controlling a robot arm to stack blocks, and playing classic Atari games. This year, DALL-E 2 was launched, demonstrating the ability to generate artwork directly from plain text descriptions, edit those images, and combine them with other elements to form new subjects or scenes.

As these systems multiply, the scientific community remains divided over whether AI will ever think the way we do.

AI Might Not Match How We Think, For Now

Many AI systems are built on 'artificial neural networks,' which are loosely inspired by the way neurons in the human brain work.

A key difference between these neural networks and the biological brain is that the networks typically learn through a supervised learning approach: they are shown many examples of inputs paired with the desired outcomes, and their parameters are gradually adjusted until they produce answers as close as possible to those outcomes.
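To make that idea concrete, here is a minimal, hypothetical sketch of supervised learning in Python; the task (classifying points as above or below a line), the data, and every name in it are invented for illustration rather than drawn from any system mentioned in this article.

```python
import random

def predict(weights, bias, point):
    # Linear score turned into a yes/no answer.
    score = weights[0] * point[0] + weights[1] * point[1] + bias
    return 1 if score > 0 else 0

# The "numerous inputs and outcomes": random points labeled 1 if they sit
# above the line y = x, 0 otherwise. The labels are the supervision.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
examples = [(p, 1 if p[1] > p[0] else 0) for p in points]

weights, bias = [0.0, 0.0], 0.0
learning_rate = 0.1

for epoch in range(20):
    for point, label in examples:
        error = label - predict(weights, bias, point)  # compare guess to the known outcome
        # Nudge the parameters in the direction that reduces the error.
        weights[0] += learning_rate * error * point[0]
        weights[1] += learning_rate * error * point[1]
        bias += learning_rate * error

print(predict(weights, bias, (0.2, 0.8)))  # a point above y = x; usually prints 1
```

The point is not the particular algorithm but the shape of the process: without the labeled outcomes supplied from outside, the program has nothing to learn from.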

Our cognition, on the other hand, does not need to be shown a range of possible outcomes and told which one is 'right.' We learn from our experiences and from whatever information is available to us at the moment, and we can do so whenever we choose to.

Toddlers, for example, need no formal instruction to learn to speak; they pick up language from the environment around them and imitate what they hear. In contrast, GPT-3 had to be trained on around 400 billion words of text before it could perform its function, reports The Conversation.

In a paper published in Nature Reviews Neuroscience, titled 'Backpropagation and the brain,' scientists explained that the human brain does not appear to learn by fine-tuning mathematical functions to steadily increase the accuracy of its outputs, the way backpropagation does in artificial networks.

Backpropagation might sound overwhelming, but our minds hold an advantage in mental representation: we build concepts, associations, and properties around a particular subject. That conceptual knowledge is richer than what AI's backpropagation techniques, which mostly rely on external training signals, can achieve, and the gap is likely to remain as long as our understanding of how the brain works is incomplete.
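The mechanism being contrasted here can be shown with a rough numerical sketch, again entirely hypothetical and not taken from the paper above: a toy two-layer network learns XOR by propagating its output error backwards and adjusting every weight along the way.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # known correct outputs (XOR)

# Weights and biases of a tiny 2-8-1 network, randomly initialised.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

lr = 1.0
for step in range(10000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: start from the error (the external teaching signal) and
    # use the chain rule to work out how much each weight contributed to it.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Nudge every parameter in the direction that shrinks the error.
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0, keepdims=True)

final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(final.ravel(), 2))  # typically ends up close to [0, 1, 1, 0]
```

Everything the network 'knows' at the end is encoded in weight adjustments driven by an outside error signal, which is exactly the contrast drawn above with human conceptual knowledge.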


