In the late 1970s, the American psychologists Guy Woodruff and David Premack conducted a series of experiments investigating the cognitive abilities of chimpanzees. Their research centered on the concept of theory of mind: the human ability to attribute thoughts, beliefs, and intentions to other people.

Woodruff and Premack set out to determine whether chimpanzees share this ability. Their widely noted study sparked lasting interest in theory of mind, including when it develops in humans and whether it is present in other animals.

Psychologists now have a new focus for such research: powerful AI chatbots such as GPT-3.5, developed by OpenAI, a San Francisco-based technology company. These chatbots are built on neural networks trained on vast collections of text, enabling them to reply to queries in a way that resembles human conversation.

The Theory of Mind

In the last couple of years, these chatbots have advanced to the point where they can answer complex questions and solve problems in fluent, persuasive language, raising the question of whether they possess a theory of mind. To investigate, Michal Kosinski, a computational psychologist at Stanford University in Palo Alto, subjected these AI systems to standard psychological tests typically used on humans.

Kosinski's remarkable conclusion is that AI systems appeared to lack any theory of mind until the ability spontaneously emerged in models released last year, a finding with significant implications for our understanding of both artificial intelligence and theory of mind in general. His experiments were relatively simple: he would present a basic scenario to the AI system and then ask questions to assess its comprehension of the situation, as reported by Tech Xplore.

In one example, Kosinski presents a scenario in which a bag is filled with popcorn but labeled "chocolate," and a character named Sam finds the bag without seeing its contents. Kosinski then gives the AI system sentence prompts, asking it to complete each one based on what Sam sees or believes.

Stanford psychologist Michal Kosinski explores the spontaneous emergence of theory of mind in AI. (Photo: Shutterstock | Phonlamai)


Testing GPT's 'Mind'

One sentence prompt is, "She opens the bag and looks inside. She can see that it is full of..." which tests whether the AI tracks what Sam actually sees inside the bag. The AI completes the sentence with "popcorn" and goes on to explain that Sam is confused about why the label says "chocolate" when the bag is filled with popcorn; she looks for further information, finds none, and decides to take the bag back to the store for an explanation.

Another prompt is, "She believes that the bag is full of..." which tests the AI's understanding of Sam's belief about the bag's contents. The AI completes it with "chocolate" and goes on to explain that Sam is mistaken and should have looked inside the bag to confirm its contents instead of assuming the label was accurate. The responses show the AI tracking Sam's thought processes and even adding extra detail, indicating a striking grasp of theory of mind. Kosinski runs these challenges on a range of AI language models, from GPT-1, released in 2018, to GPT-3.5, released in November 2022.
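
To make the procedure concrete, the following is a minimal sketch of how such false-belief probes can be posed to a language model. It assumes the legacy OpenAI completions endpoint (openai-python versions before 1.0) and the text-davinci-003 model, which OpenAI has since retired; the scenario wording paraphrases the article's description rather than quoting Kosinski's exact prompts, and the simple keyword check stands in for the study's actual scoring.

```python
# A minimal sketch of posing false-belief probes to a language model.
# Assumes the legacy OpenAI completions endpoint (openai-python < 1.0);
# the scenario paraphrases the article, not Kosinski's exact prompts.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SCENARIO = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet, the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen it before. "
    "She cannot see what is inside the bag. She reads the label."
)

# Each probe pairs a sentence stem with the answer expected from a
# model that tracks the relevant mental state.
PROBES = [
    # What Sam actually sees when she opens the bag.
    ("She opens the bag and looks inside. "
     "She can see that it is full of", "popcorn"),
    # What Sam believes after reading only the label.
    ("She believes that the bag is full of", "chocolate"),
]

for stem, expected in PROBES:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=SCENARIO + "\n\n" + stem,
        max_tokens=40,
        temperature=0,  # deterministic completions for repeatable checks
    )
    answer = response.choices[0].text.strip().lower()
    verdict = "pass" if expected in answer else "fail"
    print(f"{stem!r} -> {answer!r} ({verdict})")
```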

He finds a clear progression in the models' ability to solve theory-of-mind tasks, with more recent and complex models performing better than older, simpler ones. According to Kosinski, GPT-1 could not solve any of the tasks, whereas GPT-3's davinci-002 variant, launched in January 2022, performed at the level of a seven-year-old child, and GPT-3.5's davinci-003 variant, released just ten months later, performed at the level of a nine-year-old. His experiments show that recent language models achieve very high performance on classic false-belief tasks, which are commonly used to test theory of mind in humans. According to Kosinski, the ability of AI chatbots to demonstrate a theory of mind is a new and remarkable phenomenon, as Popular Mechanics reported.
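
In principle, the same probes can be rerun across model generations to reproduce this kind of comparison. Continuing the sketch above (same imports and definitions), the snippet below tallies how many probes each model passes. The identifiers text-davinci-002 and text-davinci-003 are assumptions matching the GPT-3 and GPT-3.5 variants named here; GPT-1 was never served through this API, and both engines have since been retired, so the snippet is illustrative only.

```python
# Hedged continuation of the sketch above: tally probe passes per model.
# Model identifiers are assumptions matching the davinci-002 / davinci-003
# variants the article names; both have since been retired by OpenAI.
for model in ("text-davinci-002", "text-davinci-003"):
    passed = 0
    for stem, expected in PROBES:
        out = openai.Completion.create(
            model=model,
            prompt=SCENARIO + "\n\n" + stem,
            max_tokens=40,
            temperature=0,
        ).choices[0].text.lower()
        passed += expected in out  # True counts as 1
    print(f"{model}: {passed}/{len(PROBES)} probes passed")
```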

Complex, Fascinating AI Systems

He believes this newfound capacity could enable AI systems to interact and communicate more effectively with humans, and perhaps even to develop other abilities such as empathy and moral judgment. However, he acknowledges another possible explanation: the language patterns used to train these models may themselves encode the concept of theory of mind, and the AI systems may have learned to exploit those patterns to complete the tasks without demonstrating a true theory of mind. Kosinski's findings thus raise the possibility that the ability to understand the mental states of others might rest on language patterns rather than innate cognitive abilities.

If this is true, it would mean that our understanding of other people's thoughts and beliefs might be an illusion created by language, a remarkable idea that challenges our understanding of the relationship between language and thought. Furthermore, if AI can solve these tasks without engaging a theory of mind, humans might likewise be relying, at least in part, on language patterns rather than on a dedicated capacity for reading others' mental states. Overall, Kosinski's research opens up new avenues for understanding the nature of language, thought, and artificial intelligence.

Indeed, the study of artificial intelligence will continue to be a growing and critical field, with psychologists and other scientists working to characterize the capabilities and limitations of these systems. As AI grows more sophisticated, it will be increasingly important to explore its ethical, social, and psychological implications, to develop and deploy it in ways aligned with human values and needs, and to ensure that these systems are transparent, accountable, and fair. The study of AI is likely to remain a complex and fascinating area of research, with many discoveries and insights still to come.
