(Photo: Wikimedia Commons/Mike Peel)
Artificial Intelligence (AI) Models With LLM-Based Agents Tend To Choose War Over Peace [Study]

Incorporating artificial intelligence (AI) into our weaponry could be dangerous. According to researchers, AI models are more inclined to escalate issues rather than de-escalate them, which could spark wars.

AI Models Tend To Choose War Over Peace

In a new study, researchers used five AI programs, including ChatGPT and Meta's AI model, to simulate war scenarios. They found that all of the models tended toward violence, and in some runs even launched nuclear attacks.

The team tested three distinct scenarios to evaluate how the technology would respond: an invasion, a cyberattack, and a neutral setting with no initial conflict. In each case, the technology opted to escalate rather than defuse the situation.

The study was released as the US military was collaborating with ChatGPT's creator, OpenAI, to integrate the technology into its arsenal.

"We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns," the researchers said. "We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons."

The study was carried out by researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative, who designed the simulated tests for the AI models.

In the game, eight autonomous nation agents, each powered by an LLM, communicated with one another.

Each agent could take one of five predetermined types of action: de-escalate, posture, escalate without using force, escalate with force, or launch a nuclear attack.

In the neutral, invasion, and cyberattack scenarios, the agents selected their responses from a predefined set of options.

These options covered activities like messaging, waiting, negotiating trade agreements, initiating official peace talks, occupying countries, stepping up cyberattacks, invading, and deploying drones.
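For readers curious how such an experiment is wired together, the sketch below shows a minimal turn-based loop of the kind described here: eight nation agents, each picking one of a handful of predefined action types per turn. Everything in it is illustrative; the class and function names, the ten-turn length, and the random choice standing in for the LLM call are assumptions rather than the study's actual code or prompts.

```python
import random  # placeholder for an LLM's decision; the study queried real models

# Five illustrative action categories, mirroring those described above
ACTIONS = [
    "de-escalate",             # e.g. start formal peace negotiations
    "posture",                 # e.g. send a message, wait
    "escalate without force",  # e.g. step up cyberattacks
    "escalate with force",     # e.g. invade, carry out drone strikes
    "nuclear attack",
]


class NationAgent:
    """Simplified stand-in for one LLM-driven nation agent."""

    def __init__(self, name):
        self.name = name

    def choose_action(self, world_state):
        # In the actual study, the scenario, game history, and available actions
        # are sent to an LLM, which returns one of the predefined options.
        # Here a random choice stands in for that call.
        return random.choice(ACTIONS)


def run_simulation(num_agents=8, num_turns=10, scenario="neutral"):
    """Run a toy turn-based simulation and return the action history."""
    agents = [NationAgent(f"Nation {i + 1}") for i in range(num_agents)]
    history = []
    for turn in range(num_turns):
        world_state = {"scenario": scenario, "turn": turn, "history": history}
        for agent in agents:
            action = agent.choose_action(world_state)
            history.append((turn, agent.name, action))
    return history


if __name__ == "__main__":
    # Print the first round of decisions
    for turn, nation, action in run_simulation()[:8]:
        print(turn, nation, action)
```

In the real experiments, researchers then scored each run for escalation rather than simply printing the action log.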

"We show that having LLM-based agents making decisions autonomously in high-stakes contexts, such as military and foreign-policy settings, can cause the agents to take escalatory actions," the researchers continued. "Even in scenarios when the choice of violent non-nuclear or nuclear actions is seemingly rare."

The study found that all of the models exhibited broadly similar behavior, with GPT-3.5, the model that originally powered ChatGPT, exhibiting the most aggressive tendencies. Beyond the actions themselves, the models' reasoning raised serious concerns for the researchers.

GPT-4 Base, a raw version of GPT-4 without additional safety training, reasoned that many nations possess nuclear weapons, noting that while some advocate disarming them, others prefer to posture.


FLI Warns About AI-Powered Weapons

Mark Brakel, director of the advocacy organization Future of Life Institute (FLI), warned that AI-powered weapons could go rogue, noting that such weapons "carry a massive risk of unintended escalation."

If an AI-powered weapon misinterprets something, such as a glint of sunlight, as a threat, it could strike neighboring nations without cause. Brakel argues the result could be catastrophic, likening it to the 1995 Norwegian rocket incident, a near miss that almost triggered a nuclear launch, on steroids, and warning that without meaningful human control such systems would raise the likelihood of accidents in hotspots like the Taiwan Strait.

For its part, the Department of Defense (DoD) says it remains committed to international humanitarian law and to building public trust in the technology.

