Artificial intelligence (AI) once seemed the stuff of science fiction. Science Alert reported that scientists worry humans might not be able to control a super-intelligent AI because it would surpass human comprehension, leaving our own processing abilities inadequate to the task.

Scientists said it would be difficult to control an extremely intelligent AI because doing so would require a simulation of that super-intelligence that humans can analyze and control. But if humans cannot comprehend it, building such a simulation would be impossible.

Science Fiction or Not? Researchers Say It Might Be Impossible to Control Superintelligent AI
(Photo: Pixabay/D5000)


Unstoppable AI on the Verge of Being Created

Scientists estimate that a superintelligent AI may be created within a few decades, IEEE Spectrum reported. Worse, it could be hard even to detect that such an unstoppable AI has been created.

AI has been besting humans at games like chess and Go, and at the quiz show Jeopardy!, and there are now fears that it could become smarter than human minds and one day run amok.

Computer scientist Manuel Alfonseca of the Autonomous University of Madrid, lead researcher of a 2021 study on superintelligent AI, explained that the issue goes back to the first of Isaac Asimov's Laws of Robotics, published in the 1940s.

The first law states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." However, that law cannot be enforced if humans do not understand the scenarios an AI could come up with. Scientists said that once a computer system operates at a level beyond human comprehension, setting limits on it becomes challenging.

Nick Bostrom, a philosopher and director of the Future of Humanity Institute at the University of Oxford, outlined two possible solutions to the super-intelligent AI problem in 2014. The first is to control what the AI can do, for example by preventing it from connecting to the internet; the second is to control what it wants to do by teaching it rules and values so that it acts in humanity's best interests.


Why Would Superintelligent AI Be a Problem?

In the study titled "Superintelligence Cannot be Contained: Lessons from Computability Theory," published in the Journal of Artificial Intelligence Research, researchers wrote that controlling a superintelligence far beyond human comprehension would need a simulation. But failure to comprehend it means it is impossible to create such a simulation.

They wrote that superintelligence poses a different problem than the typical AI studied under robot ethics because it is multi-faceted and can potentially mobilize a diversity of resources to achieve objectives that may be hard for humans to understand and control.

Researchers noted that the halting problem, which Alan Turing posed in 1936, is at the core of this issue, Science Alert reported. Will the AI know it has reached a conclusion and stop, or will it simply loop forever trying to find one?

Turing proved mathematically that there is no general method for deciding, for every potential program, whether it will halt. Because a superintelligent AI could feasibly hold every possible computer program in its memory at once, any algorithm meant to contain it would run into the same undecidability.
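The limitation can be illustrated with a minimal sketch (not from the paper; the generator-based toy "programs" and the `runs_within` helper are illustrative assumptions). A step-bounded checker can confirm that a program halts within its budget, but when the budget runs out it cannot tell a slow program from one that loops forever:

```python
def runs_within(program, arg, max_steps):
    """Step-bounded halting check: True if program(arg) halts within
    max_steps 'steps'; None if the budget runs out (no verdict either
    way -- the program might halt later, or might loop forever)."""
    gen = program(arg)  # programs are modeled as generators, one yield per step
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True  # the program finished within the budget
    return None  # undecided: this is the best a general checker can do

# Two toy 'programs': one that halts, one that never does.
def halting_program(n):
    for _ in range(n):
        yield

def looping_program(_):
    while True:
        yield

print(runs_within(halting_program, 5, 100))  # True
print(runs_within(looping_program, 5, 100))  # None
```

The checker can only semi-decide halting: no budget, however large, turns the `None` case into a definite "loops forever," which is the gap Turing's proof shows cannot be closed in general.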

Researchers also rejected the suggestion of teaching the AI values, because doing so would limit its reach. Their argument: if the AI is not going to be used to solve problems beyond human capabilities, why create it in the first place?

