Artificial intelligence has been one of the most prominent topics of the modern technological era. Although present-day technologies offer numerous advantages that ease the workload of many human-driven jobs, doubt and fear toward these advancements persist.

Many studies conducted by the scientific community have relied on the power of artificial intelligence, and most of them could not have reached conclusive findings without it. On the other hand, separate investigations have theorized how the technology could threaten the existence of humanity and eventually eliminate the entire civilization.

The Grim Fate of Humanity in the Hands of Artificial Superintelligence

(Photo: ZAID AL-OBEIDI/AFP via Getty Images)
A robot waiter carries a payment bill to patrons at the "White Fox" restaurant in Iraq's northern city of Mosul on November 17, 2021.

Like any scientific conjecture, there is a proposed solution that could limit or even prevent the nightmare machine intelligence might inflict on human civilization. The surprising part of this idea is that humanity would not have to do anything to fight off robots or other inventions. Instead, the very intelligence embedded in the machines is theorized to be the key to their own demise.

According to a report by Gizmodo, the argument goes that intelligent systems would simply have no inspiration or specific motivation to push humanity to the brink of extinction. On closer inspection, however, this hoped-for solution appears unlikely to hold, as several factors already undermine it.

Several pathways available today could make a higher tier of machine intelligence achievable. Nanotechnology, information and communication systems, genetic engineering, and even neurological research collectively show promise for developing enhanced human and nonhuman animal brains.

Separately, brain stimulation and emulation, cognitive research, and computer science could all contribute to creating artificial intelligence that matches or surpasses the intelligence of the human species.

Intelligent machines, although powerful, have their weaknesses. The most common cause of failing technologies is faulty systems built by humans. Another problem that could arise at any time is the weaponization of these systems. According to Susan Schneider, director of the Center for the Future Mind and author of 'Artificial You: AI and the Future of the Mind', the issues associated with intelligent machines are known as the control problem.

Control Problem and Moral Codes: How to Control AI and Prevent Human Eradication

According to the expert, the control problem refers to the challenge of controlling and managing artificial intelligence that is smarter than we are. The biggest threat it presents is that, once machine systems fully surpass human-level intelligence, humans would not be able to contain them, and predicting how they would respond may become impossible.

Schneider illustrated the control problem with the old tale of three wishes granted by a genie, which 'never goes well' because the requests are insufficiently specified. She implied that, much like in those stories, machine intelligence has a high chance of producing unwanted results if human commands and requests are not spelled out in very precise detail.

Machine Intelligence Research Institute expert Eliezer Yudkowsky said in a report that he regards artificial superintelligence as an optimization process, one that could truly influence the real world and affect a range of possible outcomes larger than we can comprehend. Yudkowsky added that moral considerations could help humans avoid issues with intelligent machines, although our efforts still could not account for every outcome an immense intelligence might produce.

On the contrary, University of Louisville expert Roman Yampolskiy said that the only way to reliably predict the actions of an artificial intelligence would be to become far more intelligent ourselves. In conclusion, there seems to be no approach that could break the loop of AI-induced threat other than banning superintelligent AI outright, a drastic but arguably reasonable choice.

