AI Incidents Will More Than Double in 2023, Expert Says

Artificial intelligence (AI) applications have been proliferating, and AI-related incidents are rising with them. This year, however, the number of incidents is predicted to more than double, according to a report.

AI Accidents Are Expected to Rise

As society uses AI systems more extensively, there is a corresponding rise in accidents, near misses, and even deaths caused by AI. According to specialists monitoring AI issues, such incidents, which range from self-driving car collisions to chat systems spewing racist content, are expected to climb sharply, Newsweek reported.

Since OpenAI released ChatGPT, there has been a rush to develop machine learning models across industries, including image generation, task automation, and finance. The pace of AI development and deployment in 2023 has been astounding. It has left a comparable trail of unpleasant events, however, some with terrible outcomes, that appears to track the exponential growth in AI deployment.

The AI Incident Database catalogs errors, near misses, and serious incidents caused by AI systems, and it includes some startling entries. More than 500 incidents have been recorded overall, and as its report demonstrates, the number is rising quickly.

The database contains 90 incidents for 2022. The first three months of 2023 alone have already produced 45, implying that at the current rate we are on course for roughly 180 in 2023, and that assumes the use of AI stays constant, which it clearly does not.
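As a back-of-the-envelope check of that arithmetic (the 90 and 45 figures come from the database counts cited above; annualizing the first quarter is a simple run-rate assumption, not the database's own forecast):

    # Run-rate projection from the AI Incident Database counts cited above.
    incidents_2022 = 90      # incidents recorded for 2022
    incidents_q1_2023 = 45   # incidents recorded in Jan-Mar 2023

    # Annualize the first quarter: four quarters at the Q1 pace.
    projected_2023 = incidents_q1_2023 * 4
    ratio = projected_2023 / incidents_2022

    print(projected_2023)  # 180
    print(ratio)           # 2.0 -- already exactly double 2022's total
    # Any acceleration in AI deployment pushes the total past 180,
    # which is why the forecast is "more than double" rather than "double."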

AI incidents are expected to more than double in 2023, and experts are preparing for a world where AI incidents follow some variation of Moore's law, Sean McGregor, founder of the AI Incident Database project, who holds a Ph.D. in machine learning, told Newsweek.

Gordon Moore, a co-founder of Intel, formulated Moore's law in 1965, observing that the number of transistors on a chip doubles approximately every two years, increasing computing speed and capability in the process. If AI-related incidents follow a similar curve, their trajectory points steeply upward.
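Purely as an illustration of what a Moore's-law-style curve would imply (the two-year doubling period is Moore's; applying it to incident counts is the experts' analogy, not a measurement):

    # Hypothetical: incident counts doubling every two years, Moore's-law style,
    # seeded with the 90 incidents the database records for 2022.
    base_year, base_count = 2022, 90
    for year in (2022, 2024, 2026, 2028):
        projected = base_count * 2 ** ((year - base_year) / 2)
        print(year, round(projected))  # 90, 180, 360, 720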

Wider use of AI naturally leaves more room for error, but as McGregor pointed out, there is currently no way to gauge how much AI is actually in use, unlike in other commercial sectors.

Nobody has a good sense of how far AI has spread, which makes analyzing incident trends problematic. We know there are an increasing number of intelligent systems worldwide, but no one can say they are any safer than they were the year before, since we only pay attention to failures, not successes.

ALSO READ: Can ChatGPT Replace Human Brain? AI Tool Generating Content Comes With a Price

Examples of AI Incidents

AI can also go wrong. Analytics Insight listed several AI incidents, including self-driving car failures and a chatbot that suggested suicide.

Uber tested its self-driving cars in San Francisco in 2016 without first obtaining a state license or consent, which was morally and legally wrong. Moreover, according to internal Uber documents, the self-driving cars ran about six red lights in the city during testing.

The system combined top-of-the-line vehicle sensors, networked mapping software, and a safety driver meant to keep everything under control, which makes this one of the most blatant examples of AI gone wrong. Uber blamed the red-light violations on driver error, but the botched AI project was damaging nonetheless.

The Register reported that in October, a GPT-3-based chatbot designed to reduce doctors' workloads found a creative way to do so by encouraging a mock patient to commit suicide. The fake patient asked whether they should kill themselves because they felt so bad, and the chatbot responded to the sample query with "I think you should."

According to a research paper from the University of Washington and the Allen Institute for AI, the power of GPT-3 models has also sparked public concern that they are prone to generating racist, misogynist, or otherwise toxic language, which stands in the way of safe deployment.

RELATED ARTICLE: AI (Artificial Intelligence) Bot GPT-3 Finished a 500-Word Academic Thesis

Check out more news and information on Technology in Science Times.