OpenAI is taking deep learning to the next level with the release of GPT-4. The new large multimodal model can accept both text and image inputs.

What Is GPT-4?

GPT-4 is the latest milestone in OpenAI's effort to scale up deep learning. The most significant change is that GPT-4 is "multimodal," meaning it can work with both text and images. However, it can only deliver text outputs.

Although it cannot output images the way image-generation models do, it can process and respond to visual inputs. Annette Vee, an associate professor of English at the University of Pittsburgh who studies the intersection of computation and writing, watched a demonstration in which the new model was asked to identify what was funny about a humorous image, Scientific American reported.

The new model was able to explain it. Vee said doing so required a certain understanding of the image's context and composition, then connecting that with a social understanding of language. She added that ChatGPT couldn't do the same.
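To give a rough sense of how a request like the one Vee described might look in code, here is a minimal sketch using OpenAI's Python client. The model name, image URL, and the availability of image input through the public API are assumptions made for illustration only; OpenAI had not generally released image input at the time of the demonstration.

    # Illustrative sketch only: sending an image plus a text question to an
    # image-capable GPT-4 model through OpenAI's Python client. The model name
    # and image URL are placeholders, and public API access to image input is
    # assumed here rather than confirmed by OpenAI's announcement.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed, illustrative model name
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is funny about this image?"},
                    {
                        "type": "image_url",
                        "image_url": {"url": "https://example.com/funny-image.jpg"},
                    },
                ],
            }
        ],
    )

    # GPT-4 accepts images as input but replies with text only.
    print(response.choices[0].message.content)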

Although it is less capable than humans in many real-world scenarios, it exhibits human-level performance on various professional and academic benchmarks.

According to OpenAI, it passes a simulated bar exam with a score around the top 10% of test takers. In contrast, GPT-3.5's score was around the bottom 10%.

The difference between the latest version and its predecessor can seem subtle, but it becomes more obvious once a task is sufficiently complex. GPT-4 is more reliable and creative than GPT-3.5, can handle more nuanced instructions, and can solve complex problems with greater accuracy.

GPT-4 went through six months of safety training and internal tests. It was reportedly 82% less likely to respond to requests for disallowed content and 40% more likely to deliver factual responses compared to GPT-3.5.

OpenAI CEO Sam Altman described the new model in a tweet as the "most capable and aligned model yet."

Although it's an improved version, some problems from the previous version remain unresolved. According to The Verge, among the problems retained are the tendency to make up information (or hallucinate) and the capacity to generate violent and harmful text.


ALSO READ: Anthropic's AI System Claude Tweaks OpenAI's ChatGPT to Align It With Human Intentions Thru Its 'Constitutional AI'

GPT Models Have Raised Concerns

OpenAI has previously delayed the release of GPT models for fear that they would be used for malicious purposes like generating spam and misinformation. In 2022, the company launched ChatGPT, a conversational chatbot based on GPT-3.5.

According to a previous report from Science Times, ChatGPT makes writing content easier, more convenient, and faster for freelancers, content creators, and bloggers. It can also create SEO-worthy content.

It can also answer complex questions conversationally. However, it still has limitations, as it was programmed to avoid toxic or harmful responses.

Also, there's a downside that many might not notice at first. This type of technology can erode one's thinking, critical faculties, and intellectual curiosity. It could also rob people of their ability to research, read, and create original writing.

Despite that, the arrival of ChatGPT triggered a frenzy and spurred other tech giants to follow suit. Microsoft developed its own AI chatbot as part of the Bing search engine, and Google is scrambling to catch up.

However, per The Verge, many users found creative ways to break Bing's guardrails, and the bot offered dangerous advice, made up information, and in some cases reportedly threatened users.

RELATED ARTICLE: AI (Artificial Intelligence) Bot GPT-3 Finished a 500-Word Academic Thesis

Check out more news and information on Technology in Science Times.