For the past four years, OpenAI has been one of the top artificial intelligence (AI) research labs in the world. Alongside Alphabet's DeepMind, another AI heavyweight, it has been known for consistently headline-grabbing research.

According to MIT Technology Review, OpenAI is known for its mission to be the first to create artificial general intelligence (AGI), a machine intelligence that can reason the way a human does. OpenAI has clarified that the aim is not world domination by machines, but ensuring the technology is developed safely and can benefit everyone on the planet.

Researchers have long warned that AGI could run amok, and the narrow AI already present in people's everyday lives has offered a preview of what can go wrong.

Still, OpenAI's systems are far from perfected, and glitches surface from time to time. For example, a recent report from The Verge said that one of OpenAI's models can be fooled by nothing more than a handwritten note stuck to an object.

OpenAI Was Deceived by No More Than Handwritten Notes (Photo: Pixabay)

Pen and Paper Fools OpenAI

The news outlet reported that OpenAI's state-of-the-art computer vision system can be fooled by nothing more sophisticated than pen and paper. As shown in the picture above, a simple handwritten note stuck to an object is enough to trick the model into misidentifying it.

The researchers wrote in a blog post that these are called typographic attacks. They found that photographs of handwritten text can often deceive the model by exploiting its ability to read text.
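
To make the mechanism concrete, here is a minimal sketch of zero-shot classification with OpenAI's open-source CLIP package (github.com/openai/CLIP). The image file and labels are hypothetical, but the sketch shows how the model scores an image against text prompts, which is exactly the channel a written note exploits:

# A minimal sketch of zero-shot classification with OpenAI's open-source
# CLIP package. The image path and labels are hypothetical: imagine
# "apple_with_note.jpg" is a photo of an apple with a handwritten "iPod"
# note stuck to it.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["an apple", "an iPod"]
text = clip.tokenize([f"a photo of {label}" for label in labels]).to(device)
image = preprocess(Image.open("apple_with_note.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    # CLIP scores the image against each text prompt.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.2%}")  # a legible note can flip the top prediction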

These attacks are similar to the "adversarial images" that can trick computer vision systems, but are far easier to produce.
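
For contrast, a classic adversarial image is usually crafted with gradient access to the model, as in the fast gradient sign method (FGSM). In the minimal PyTorch sketch below, the model, image, and label are placeholders, but it shows why that kind of attack takes more than a pen:

# FGSM sketch: crafting a classic adversarial image requires gradients
# from the model, whereas a typographic attack needs only a pen.
# `model`, `image`, and `label` are placeholders for a real classifier,
# an input tensor with values in [0, 1], and its true class index.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp
    # back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()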

Implications for Systems That Rely on Computer Vision

Adversarial or "contradictory" images pose a real danger to systems that rely on computer vision, according to Variety Info.

For instance, researchers have previously demonstrated that adversarial images can trick the software in Tesla's self-driving cars into changing lanes without warning, simply by placing certain stickers on the road.

Moreover, they noted that such attacks could pose a serious threat to many AI applications, such as those in medicine and the military.

"Contradictory images pose a real threat, but this particular example is not too serious," the researchers said.

At least for now, then, typographic attacks give the public little reason for concern.

The system in the experiment is an experimental model known as Contrastive Language-Image Pre-training (CLIP), which learns visual concepts from natural language supervision and is not used in any commercial product.
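
For readers curious what learning visual concepts from natural language means in practice, here is a simplified sketch of CLIP's symmetric contrastive training loss, loosely following the pseudocode in OpenAI's CLIP paper; the encoder outputs are placeholder tensors:

# Simplified sketch of CLIP's symmetric contrastive loss. The inputs are
# placeholder batch embeddings from an image encoder and a text encoder,
# where row i of each tensor comes from the same (image, caption) pair.
import torch
import torch.nn.functional as F

def clip_loss(image_features, text_features, temperature=0.07):
    # L2-normalize so the dot products below are cosine similarities.
    img = F.normalize(image_features, dim=-1)
    txt = F.normalize(text_features, dim=-1)

    # Entry (i, j) scores image i against caption j.
    logits = img @ txt.t() / temperature

    # Matching pairs lie on the diagonal; classify in both directions.
    targets = torch.arange(len(img), device=img.device)
    loss_images = F.cross_entropy(logits, targets)      # image -> caption
    loss_texts = F.cross_entropy(logits.t(), targets)   # caption -> image
    return (loss_images + loss_texts) / 2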

The weakness that makes typographic attacks possible stems from the very nature of CLIP's unusual machine learning architecture: because the model associates images and text so tightly, words written in a scene can override what it actually sees.

Check out more news and information on Artificial Intelligence and OpenAI on Science Times.