(Photo: Wikimedia Commons/Ministerie van Buitenlandse Zaken)
New AI Image Generation Tool InstantID Raises Significant Deepfake Concerns, Expert Warns

Artificial intelligence has made image and video generation faster and easier. A new AI-powered tool takes this to the next level, but an expert has warned of its potential downside.

New AI Image Generation Method InstantID

A new tool called InstantID can generate new images of a person from a single reference image. Reuven Cohen, an enterprise AI consultant for Fortune 500 organizations, described InstantID as a "new state-of-the-art" technology. Despite its legitimate uses, he warned VentureBeat that tools like it could bring deepfake image, audio, and video capabilities within easy reach in time for the 2024 election.

"The use of tools like InstantID for deepfakes raises significant concerns due to the ease of creation and consistency of output with no training or fine-tuning required," he said.

"InstantID's ability to efficiently generate identity-preserving content can lead to the creation of highly realistic and convincing deep fakes with no GPU and little CPU resources required."

The expert added that InstantID's core function has less to do with fine-tuning models and more with preserving identity characteristics in generated content. To make his point, he noted that former US President Donald Trump always looks like Donald Trump in the output. Furthermore, he warned that it is now straightforward to rapidly prompt-engineer a deepfake.

"It only takes one click to deploy this on Hugging Face or replicate," he said.

According to Cohen, InstantID is a tool for zero-shot identity-preserving generation, which sets it apart from fine-tuning techniques such as LoRA and QLoRA. QLoRA builds upon LoRA by first quantizing the model's weights to lower precision, which further reduces the resources required for fine-tuning; it has remained one of the leading techniques for fine-tuning large language models (LLMs).
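
To make the contrast concrete, here is a minimal sketch of a QLoRA-style setup using Hugging Face's transformers and peft libraries; the model identifier, rank, and target modules are placeholder choices, not values from the article. The base weights are first quantized to 4-bit (the "Q"), then small low-rank adapters (the "LoRA") are attached as the only trainable parameters.

```python
# Hedged sketch of a QLoRA setup with transformers + peft + bitsandbytes.
# Model name, rank, and target modules are placeholder choices.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Step 1 (the "Q" in QLoRA): quantize the frozen base weights to 4-bit,
# shrinking the memory footprint of fine-tuning.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Step 2 (LoRA): attach small trainable low-rank adapters; only these
# adapter weights are updated during training.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically under 1% of total parameters
```

Even with quantization, this route still requires a dataset and a GPU training run for every new subject; InstantID's zero-shot approach skips that step entirely, which is what makes its ease of misuse notable.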

Deepfake AI Could Undermine National Security

Dr. Tim Stevens, director of the Cybersecurity Research Group at King's College London, said deepfake AI can produce lifelike photos and videos that threaten national security and democratic institutions. The tool's availability, he noted, could be exploited by states such as Russia to further their foreign policy goals and undermine other nations' security.

He pointed out that while deepfakes are unlikely to be effective in high-level defense and interstate conflict, they could be used to influence democratic institutions and the media, and autocracies such as Russia could exploit them to erode public confidence in those groups and institutions.

There are five things to consider when distinguishing AI-generated content from real footage: the length of the video, inaccurate audio, unnatural eye movements, obvious anomalies, and a reverse search on Google Images.

The length of the video is the easiest way to tell a real clip from deepfake footage. Producing convincing deepfake video requires an AI system trained for a long time, and since press clips and social media videos are the primary source material, most deepfakes are short.

Another helpful trick for telling a real video from a deepfake is to watch the subject's eye movements. Accurately simulating genuine eye movements is challenging for AI algorithms, so look for odd blinking or unnatural eye motion; in many deepfakes the eyes appear slightly robotic, which is why individuals in such videos often hold unusually still.
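
As a rough illustration of how the eye-movement check might be automated, the sketch below counts blinks by computing the eye aspect ratio (EAR) from MediaPipe face-mesh landmarks. The landmark indices and the 0.2 threshold are common heuristics assumed here for illustration, and an unusually low blink count is only a weak signal, not proof of a deepfake.

```python
# Heuristic blink counter: eye aspect ratio (EAR) from MediaPipe face-mesh
# landmarks. Indices and threshold are common heuristics, not from the article.
import cv2
import mediapipe as mp

# Six landmarks around the left eye in MediaPipe's 468-point face mesh:
# two corners plus two vertical pairs.
LEFT_EYE = [33, 160, 158, 133, 153, 144]
EAR_THRESHOLD = 0.2  # below this, the eye is treated as closed

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def eye_aspect_ratio(pts):
    """EAR = (sum of two vertical eye distances) / (2 * horizontal distance)."""
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(video_path):
    mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # no face found in this frame
        lm = result.multi_face_landmarks[0].landmark
        pts = [(lm[i].x, lm[i].y) for i in LEFT_EYE]
        if eye_aspect_ratio(pts) < EAR_THRESHOLD:
            if not eye_closed:
                blinks += 1      # falling edge: the eye just closed
            eye_closed = True
        else:
            eye_closed = False
    cap.release()
    return blinks  # humans typically blink around 15-20 times per minute

print(count_blinks("clip.mp4"))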

If all else fails, take a screenshot and run it through Google Images' "search by image" feature to check whether the frame already exists elsewhere. If it matches existing footage that has been altered, the video is likely a deepfake.
