Nikon, Canon, Sony Create Web Tool 'Verify' To Fight AI-Generated Images, Deepfakes
(Photo: Pexels/Donald Tong)

Camera companies are working together to fight AI-generated images such as deepfakes. Part of their plan is to embed digital signatures in photos.

Nikon, Sony, Canon Against Deepfakes

Camera makers are already taking steps against images generated with artificial intelligence (AI). Nikon, for instance, intends to begin selling mirrorless cameras with integrated authentication technology to professional photographers and photojournalists. The system will embed tamper-resistant digital signatures carrying details such as a photo's location and photographer, together with the date and time it was taken.

A consortium of news outlets, manufacturers of cameras, and technology firms has collaborated to develop Verify, a free online tool for confirming the legitimacy of photos.

The website will show the date, location, and further credentials for the image if it has a digital signature.
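The idea behind such a signature can be sketched in a few lines of Python. This is a simplified illustration, not the cameras' actual scheme: it uses an HMAC with a made-up shared key as a stand-in for the asymmetric private-key signature a real camera would compute in hardware, and the image bytes and metadata fields are invented for the example. Any edit to the image or its metadata invalidates the signature, which is what lets a tool like Verify flag tampering.

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration only; a real camera would sign with a
# private key embedded in hardware and publish the matching public key.
CAMERA_KEY = b"demo-secret-key"

def sign_photo(image_bytes: bytes, metadata: dict) -> str:
    """Return a tamper-evident signature over the image and its metadata."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """Recompute the signature and compare it in constant time."""
    expected = sign_photo(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

image = b"\x89PNG...raw pixel data..."
meta = {"photographer": "A. Example", "date": "2024-01-05", "location": "Tokyo"}
sig = sign_photo(image, meta)

print(verify_photo(image, meta, sig))   # True: image and metadata untouched
meta["date"] = "2023-12-31"
print(verify_photo(image, meta, sig))   # False: metadata was edited
```

A verification site only needs the image, its metadata, and the signature; it never has to trust the uploader, because any mismatch is detectable.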

Sony, Canon, and Nikon have now embraced the authentication technique. This year, Sony will update its professional-grade mirrorless cameras with firmware that adds digital signatures.

The company also intends to increase the number of compatible camera types and encourage others to follow suit. As early as next year, Canon wants to offer a camera incorporating this technology. This past October, the Associated Press and Sony tested the technology.

Other firms are investigating ways to detect AI-generated photos and distinguish them from genuine images. Intel created a method last year that assesses whether an image is authentic by examining variations in skin tone. In August, Google also unveiled a technology that lets users invisibly watermark AI-generated photos.

The sheer prevalence of AI images worldwide makes verifying legitimacy harder. Earlier this year, a widely used free AI image detector incorrectly flagged a picture of a baby slain in Hamas' most recent attack on Israel as AI-generated, even though it was probably real.


How Can You Distinguish Deepfake Content?

Deepfake AI can produce convincing audio, video, and image forgeries. The term, a blend of "deep learning" and "fake," refers both to the technology and to the bogus content it produces.

Deepfakes employ two algorithms, a generator and a discriminator, to create and refine bogus material. The generator produces the initial fake digital content from a training data set based on the desired output, while the discriminator assesses how realistic or fake that version looks. Repeating this process lets the discriminator spot flaws more effectively and the generator correct them, so the output grows progressively more convincing.
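The adversarial loop above can be sketched with a deliberately tiny toy example. This is a minimal one-dimensional stand-in for a real deepfake network (which would use deep neural networks on images): here the "real" data are numbers clustered around 4.0, the generator is a single learned offset, and the discriminator is a one-parameter logistic classifier. All numbers and learning rates are made up for the illustration.

```python
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    """Numerically safe logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

REAL_MEAN, NOISE_STD = 4.0, 0.5   # "real" samples cluster around 4.0
w, b = 0.0, 0.0                   # discriminator: D(x) = sigmoid(w*x + b)
mu = 0.0                          # generator: fake sample = mu + noise
LR_D, LR_G = 0.05, 0.05

for step in range(5000):
    real = random.gauss(REAL_MEAN, NOISE_STD)
    fake = mu + random.gauss(0.0, NOISE_STD)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real, p_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += LR_D * ((1 - p_real) * real - p_fake * fake)
    b += LR_D * ((1 - p_real) - p_fake)

    # Generator step: nudge mu so the discriminator scores fakes as real.
    p_fake = sigmoid(w * fake + b)
    mu += LR_G * (1 - p_fake) * w

print(f"generator mean after training: {mu:.2f}")  # drifts toward 4.0
```

After training, the generator's output distribution has drifted toward the real one, exactly the dynamic that makes deepfakes progressively harder to tell apart from authentic footage.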

Fortunately, there are ways to distinguish deepfake content from the real thing. First, consider the length of the video: deepfakes are usually brief and sourced from press clips or social media footage. The audio also tends to be off, since copying a person's accent and pitch is difficult, so culprits often add music or drop the audio entirely.

It also helps to watch the eyes: in deepfakes, eye movement tends to be robotic, which is why creators often use motionless subjects. Lighting, shadows, and skin tones are likewise frequently inconsistent in AI-generated images and videos.


Check out more news and information on Technology in Science Times.