Deepfakes have been garnering attention over the past several years. Many are amazed at how Deepfakes can bring deceased relatives momentarily back to life. However, the promising technology has also been used for villainous purposes.
Deepfakes have been used to insert unsuspecting people into pornography, spread disinformation campaigns, doctor images, and more.
Luckily, scientists have developed a simple tool that can help detect Deepfakes.
What are Deepfakes?
Deepfakes are the 21st-century version of Photoshopping: they use an AI technology called deep learning to fabricate fake events, hence the term "Deepfake."
Deeptrace, an AI firm, found 15,000 Deepfake videos on the internet as of September 2019, 96% of which were pornographic.
Danielle Citron, a law professor at Boston University, tells The Guardian that advancements in Deepfake technology are being weaponized against women.
The term "Deepfake" was coined in 2017 by a Reddit user of the same name, who created an online space for pornographic videos made with face-swapping technology.
Cracking Down on Deepfakes
The pre-print study, entitled "Exposing GAN-Generated Faces Using Inconsistent Corneal Specular Highlights," details an AI tool that provides a simple way of spotting Deepfakes: examining how light reflects in the image's eyes.
Scientists noticed that sophisticated generative adversarial network (GAN) models have evolved drastically and can now synthesize realistic human faces that are difficult to distinguish from real ones.
Hence, researchers from the University at Buffalo tested their tool on portrait-style images and reported 94% accuracy in detecting Deepfaked faces.
The AI exposes Deepfakes by analyzing the corneas in the images, which have a mirror-like surface that produces reflective patterns when light shines on them.
In a real face photographed by a conventional camera, the reflections in both eyes should look similar, because both eyes are viewing the same light environment. Deepfaked images synthesized by GANs, however, commonly fail to reproduce this consistency.
Instead, GAN-synthesized Deepfakes often show discrepancies such as differing geometric shapes and mismatched reflections between the two eyes.
The AI tool analyzes these discrepancies by mapping out a face and examining how light reflects in each eyeball. It then produces a score that acts as a similarity metric: smaller scores indicate a higher probability that the image was Deepfaked.
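The comparison described above can be sketched in a few lines of code. This is a minimal illustration, not the researchers' actual pipeline: it assumes the two eye regions have already been cropped and aligned (the real system also performs face detection and cornea segmentation), and the `highlight_mask` helper and its 0.8 brightness threshold are illustrative choices. The similarity score here is the intersection-over-union (IoU) of the two highlight regions.

```python
import numpy as np

def highlight_mask(eye_crop: np.ndarray, thresh: float = 0.8) -> np.ndarray:
    """Binary mask of the specular highlight: the brightest pixels in the eye crop."""
    return eye_crop >= thresh * eye_crop.max()

def highlight_similarity(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """IoU of the two corneal highlight masks.

    Scores near 1 mean the highlights match, as expected from a real photo;
    scores near 0 flag mismatched reflections, a hint of GAN synthesis.
    """
    a = highlight_mask(left_eye)
    b = highlight_mask(right_eye)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # no highlight detected in either eye
    return float(np.logical_and(a, b).sum() / union)

# Toy example: two 8x8 grayscale eye crops with a bright 2x2 highlight each.
left = np.zeros((8, 8)); left[2:4, 2:4] = 1.0
matched = left.copy()                      # highlight in the same place
shifted = np.zeros((8, 8)); shifted[5:7, 5:7] = 1.0  # highlight elsewhere

print(highlight_similarity(left, matched))  # identical highlights -> 1.0
print(highlight_similarity(left, shifted))  # disjoint highlights -> 0.0
```

Using IoU keeps the score bounded in [0, 1] and insensitive to the absolute brightness of the highlight, which is why region overlap is a natural choice for this kind of comparison.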
The system was effective in detecting Deepfakes from This Person Does Not Exist, a repository of images created with the StyleGAN2 architecture.
However, the AI system has an obvious flaw: it relies on a reflected light source being visible in both eyes. Inconsistencies in the corneal patterns can be fixed manually in post-processing, and if one eye isn't visible in the image, the method won't work.
The tool was also tested only on portrait images; if the face isn't looking toward the camera, the system tends to produce false positives.
Researchers plan to investigate these issues further and improve the effectiveness of the AI system. It cannot yet catch the most sophisticated Deepfakes, but it can spot cruder examples.
Check out more news and information on Tech & Innovation on Science Times.