Tech pioneer Microsoft has launched a new tool to help detect "deepfake" photos and videos, which have become an increasingly common vehicle for online disinformation.
Deepfake, a portmanteau of the terms "deep learning" and "fake," refers to synthetic photos, videos, or audio that have been altered using AI or deep learning technology, creating hard-to-detect media that can fool viewers. According to a Microsoft-supported study, the technology has been used in influence campaigns. Microsoft announced its new tech in a press release dated September 1.
From Misinformation to Defamation
A research effort led by Professor Jacob Shapiro of Princeton University, conducted with support from Microsoft, cataloged 96 distinct foreign influence campaigns that targeted 30 countries between 2013 and 2019.
These social media campaigns worked by defaming notable people, persuading public opinion, and polarizing debates. Of all recorded instances, 26 percent of the misinformation campaigns targeted the United States. Other countries targeted by the malicious efforts include Australia, Canada, Germany, France, the Netherlands, Saudi Arabia, the United Kingdom, Armenia, Brazil, Poland, South Africa, Taiwan, Ukraine, and Yemen.
Also, according to the Microsoft-supported study, about three in five foreign influence efforts (FIEs) employ all three core approaches - create, amplify, and distort - as do four in five domestic influence efforts (DIEs). Breaking it down: creating original misinformation content is used in 93 percent of FIEs and 90 percent of DIEs; amplifying existing media in 74 percent of FIEs and 95 percent of DIEs; and distorting verifiable facts in 74 percent of FIEs and 90 percent of DIEs.
The most common platforms used in foreign influence efforts include Twitter, Facebook, and news outlets. Twitter appeared in 86 percent of FIEs and 75 percent of DIEs, while Facebook accounted for 70 percent of foreign efforts and 79 percent of domestic efforts.
The study also observed that "inauthentic Chinese social media accounts" were promoting pro-China narratives on international issues, including Taiwan relations, the response to the global coronavirus pandemic, and even the George Floyd protests in the U.S.
The Microsoft Video Authenticator
To combat the growing prevalence of misinformation, Microsoft has developed two technologies, each targeting a specific aspect of the problem.
On the deepfakes front, Microsoft introduced the Microsoft Video Authenticator, which analyzes photos and videos and generates a confidence score indicating how likely it is that the media has been artificially manipulated.
For videos, the Video Authenticator provides a real-time confidence score as it analyzes the sample frame by frame. The tool detects the blending boundary of AI-assisted manipulation, as well as subtle fading and greyscale elements that might pass unnoticed by the human eye.
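The frame-by-frame workflow described above can be sketched in a few lines. The Video Authenticator itself is not publicly available, so both the `score_video` helper and the `toy_detector` heuristic below are hypothetical stand-ins, not Microsoft's actual method:

```python
from typing import Callable, Iterable, List

# Each "frame" here is just a flat list of pixel intensities (0-255),
# standing in for a decoded video frame.
Frame = List[int]

def score_video(frames: Iterable[Frame],
                detector: Callable[[Frame], float]) -> List[float]:
    """Run a per-frame manipulation detector over a video,
    returning one confidence score (0.0-1.0) per frame."""
    return [detector(frame) for frame in frames]

def toy_detector(frame: Frame) -> float:
    """Placeholder heuristic: flag frames with suspiciously low pixel
    variance, loosely mimicking the subtle fading/greyscale cues the
    article mentions. A real detector would use a trained model."""
    mean = sum(frame) / len(frame)
    variance = sum((p - mean) ** 2 for p in frame) / len(frame)
    return 1.0 if variance < 10 else 0.0
```

A caller would feed decoded frames through `score_video` and, for example, flag the video if any frame's score crosses a threshold.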
Microsoft Research initially developed the tech, along with its Responsible AI team and its AI, Ethics, and Effects in Engineering and Research (AETHER) Committee - Microsoft's advisory board that ensures responsible development of its technology.
The Microsoft Video Authenticator was developed using the public FaceForensics++ dataset and tested on the DeepFake Detection Challenge Dataset. Both are leading benchmarks for deepfake detection technologies.
The other technology comes in two parts: a generator of digital hashes and certificates, and a reader that checks them. The first half is a Microsoft Azure tool that lets content creators add digital hashes and credentials to their work. The other half, available as a browser extension or in other forms in the future, reads and verifies those digital hashes and certificates as an authentication measure.
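In outline, the hash half of this scheme works like ordinary content fingerprinting: the producer publishes a hash of the media, and the reader recomputes it to confirm nothing changed in transit. A minimal sketch, assuming SHA-256 as the hashing step (Microsoft has not published its exact scheme); `sign_content` and `verify_content` are hypothetical names:

```python
import hashlib

def sign_content(content: bytes) -> str:
    """Producer side: compute a digital fingerprint of the media bytes.
    In the Azure-based tool, this hash would travel with the content
    as part of its credentials; here we simply return the hex digest."""
    return hashlib.sha256(content).hexdigest()

def verify_content(content: bytes, published_hash: str) -> bool:
    """Reader side: recompute the hash and compare it to the published
    value. A mismatch means the content was altered after signing."""
    return hashlib.sha256(content).hexdigest() == published_hash
```

Note that a hash alone only proves integrity; tying it to a trusted publisher requires the certificate half of the scheme, which the reader also checks.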