Microsoft Launches Deepfake Detection Tool As U.S. Election Misinformation Goes Nuclear

The proliferation of 'fake news' on social media is not just annoying; it has the potential to influence elections and policies, depending on the subject matter and how believable the content is. In an effort to weed out misleading content and "combat disinformation," Microsoft is launching its Video Authenticator tool to detect so-called deepfakes.

A 'deepfake' is a type of synthetic media—photos, videos, or audio files—that has been manipulated by artificial intelligence. Some deepfakes are creepily obvious, but others can be hard to spot, making it seem as though a person said or did something when in reality it is all computer generated.

This obviously has implications for elections, hence the timing of the tool's release—it falls just ahead of the upcoming United States presidential election.

"In the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes... Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it can provide this percentage in real-time on each frame as the video plays," Microsoft explains.
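Microsoft has not published the internals of Video Authenticator, but the per-frame confidence readout it describes can be sketched in broad strokes. The snippet below is a hypothetical illustration: `score_frame` stands in for a real trained classifier, here using a toy color-saturation heuristic purely so the example runs end to end.

```python
# Hypothetical sketch of a per-frame "manipulation confidence" pipeline.
# score_frame is a placeholder for a real trained detector; this toy
# version treats washed-out (low-saturation) frames as suspicious.

def score_frame(frame):
    """Return a 0-100 confidence that one frame is manipulated.

    `frame` is a list of (r, g, b) pixel tuples. A real detector would
    run a model here; this stand-in just measures color saturation.
    """
    if not frame:
        return 0.0
    saturation = [max(p) - min(p) for p in frame]
    avg = sum(saturation) / len(saturation)
    # The grayer the frame on average, the higher the toy score.
    return round(100.0 * (1.0 - avg / 255.0), 1)

def analyze_video(frames):
    """Yield (frame_index, confidence) pairs, mimicking a live readout."""
    for i, frame in enumerate(frames):
        yield i, score_frame(frame)

# Example: one vividly colored frame, one near-grayscale frame.
colorful = [(255, 0, 0), (0, 255, 0)]
grayish = [(128, 128, 130), (100, 102, 100)]
for idx, conf in analyze_video([colorful, grayish]):
    print(f"frame {idx}: {conf}% likely manipulated")
```

The real tool presumably runs a neural network per frame; the shape of the output—a rolling percentage as the video plays—is the part this sketch mirrors.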

Steve Buscemi and Jennifer Lawrence Deepfake

Part of the magic happens by detecting the blending boundary of the deepfake and looking for subtle fading or grayscale elements that could go unnoticed by the naked eye. It is the culmination of a joint effort by Microsoft Research, Microsoft's Responsible AI team, and the Microsoft AI, Ethics and Effects in Engineering and Research Committee.
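To make the "blending boundary" idea concrete: when a synthesized face is composited onto a real frame, the seam can show a narrow band of desaturated pixels. The toy example below (not Microsoft's actual method) flags pixels in a row whose saturation drops sharply relative to both neighbors.

```python
# Toy illustration of spotting a "blending seam": a desaturated pixel
# sandwiched between vivid ones may hint at a compositing boundary.
# This is an assumption-laden sketch, not Video Authenticator's algorithm.

def saturation(pixel):
    """Crude saturation proxy: spread between strongest and weakest channel."""
    return max(pixel) - min(pixel)

def find_seam(row, threshold=10):
    """Return indices of pixels far less saturated than both neighbors."""
    seam = []
    for i in range(1, len(row) - 1):
        if (saturation(row[i]) < threshold
                and saturation(row[i - 1]) >= threshold
                and saturation(row[i + 1]) >= threshold):
            seam.append(i)
    return seam

# A row of vivid pixels with one gray "seam" pixel in the middle:
row = [(200, 40, 40), (40, 200, 40), (120, 121, 122), (40, 40, 200), (200, 200, 40)]
print(find_seam(row))  # the gray pixel at index 2 is flagged
```

A production detector would learn these boundary cues from data rather than hard-code a threshold, but the intuition—grayscale artifacts at the blend line—is the same one Microsoft describes.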

Microsoft says it developed the Video Authenticator tool using the public FaceForensics++ dataset, and tested the tool on the DeepFake Detection Challenge Dataset.

"We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods," Microsoft acknowledges.

Microsoft also announced a second technology with two parts, both aimed at sifting out fake news. One is an authentication tool built into Microsoft Azure that lets a content producer attach digital hashes and certificates to a piece of content; the other is a reader, which can be built into a browser extension, that checks those certificates and matches the hashes. This allows the reader to know with a high degree of accuracy whether the content they are viewing is authentic, as well as details about who made it.
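Microsoft has not detailed the reader's implementation, but the hash-matching half of the scheme can be sketched with Python's standard `hashlib`. The certificate side is simplified away here: the "manifest" below is just a trusted hash recorded at publish time, and the choice of SHA-256 is an assumption for illustration.

```python
import hashlib

# Simplified sketch of hash-based content verification. The real system
# also involves certificates issued through Azure; this sketch keeps only
# the core idea: content is authentic if its hash still matches the one
# recorded when it was produced.

def fingerprint(content: bytes) -> str:
    """Hash the content. SHA-256 is an assumption, not a confirmed detail."""
    return hashlib.sha256(content).hexdigest()

def publish(content: bytes) -> dict:
    """Producer side: record a manifest alongside the content."""
    return {"hash": fingerprint(content)}

def verify(content: bytes, manifest: dict) -> bool:
    """Reader side: the content passes only if its hash is unchanged."""
    return fingerprint(content) == manifest["hash"]

original = b"breaking news footage"
manifest = publish(original)
print(verify(original, manifest))                  # True: untouched
print(verify(b"doctored news footage", manifest))  # False: content changed
```

Even a one-byte edit to the content produces a completely different hash, which is what lets the reader detect tampering with high confidence.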