Facebook Bans Deepfakes From Its Platform As 2020 Election Season Heats Up
Facebook announced it plans to crack down on the proliferation of video content that has been manipulated to appear authentic while being intentionally misleading. Also known as "deepfakes," these kinds of videos use artificial intelligence and machine learning techniques to superimpose content onto a video clip.
The timing of Facebook's decision to beef up its enforcement of misleading videos comes during the early part of a US presidential election year. This is the fourth year of US President Donald Trump serving as the commander in chief, and barring an unlikely removal through an impeachment trial in the Republican-led Senate, he will seek re-election.
It is important to note that Facebook is not taking a political stance in favor of one party over another, but seeking to mitigate the effect of fake news, and deepfake videos in particular. Facebook says deepfake videos intended to mislead people are rare, but also "present a significant challenge for our industry and society as their use increases."
According to Facebook, it is not going it alone, but is formulating policies and detection criteria through conversations with more than 50 global experts with technical, policy, media, legal, civic, and academic backgrounds. Through those conversations and policies, Facebook has decided to remove deepfake videos from its platform that meet the following criteria:
- It has been edited or synthesized—beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
This is not a wholesale ban of deepfake videos, as parodies, satires, and other seemingly benign manipulations (like deepfake celebrity impressions) are still allowed.
While Facebook has drawn a hard line between what is and is not acceptable, it will also review deepfake content that straddles that line. In other words, just because a deepfake does not meet the above criteria, it does not necessarily mean Facebook won't take some kind of action. Instead, those kinds of videos will be subject to review by one of Facebook's independent third-party fact checkers.
"If a photo or video is rated false or partly false by a fact checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false," Facebook says.
Facebook's reasoning is that removing these kinds of videos outright has little impact on the problem at large, as they remain viewable elsewhere around the web. However, by permitting them to stay up and labeling them as false, the social network hopes users will be better informed when they encounter such content.
We will have to see how this plays out. In theory, it means that deepfakes like the one mashing up Steve Buscemi's face with Jennifer Lawrence will still find their way to Facebook, while ones intended to sway voters against an election candidate through misleading means will either be removed or labeled as false.