Google Battles Controversial Deepfakes By Releasing Thousands Of Its Own Deepfakes
How do you defeat “deepfakes”? According to Google, you develop more of them. Google just released a large, free database of deepfake videos to help researchers develop detection tools.
Google collaborated with “Jigsaw”, a tech “incubator” founded by Google, and the FaceForensics benchmark program at the Technical University of Munich and the University Federico II of Naples. They worked with several paid actors to record hundreds of real videos, then used popular deepfake technologies to generate thousands of fake ones.
Researchers will be able to freely access both the fake and real videos to develop “synthetic video detection methods”. It is hoped that these emerging detection tools will be able to prevent harm and misuse. Google also promises to continuously update the database “as deepfake technology evolves over time”.
The new database is part of Google’s efforts to spearhead artificial intelligence (AI) best practices. This past winter, the company also released a database of synthetic speech to help researchers create fake audio detectors. Hoya, iFlytek, and several other institutions and universities also contributed to this large database.
The deepfake database will hopefully help in detecting the upcoming onslaught of deepfake videos. Dr. Hao Li, an associate professor of computer science at the University of Southern California, recently predicted that we are only 6-12 months away from “perfectly real” deepfake videos. He remarked, “Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions.” There are already a number of deepfake videos of political leaders and celebrities that are quite convincing.
Even poorly created deepfake videos and photos can cause a lot of damage. The “DeepNude” app leveraged machine learning and AI to “disrobe” women. The app used a “GAN”, or generative adversarial network, to create realistic images. The images were not real, but they were frightening and invasive. Danielle Citron, a law professor at the Boston University School of Law, told Motherboard, "As a deepfake victim said to me, it felt like thousands saw her naked, she felt her body wasn’t her own anymore.” The creepy app has thankfully been taken down after a wave of social media backlash.
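To make the GAN idea concrete: a generator network tries to produce convincing fakes while a discriminator network tries to tell fakes from real data, and the two are trained against each other. The sketch below is a deliberately tiny, illustrative toy (not DeepNude's model or any production system): a one-parameter-pair generator learning to mimic a 1D Gaussian, with all gradients written out by hand. Every name and number here is an assumption chosen for simplicity.

```python
import numpy as np

# Toy 1D GAN sketch -- purely illustrative, not any real deepfake model.
# Generator:     g(z) = a*z + b        (tries to mimic samples from N(3, 1))
# Discriminator: d(x) = sigmoid(w*x + c)  (tries to tell real from fake)

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05
real_mean, real_std = 3.0, 1.0

for step in range(2000):
    # Sample a minibatch of noise, real data, and generated fakes.
    z = rng.standard_normal(32)
    real = real_mean + real_std * rng.standard_normal(32)
    fake = a * z + b

    # Discriminator ascent on: log d(real) + log(1 - d(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # Generator ascent on: log d(fake)  (the "non-saturating" GAN loss),
    # chaining the gradient through fake = a*z + b.
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean((1 - d_fake) * w * z)
    grad_b = np.mean((1 - d_fake) * w)
    a += lr * grad_a
    b += lr * grad_b

# After training, the generator's samples should drift toward the real
# distribution; convergence of toy GANs like this is not guaranteed.
samples = a * rng.standard_normal(1000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is 3.0)")
```

The adversarial dynamic is the key point: neither network is given an explicit "make it look real" rule; realism emerges because the generator is scored only by its ability to fool the discriminator. Real deepfake systems apply the same idea with deep convolutional networks over face images rather than two scalar parameters.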