Researchers at Adobe and UC Berkeley are collaborating on a method to detect facial manipulations made to digital photos in Photoshop. While development is in the early stages, it is part of a broader effort across Adobe to better detect image, video, audio, and document manipulations in today's landscape of fake news.
Sometimes it is easy to spot a manipulated image. Take the photo at the top of this article, for instance: our knowledge of the world tells us that dogs do not normally dress up in suits and walk around like humans, so it is obviously an altered image. But in other cases, the changes are not so blatant.
That in and of itself is not a problem. It is when altered images and videos are used for nefarious purposes that it becomes an issue. Researchers from Dessa recently demonstrated the risk by cloning Joe Rogan's voice with artificial intelligence and then asking listeners to distinguish real audio clips from AI-generated ones. It's not easy.
The same is true of images and videos, and Adobe says it recognizes the "ethical implications" of its technology.
"Trust in what we see is increasingly important in a world where image editing has become ubiquitous—fake content is a serious and increasingly pressing issue. Adobe is firmly committed to finding the most useful and responsible ways to bring new technologies to life—continually exploring using new technologies, such as artificial intelligence, to increase trust and authority in digital media," Adobe stated in a blog post.
Adobe researchers Richard Zhang and Oliver Wang have been working with UC Berkeley collaborators Sheng-Yu Wang, Dr. Andrew Owens, and Professor Alexei A. Efros on a technology to detect edits. The method involves training a Convolutional Neural Network (CNN), a form of deep learning, on a dataset mixing thousands of images scraped from the Internet with versions that a hired artist had altered.
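To make the idea concrete, here is a minimal, illustrative sketch in Python of the building block a CNN classifier stacks many of: a convolution, a nonlinearity, pooling, and a logistic score. The kernel, weights, and bias here are made-up placeholders, not the researchers' trained model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def score_image(image, kernel, weight, bias):
    """Conv -> ReLU -> global average pool -> sigmoid score in (0, 1)."""
    feat = np.maximum(conv2d(image, kernel), 0.0)           # ReLU
    pooled = feat.mean()                                    # global average pool
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))  # logistic score

# A Laplacian-style kernel responds to high-frequency irregularities, one of
# the low-level cues (such as warping artifacts) a trained network can pick up.
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
image = rng.random((32, 32))  # stand-in for a face crop
p_altered = score_image(image, laplacian, weight=2.0, bias=-1.0)
print(f"P(altered) = {p_altered:.3f}")
```

A real detector learns many such filters and weights from the labeled training pairs rather than using a hand-picked kernel.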
Even though it is in its early stages, the method works fairly well, especially compared to human observation.
"We started by showing image pairs (an original and an alteration) to people who knew that one of the faces was altered," Oliver says. "For this approach to be useful, it should be able to perform significantly better than the human eye at identifying edited faces."
Human eyes successfully distinguished original from altered images around half the time (53 percent), while the neural network tool was nearly perfect (99 percent). In addition, the tool was able to pinpoint the areas and methods of facial warping, and then use that information to revert altered images to their original states.
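The "revert" step can be pictured as inverting a per-pixel warp. The sketch below assumes the displacement field is already known (in the research it would be estimated from the image) and simply resamples the edited image back toward the original; the toy image, flow values, and nearest-neighbor sampling are illustrative only:

```python
import numpy as np

def unwarp(image, flow_x, flow_y):
    """Pull each output pixel from (x + flow_x, y + flow_y) in the input
    image, nearest-neighbor, clamped at the borders."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + flow_x).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + flow_y).astype(int), 0, h - 1)
    return image[src_y, src_x]

# Toy "original": a bright square on a dark background.
original = np.zeros((8, 8))
original[2:6, 2:6] = 1.0

# Apply a toy "edit" that shifts content 3 pixels to the right, then undo it
# with the opposite flow. Border pixels are clamped, so only the interior is
# recovered exactly.
edited = unwarp(original, -3, 0)
recovered = unwarp(edited, 3, 0)
print(np.array_equal(recovered[:, :6], original[:, :6]))  # True
```

The hard part, which the network handles, is estimating that displacement field from a single edited photo rather than being handed it.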
"It might sound impossible because there are so many variations of facial geometry possible," says Professor Alexei A. Efros, UC Berkeley. "But, in this case, because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher level cues such as layout, it seems to work."
While exciting, Richard notes that a "magic universal 'undo' button" for altered images "is still far from reality."
Image Source: SarahRichterArt via Pixabay