Cornell Researchers Develop An Invisible Light-Based Watermark To Combat Deepfakes

It's all too common to find edited videos or still images of individuals doing or saying things they never did. If left unchecked, this disturbing trend could have serious consequences. To help combat such misinformation, a group of researchers at the Cornell Ann S. Bowers College of Computing and Information Science has developed a tool they claim can identify fake or manipulated video.

According to the researchers, the tool uses light sources to embed a hidden watermark. Abe Davis, an assistant professor at Cornell, explains how a fake video can be identified with this method: "Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos." He added that when a video is manipulated, the altered parts contradict what is in the code videos, making it easy to pinpoint where the changes were made.
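
To make the mechanism concrete, the toy sketch below is our own illustration, not the Cornell implementation, and all names and parameters in it are assumptions. It shows the general idea in simplified form: a static scene is recorded while the light level is nudged by a known pseudorandom code, correlating each pixel's brightness with that code recovers a low-fidelity reference image, and pixels that were pasted in later carry no trace of the code and stand out.

```python
# Toy illustration of a "coded light" watermark (not the Cornell tool itself).
import numpy as np

rng = np.random.default_rng(0)

frames, h, w = 3000, 32, 32
scene = rng.random((h, w)) * 0.8 + 0.2       # static toy scene, avoid near-black pixels
code = rng.choice([-1.0, 1.0], size=frames)  # pseudorandom +/-1 lighting code
amp = 0.05                                   # small per-frame brightness nudge

# "Record" the scene under the coded light.
video = scene[None] * (1.0 + amp * code[:, None, None])

# Paste in content that was never lit by the coded source (a crude stand-in for an edit).
tampered = video.copy()
tampered[:, 8:16, 8:16] = rng.random((frames, 8, 8))

# Demodulate: correlate each pixel's brightness over time with the known code.
# Genuine pixels return a scaled, low-fidelity image of the scene; edited pixels return ~0.
detrended = tampered - tampered.mean(axis=0)
code_image = np.tensordot(code, detrended, axes=1) / frames

# Flag pixels whose recovered code signal is far weaker than their brightness implies.
threshold = 0.5 * amp * tampered.mean(axis=0)
flagged = np.abs(code_image) < threshold

inside = int(flagged[8:16, 8:16].sum())
outside = int(flagged.sum()) - inside
print(f"flagged {inside}/64 edited pixels, {outside} false alarms elsewhere")
```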

To implement this method, the team explained that coded light sources such as computer screens and certain types of room lighting can be programmed with software to carry the hidden code. Other types of lighting can achieve the same effect when retrofitted with a small chip adapter. A hypothetical driver-side sketch follows.
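
On the light-source side, a minimal driver could be as simple as stepping brightness by an imperceptible pseudorandom amount on each refresh. The sketch below is hypothetical: `set_brightness` is a placeholder for whatever control interface a given screen or bulb actually exposes, and the parameters are illustrative only.

```python
# Hypothetical driver loop for a coded light source (placeholder API, not the Cornell tool).
import time
import numpy as np

def run_coded_light(set_brightness, base=0.8, amp=0.02, fps=60, seconds=10, seed=7):
    """Step a light's brightness by an imperceptible pseudorandom amount each frame."""
    rng = np.random.default_rng(seed)
    for _ in range(int(fps * seconds)):
        step = rng.choice([-1.0, 1.0])            # the same +/-1 code a verifier would know
        set_brightness(base * (1.0 + amp * step))
        time.sleep(1.0 / fps)

if __name__ == "__main__":
    # Example: a stand-in for a real backlight or smart-bulb control call.
    run_coded_light(lambda level: print(f"brightness -> {level:.3f}"), seconds=1)
```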

This research arrives amid growing abuse of artificial intelligence. For instance, we reported a wave of sophisticated cyberattacks in which malicious actors deepfaked company executives on Zoom calls. To curb abuse of its own AI models, Google has revealed that all media content created with its new generative AI tool will bear a SynthID digital watermark. While no single tool can completely prevent malicious actors from exploiting AI, the hope is that combined efforts from various players will ultimately rein in misinformation.

Top image credit: Abe Davis's Research