Cornell Researchers Develop An Invisible Light-Based Watermark To Combat Deepfakes
According to the researchers, the tool uses coded light sources to embed a hidden watermark in any video recorded under them. Abe Davis, assistant professor at Cornell, explains how a fake video can be identified using this method: "Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos." When a video is manipulated, he added, the altered parts contradict what is in the code videos, making it easy to pinpoint where the changes were made.
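The article does not include implementation details, but the underlying idea can be sketched: a light source flickers with a pseudorandom code too subtle for viewers to notice, correlating each pixel's brightness over time against that code recovers a low-fidelity code image of what the light actually illuminated, and spliced-in content fails that correlation. The following Python sketch illustrates the concept under toy assumptions; the ±1 code, 2% modulation amplitude, static scene, noise level, and detection threshold are all hypothetical choices for illustration, not details of the Cornell system.

```python
import numpy as np

rng = np.random.default_rng(0)

# All parameters below are hypothetical stand-ins; the published
# system's code design, modulation strength, and detection
# thresholds are not described in the article.
T, H, W = 240, 32, 32        # frames, height, width
amplitude = 0.02             # subtle brightness modulation (~2%)

# Pseudorandom +/-1 sequence driving the light's hidden flicker.
code = rng.choice([-1.0, 1.0], size=T)

# A static scene lit by the coded light: every frame's brightness
# tracks the code, plus a little sensor noise.
scene = rng.uniform(0.2, 0.8, size=(H, W))
video = scene[None, :, :] * (1.0 + amplitude * code[:, None, None])
video += rng.normal(0.0, 0.005, size=video.shape)

# Fake an edit: paste content into a region for the second half of
# the video. The pasted pixels were never lit by the coded light,
# so they carry no trace of the code.
tampered = video.copy()
tampered[T // 2:, 8:20, 8:20] = rng.uniform(0.2, 0.8, size=(12, 12))

def recover_code_image(frames, code):
    """Correlate each pixel's time series with the code to recover a
    low-fidelity image of what the coded light actually illuminated."""
    zero_mean = frames - frames.mean(axis=0, keepdims=True)
    return np.einsum('t,thw->hw', code, zero_mean) / len(code)

def check(frames, code, threshold=1e-3):
    """Recover code images for each half of the video and flag pixels
    where the light code was present early but vanished later."""
    half = len(code) // 2
    early = np.abs(recover_code_image(frames[:half], code[:half]))
    late = np.abs(recover_code_image(frames[half:], code[half:]))
    return (early > threshold) & (late <= threshold)

for name, vid in (('original', video), ('tampered', tampered)):
    flagged = check(vid, code)
    print(f'{name}: {flagged.sum()} pixels lost the light code')
```

In this toy setup, the pasted region shows up as pixels where the code was recoverable in the first half of the video but absent in the second, mirroring Davis's point that altered parts contradict the code videos.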
This development comes amid a growing trend of artificial intelligence abuse. For instance, we reported a wave of sophisticated cyberattacks in which malicious actors used deepfakes to impersonate company executives on Zoom calls. To curb abuse of its AI models, Google has revealed that all media content created with its new generative AI tool will bear a SynthID digital watermark. While no single tool can completely prevent malicious actors from exploiting AI, the hope is that combined efforts from various players will ultimately curb misinformation.
Top image credit: Abe Davis's research