Hidden Light Codes Could Be New Weapon Against Deepfakes & Video Manipulation
ITHACA, NY – August 12, 2025 – A novel technique that embeds hidden, imperceptible codes into fluctuating light patterns promises to revolutionize video authentication and combat the growing threat of manipulated media, including deepfakes. Researchers at Cornell University have developed a system in which subtle, human-undetectable flickers in light sources act as a “watermark” recorded by any camera, revealing alterations to the footage. The technology uses an external computer chip to control the brightness of lamps, creating a unique pattern of light fluctuations. This “light code” is akin to an audio watermark used for sound recordings, and it is automatically captured in any video recorded under the illuminated conditions.
“When checking the video later, fact-checkers can use this hidden light code to recognize whether the video was later manipulated,” explains the research team. Discrepancies in the code – interruptions or altered rhythms – instantly flag potential tampering. This allows forensic analysis to pinpoint edits like cuts, additions, or replacements within the video.
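The verification idea described above can be sketched in a few lines of code. This is a minimal illustration only, assuming a simple pseudorandom ±1 flicker code and a correlation-based check; the function names, the modulation depth, and the detection method are hypothetical simplifications, not the researchers' actual implementation (which shapes codes to resemble natural light noise).

```python
import numpy as np

def make_light_code(seed, n_frames):
    """Generate a pseudorandom +/-1 flicker code, one value per video frame.
    (Illustrative; the real system uses codes shaped like natural light noise.)"""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=n_frames)

def embed(baseline_brightness, code, depth=0.01):
    """Modulate a lamp's baseline brightness by an imperceptibly small depth."""
    return baseline_brightness * (1.0 + depth * code)

def verify(frame_means, code):
    """Correlate per-frame average brightness of a video against the known code.
    A score near 1 suggests intact footage; a markedly lower score flags tampering."""
    x = frame_means - frame_means.mean()
    c = code - code.mean()
    return float(np.dot(x, c) / (np.linalg.norm(x) * np.linalg.norm(c) + 1e-12))

# Simulate 240 frames recorded under coded illumination, plus sensor noise.
code = make_light_code(seed=0, n_frames=240)
recorded = embed(np.full(240, 100.0), code) + np.random.default_rng(1).normal(0, 0.2, 240)

# Splice-style tampering (reversing a middle segment) breaks the code's rhythm,
# so its correlation score drops well below the intact footage's score.
tampered = recorded.copy()
tampered[60:180] = tampered[60:180][::-1]
print(verify(recorded, code), verify(tampered, code))
```

In this toy version, intact footage scores close to 1 while the spliced copy scores noticeably lower, mirroring how interruptions or altered rhythms in the real light code flag edits.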
Crucially, the system also poses a challenge to AI-generated deepfakes. Because the flicker code is designed to mimic natural light noise, artificial intelligence attempting to create fake videos will produce random, inconsistent light patterns, immediately revealing the fabrication.
“If someone tries to generate fake videos with AI, the resulting light codes only look like random variations,” says researcher Davis.
The system's security is further enhanced by the potential to use multiple light sources, each with a different “watermark,” making it significantly harder for attackers to forge the code.
Researchers envision deploying the technology in sensitive locations such as press conference venues, interview settings, or even entire buildings like the United Nations headquarters, ensuring any video recorded within these spaces carries the verifiable watermark.
While acknowledging the ongoing “arms race” against misinformation, the team emphasizes that this technology provides a crucial, albeit possibly temporary, advantage in detecting video manipulation. The research was published in ACM Transactions on Graphics (doi: 10.1145/3742892).
Source: Cornell University.