2 Sources
[1]
Flickering lights could help fight misinformation
A group of Cornell computer scientists has unveiled what they believe could be a new tool in the fight against AI-generated video, deepfakes and doctored clips. The watermarking technique, called "noise-coded illumination," hides verification data in light itself to help investigators spot doctored videos. The approach, devised by Peter Michael, Zekun Hao, Serge Belongie and assistant professor Abe Davis, was published in the June 27 issue of ACM Transactions on Graphics and will be presented by Michael at SIGGRAPH on August 10.

The system adds a barely perceptible flicker to light sources in a scene. Viewers cannot detect the pseudo-random pattern, but cameras record it, and each lamp or screen that flickers carries its own unique code.

As an example, imagine a press conference filmed in the White House briefing room, where the studio lights have been programmed to flicker with unique codes. If a viral clip from that press conference later circulates with what appears to be an inflammatory statement, investigators can run it through a decoder and, by checking whether the recorded light codes line up, determine whether the footage was doctored.

"Each watermark carries a low-fidelity, time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos," said Abe Davis, assistant professor of computer science at Cornell. "When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations."

While the scientists acknowledge that rapid motion and strong sunlight can hinder the technique's efficacy, they are bullish on its utility in settings like conference-room presentations, television interviews and lecture-hall speeches.
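Neither article includes the researchers' actual implementation, so the following is only a minimal sketch of the encoding side in Python/NumPy. Every parameter here is an assumption for illustration (the 240 Hz chip rate, the roughly 2% brightness swing, the binary ±1 code), not a detail from the paper: a pseudo-random sequence, seeded by the light's secret code, becomes a dimming schedule that a screen or a small driver chip could play back.

```python
import numpy as np

# Hypothetical encoding sketch: perturb a light's brightness with a
# pseudo-random code small and fast enough to go unnoticed by viewers.
rng = np.random.default_rng(seed=42)   # the seed stands in for the light's secret code
chip_rate_hz = 240                     # modulation rate (assumed)
duration_s = 10
amplitude = 0.02                       # ~2% brightness swing (assumed, near-imperceptible)

# Pseudo-random binary code, one chip per modulation interval.
code = rng.choice([-1.0, 1.0], size=chip_rate_hz * duration_s)

base_brightness = 0.8                  # nominal light output, normalized
dimming_schedule = base_brightness * (1.0 + amplitude * code)

# A programmable screen, or a postage-stamp-sized chip attached to an
# off-the-shelf lamp, would drive the light with this schedule.
print(dimming_schedule[:8])
```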
[2]
Hiding secret codes in light can protect against fake videos
Fact-checkers may have a new tool in the fight against misinformation. A team of Cornell researchers has developed a way to "watermark" light in videos, which they can use to detect whether a video is fake or has been manipulated.

The idea is to hide information in nearly invisible fluctuations of lighting at important events and locations, such as interviews and press conferences, or even entire buildings, like the United Nations Headquarters. These fluctuations are designed to go unnoticed by humans but are recorded as a hidden watermark in any video captured under the special lighting, which could be programmed into computer screens, photography lamps and built-in lighting. Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal any malicious editing.

Peter Michael, a graduate student in the field of computer science who led the work, will present the study, "Noise-Coded Illumination for Forensic and Photometric Video," on Aug. 10 at SIGGRAPH 2025 in Vancouver, British Columbia.

Editing video footage in a misleading way is nothing new. But with generative AI and social media, it is faster and easier than ever to spread misinformation.

"Video used to be treated as a source of truth, but that's no longer an assumption we can make," said Abe Davis, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, who first conceived of the idea. "Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it's only getting harder to tell what's real."

To address these concerns, researchers had previously designed techniques to watermark digital video files directly, with tiny changes to specific pixels that can be used to identify unmanipulated footage or tell if a video was created by AI. However, these approaches depend on the video creator using a specific camera or AI model -- a level of compliance that may be unrealistic to expect from potential bad actors. By embedding the code in the lighting, the new method ensures that any real video of the subject contains the secret watermark, regardless of who captured it.

The team showed that programmable light sources, like computer screens and certain types of room lighting, can be coded with a small piece of software, while older lights, like many off-the-shelf lamps, can be coded by attaching a small computer chip about the size of a postage stamp. The program on the chip varies the brightness of the light according to the secret code.

So, what secret information is hidden in these watermarks, and how does it reveal when video is fake? "Each watermark carries a low-fidelity, time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos," Davis said. "When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations."

Part of the challenge in this work was making the code largely imperceptible to humans. "We used studies from the human perception literature to inform our design of the coded light," Michael said. "The code is also designed to look like random variations that already occur in light, called 'noise,' which also makes it difficult to detect unless you know the secret code."
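None of the detection code appears in the articles either, but a toy sketch can illustrate why the flicker is invisible without the key yet recoverable with it. The snippet below (again hypothetical magnitudes, and a plain correlation standing in for the paper's actual decoder) correlates a simulated per-frame brightness trace against the secret code: with the right code the watermark amplitude emerges from the noise, while a wrong code returns roughly zero.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_frames = 2400

# Balanced +/-1 code (equal counts of each), so it averages to exactly
# zero and the scene's baseline brightness can't leak into the correlation.
code = rng.permutation(np.repeat([-1.0, 1.0], n_frames // 2))

# Simulated brightness of one pixel over time: a slowly varying scene,
# the tiny coded flicker, and sensor noise (all magnitudes hypothetical).
scene = 0.8 + 0.05 * np.sin(np.linspace(0.0, 20.0, n_frames))
observed = scene + 0.02 * code + rng.normal(0.0, 0.01, n_frames)

# Correlating against the known code isolates the watermark; to anyone
# without the code, the flicker is indistinguishable from sensor noise.
with_key = np.mean(observed * code)                    # ~0.02 if footage is genuine
wrong_key = np.mean(observed * rng.permutation(code))  # ~0 for the wrong code

print(f"correlation with secret code: {with_key:.4f}")
print(f"correlation with wrong code:  {wrong_key:.4f}")
```

Running this per pixel over sliding windows of time is, loosely, how one could assemble the low-fidelity "code video" the researchers describe: regions whose pixels stop correlating with the code are regions that no longer match the original lighting.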
If an adversary cuts out footage, such as from an interview or political speech, a forensic analyst with the secret code can see the gaps. And if the adversary adds or replaces objects, the altered parts generally appear black in recovered code videos.

The team has successfully used up to three separate codes for different lights in the same scene. With each additional code, the patterns become more complicated and harder to fake. "Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder," Davis said. "Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other."

They have also verified that the approach works in some outdoor settings and on people with different skin tones. Davis and Michael caution, however, that the fight against misinformation is an arms race, and adversaries will continue to devise new ways to deceive. "This is an important ongoing problem," Davis said. "It's not going to go away, and in fact, it's only going to get harder."
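To make the gap-detection claim concrete, here is one more toy sketch (a simplification with made-up sizes, not the authors' decoder). Because the code is time-stamped, each window of genuine footage matches one specific offset in the event's secret code, so a splice shows up as a jump in the sequence of recovered offsets.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
full_code = rng.choice([-1.0, 1.0], size=5000)  # secret code for the whole event

# Simulate an adversarial cut: frames 1000-1999 of a recording are removed.
recording = np.concatenate([full_code[:1000], full_code[2000:3000]])
recording = recording + rng.normal(0.0, 0.1, recording.size)  # camera noise

def locate(window, code):
    """Return the offset in the secret code that best matches a window,
    via cross-correlation (a simple stand-in for time-stamp decoding)."""
    scores = [np.dot(window, code[i:i + window.size])
              for i in range(code.size - window.size)]
    return int(np.argmax(scores))

# Decode the apparent timestamp of successive 100-frame windows.
win = 100
offsets = [locate(recording[s:s + win], full_code)
           for s in range(0, recording.size - win, win)]

# Genuine footage yields offsets that advance by exactly `win` each
# window; a cut shows up as a larger jump.
jumps = np.diff(offsets)
print(offsets)
print("suspicious jumps at windows:", np.where(jumps != win)[0])
```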
Cornell computer scientists have created a new watermarking technique called "noise-coded illumination" that hides verification data in light to help detect doctored videos and combat AI-generated deepfakes.
In a groundbreaking development, Cornell University researchers have unveiled a novel approach to combat the growing threat of AI-generated deepfakes and video manipulation. The technique, dubbed "noise-coded illumination," embeds verification data directly into light sources, creating a powerful tool for detecting doctored videos [1].
The system introduces a barely perceptible flicker to light sources in a scene, each carrying a unique code. While invisible to the human eye, these pseudo-random patterns are captured by cameras. Peter Michael, Zekun Hao, Serge Belongie, and assistant professor Abe Davis, the minds behind this innovation, explain that each watermark contains a low-fidelity, time-stamped version of the unmanipulated video under slightly different lighting conditions [2].
Imagine a White House press conference where studio lights are programmed with unique flicker codes. If a manipulated clip from this event circulates later, investigators can run it through a decoder to check if the recorded light codes align, thereby determining the footage's authenticity [1].
Unlike previous digital watermarking techniques that modify specific pixels in video files, this method embeds the code in the lighting itself. This ensures that any real video of the subject contains the secret watermark, regardless of who captured it or what camera was used [2].
The team has demonstrated that programmable light sources, such as computer screens and certain types of room lighting, can be coded with a small software program. For older lights, including many off-the-shelf lamps, a small computer chip about the size of a postage stamp can be attached to vary the brightness according to the secret code [2].
While promising, the researchers acknowledge that rapid motion and strong sunlight can hinder the technique's efficacy. However, they remain optimistic about its utility in controlled environments such as conference rooms, television interviews, and lecture halls [1].
As the fight against misinformation intensifies, this approach gives fact-checkers and investigators a new tool. However, Abe Davis cautions that the battle against deception is an ongoing arms race, with adversaries continually devising new methods to deceive [2].
The research will be presented at SIGGRAPH 2025 in Vancouver, British Columbia, marking a significant step forward in the ongoing effort to maintain the integrity of visual information in the age of AI and deepfakes [2].
Summarized by Navi