Robust Watermarking for Video Forgery Detection with Improved
Imperceptibility and Robustness
- URL: http://arxiv.org/abs/2207.03409v1
- Date: Thu, 7 Jul 2022 16:27:10 GMT
- Title: Robust Watermarking for Video Forgery Detection with Improved
Imperceptibility and Robustness
- Authors: Yangming Zhou, Qichao Ying, Xiangyu Zhang, Zhenxing Qian, Sheng Li and
Xinpeng Zhang
- Abstract summary: This paper proposes a video watermarking network for tampering localization.
We jointly train a 3D-UNet-based watermark embedding network and a decoder that predicts the tampering mask.
Experimental results demonstrate that our method generates watermarked videos with good imperceptibility and robustly and accurately locates tampered areas.
- Score: 30.611167333725408
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Videos are prone to tampering attacks that alter the meaning and deceive the
audience. Previous video forgery detection schemes find tiny clues to locate
the tampered areas. However, attackers can successfully evade supervision by
destroying such clues using video compression or blurring. This paper proposes
a video watermarking network for tampering localization. We jointly train a
3D-UNet-based watermark embedding network and a decoder that predicts the
tampering mask. The perturbation made by watermark embedding is close to
imperceptible. Considering that there is no off-the-shelf differentiable video
codec simulator, we propose to mimic video compression by ensembling simulation
results of other typical attacks, e.g., JPEG compression and blurring, as an
approximation. Experimental results demonstrate that our method generates
watermarked videos with good imperceptibility and robustly and accurately
locates tampered areas within the attacked version.
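Because the paper's central engineering idea, approximating a non-differentiable video codec with an ensemble of simulated attacks, is easy to misread, the following PyTorch sketch illustrates the general pattern. It is a minimal illustration under our own assumptions, not the authors' released code: a randomly chosen differentiable distortion (identity, Gaussian blur, or additive quantization-style noise as a rough JPEG proxy) is applied to the watermarked video at each training step, so gradients still reach the embedding network while the mask decoder learns to survive compression-like degradations. All function names and hyperparameters here are illustrative.

```python
import torch
import torch.nn.functional as F

def gaussian_blur(frames, kernel_size=5, sigma=1.0):
    """Differentiable frame-wise Gaussian blur.  frames: (B, C, T, H, W) in [0, 1]."""
    coords = torch.arange(kernel_size, dtype=frames.dtype, device=frames.device) - kernel_size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    kernel = (g[:, None] * g[None, :]).expand(frames.shape[1], 1, kernel_size, kernel_size)
    b, c, t, h, w = frames.shape
    x = frames.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    x = F.conv2d(x, kernel, padding=kernel_size // 2, groups=c)
    return x.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)

def quantization_noise(frames, strength=8.0 / 255.0):
    """Additive noise as a crude, differentiable stand-in for JPEG-style quantization."""
    return frames + torch.randn_like(frames) * strength

def identity(frames):
    """No-op branch so clean (uncompressed) videos are also seen during training."""
    return frames

SIMULATED_ATTACKS = [identity, gaussian_blur, quantization_noise]

def simulated_compression(watermarked):
    """Sample one simulated attack per training step; the ensemble of such samples
    approximates video compression while keeping the whole pipeline differentiable."""
    attack = SIMULATED_ATTACKS[torch.randint(len(SIMULATED_ATTACKS), (1,)).item()]
    return attack(watermarked).clamp(0.0, 1.0)

# During joint training, a layer like this would sit between the 3D-UNet embedder and
# the mask decoder, e.g.  mask_pred = decoder(simulated_compression(embedder(video))).
```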
Related papers
- VideoMarkBench: Benchmarking Robustness of Video Watermarking [34.184333776307504]
We introduce VideoMarkBench, the first systematic benchmark designed to evaluate the robustness of video watermarks under watermark removal and forgery attacks.
Our study encompasses a unified dataset generated by three state-of-the-art video generative models, across three video styles, incorporating four watermarking methods and seven aggregation strategies used during detection.
Our findings reveal significant vulnerabilities in current watermarking approaches and highlight the urgent need for more robust solutions.
arXiv Detail & Related papers (2025-05-27T18:00:03Z) - Safe-Sora: Safe Text-to-Video Generation via Graphical Watermarking [53.434260110195446]
Safe-Sora is the first framework to embed graphical watermarks directly into the video generation process.
We develop a 3D wavelet transform-enhanced Mamba architecture with an adaptive spatiotemporal local scanning strategy.
Experiments demonstrate Safe-Sora achieves state-of-the-art performance in terms of video quality, watermark fidelity, and robustness.
arXiv Detail & Related papers (2025-05-19T03:31:31Z) - Adversarial Shallow Watermarking [33.580351668272215]
We propose a novel watermarking framework to resist unknown distortions, namely Adversarial Shallow Watermarking (ASW).
ASW utilizes only a shallow decoder that is randomly parameterized and designed to be insensitive to distortions for watermark extraction.
ASW achieves comparable results on known distortions and better robustness on unknown distortions.
arXiv Detail & Related papers (2025-04-28T07:12:20Z) - VideoMark: A Distortion-Free Robust Watermarking Framework for Video Diffusion Models [18.043141353517317]
VideoMark is a training-free robust watermarking framework for video diffusion models.
Our method generates an extended watermark message sequence and randomly selects starting positions for each video.
Our watermark remains undetectable to attackers without the secret key, ensuring strong imperceptibility compared to other watermarking frameworks.
arXiv Detail & Related papers (2025-04-23T02:21:12Z) - Video Seal: Open and Efficient Video Watermarking [47.40833588157406]
Video watermarking addresses challenges by embedding imperceptible signals into videos, allowing for identification.
Video Seal is a comprehensive framework for neural video watermarking and a competitive open-sourced model.
We present experimental results demonstrating the effectiveness of the approach in terms of speed, imperceptibility, and robustness.
arXiv Detail & Related papers (2024-12-12T17:41:49Z) - LVMark: Robust Watermark for Latent Video Diffusion Models [13.85241328100336]
We introduce LVMark, a novel watermarking method for video diffusion models.
We propose a new watermark decoder tailored for generated videos by learning the consistency between adjacent frames.
We optimize both the watermark decoder and the latent decoder of the diffusion model, effectively balancing the trade-off between visual quality and bit accuracy.
arXiv Detail & Related papers (2024-12-12T09:57:20Z) - SLIC: Secure Learned Image Codec through Compressed Domain Watermarking to Defend Image Manipulation [0.9208007322096533]
This paper introduces the Secure Learned Image Codec (SLIC), a novel active approach to ensuring image authenticity.
SLIC embeds watermarks as adversarial perturbations in the latent space, creating images that degrade in quality upon re-compression if tampered with.
Our method involves fine-tuning a neural encoder/decoder to balance watermark invisibility with robustness, ensuring minimal quality loss for non-watermarked images.
arXiv Detail & Related papers (2024-10-19T11:42:36Z) - Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking [6.246098300155482]
We propose a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication.
Our proposed framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks.
arXiv Detail & Related papers (2024-10-02T18:05:03Z) - Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has become popular recently; it allows the model owner to watermark the model.
We propose a mini-max formulation to find watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z) - Invisible Image Watermarks Are Provably Removable Using Generative AI [47.25747266531665]
Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners.
We propose a family of regeneration attacks to remove these invisible watermarks.
The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image (a minimal sketch of this two-step pattern appears after this list).
arXiv Detail & Related papers (2023-06-02T23:29:28Z) - Tree-Ring Watermarks: Fingerprints for Diffusion Images that are
Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
arXiv Detail & Related papers (2023-05-31T17:00:31Z) - Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z) - Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z) - FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and
Countering Deepfakes [25.277040616599336]
Deepfakes and manipulated media are becoming a prominent threat due to the recent advances in realistic image and video synthesis techniques.
We introduce a deep learning based semi-fragile watermarking technique that allows media authentication by verifying an invisible secret message embedded in the image pixels.
arXiv Detail & Related papers (2022-04-05T03:29:30Z) - A Robust Document Image Watermarking Scheme using Deep Neural Network [10.938878993948517]
This paper proposes an end-to-end document image watermarking scheme using a deep neural network.
Specifically, an encoder and a decoder are designed to embed and extract the watermark.
A text-sensitive loss function is designed to limit the embedding modification on characters.
arXiv Detail & Related papers (2022-02-26T05:28:52Z) - Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal
Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
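For the regeneration attack summarized in the "Invisible Image Watermarks Are Provably Removable Using Generative AI" entry above, the minimal Python sketch below shows the two-step pattern (add noise, then reconstruct). It is an assumption-laden illustration rather than that paper's implementation: the reconstructor here is a toy box filter purely to keep the snippet self-contained, whereas the referenced attack relies on actual generative models (e.g. diffusion models or VAEs); every name and parameter in this sketch is illustrative.

```python
import torch
import torch.nn.functional as F

def regeneration_attack(image, reconstruct, noise_std=0.1):
    """Generic regeneration attack: (1) add Gaussian noise to wash out an invisible
    watermark, (2) map the noisy image back toward the natural-image manifold with
    a reconstructor.  image: (C, H, W) tensor in [0, 1]."""
    noisy = (image + noise_std * torch.randn_like(image)).clamp(0.0, 1.0)
    return reconstruct(noisy).clamp(0.0, 1.0)

def toy_reconstruct(x):
    """3x3 box filter used only as a placeholder; a real attack would reconstruct with
    a pretrained generative model, which is what makes watermark removal effective."""
    c = x.shape[0]
    kernel = torch.full((c, 1, 3, 3), 1.0 / 9.0, dtype=x.dtype, device=x.device)
    return F.conv2d(x.unsqueeze(0), kernel, padding=1, groups=c).squeeze(0)

watermarked = torch.rand(3, 256, 256)   # placeholder for a watermarked image
attacked = regeneration_attack(watermarked, toy_reconstruct)
```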
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.