Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection
- URL: http://arxiv.org/abs/2506.02665v1
- Date: Tue, 03 Jun 2025 09:14:46 GMT
- Title: Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection
- Authors: Tianci Liu, Tong Yang, Quan Zhang, Qi Lei
- Abstract summary: As AI advances, copyrighted content faces a growing risk of unauthorized use. Recent works developed copyright protections against specific AI techniques that are misused, such as unauthorized personalization through DreamBooth. We propose a universal approach that embeds visible, hard-to-remove watermarks into images.
- Score: 21.720886840742015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI advances, copyrighted content faces a growing risk of unauthorized use, whether through model training or direct misuse. Building upon invisible adversarial perturbations, recent works developed copyright protections against specific AI techniques that are misused, such as unauthorized personalization through DreamBooth. However, these methods offer only short-term security, as they require retraining whenever the underlying model architectures change. To establish long-term protection with better robustness, we go beyond invisible perturbations and propose a universal approach that embeds \textit{visible} watermarks that are \textit{hard-to-remove} into images. Grounded in a new probabilistic and inverse problem-based formulation, our framework maximizes the discrepancy between the \textit{optimal} reconstruction and the original content. We develop an effective and efficient approximation algorithm to circumvent an intractable bi-level optimization. Experimental results demonstrate the superiority of our approach across diverse scenarios.
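The bi-level idea in the abstract can be illustrated with a toy sketch: model the attacker's watermark removal as an inverse problem whose optimal reconstruction has a closed form, then run projected gradient ascent on the watermark to maximize the reconstruction's discrepancy from the original content. This is a minimal NumPy sketch of the formulation only; the linear Tikhonov-regularized attacker, the norm budget, and all constants are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
x = rng.random(d)                                   # original image (flattened toy example)
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # assumed linear degradation model for the attacker
lam = 0.5                                           # strength of the attacker's Tikhonov prior

def attacker_reconstruction(y):
    # Inner problem: z* = argmin_z ||A z - y||^2 + lam ||z||^2 (closed form)
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

def discrepancy(w):
    # Outer objective: distance between the attacker's *optimal*
    # reconstruction of the watermarked image and the original content
    z_star = attacker_reconstruction(np.clip(x + w, 0.0, 1.0))
    return float(np.sum((z_star - x) ** 2))

# Projected gradient ascent on the watermark w (finite differences for brevity;
# this sidesteps differentiating through the inner problem)
w = 0.1 * rng.standard_normal(d)
eps, lr, budget = 1e-5, 0.5, 0.3
for _ in range(200):
    grad = np.array([(discrepancy(w + eps * e) - discrepancy(w - eps * e)) / (2 * eps)
                     for e in np.eye(d)])
    w += lr * grad
    # Keep the watermark visible but bounded (illustrative RMS budget)
    norm = np.linalg.norm(w)
    if norm > budget * np.sqrt(d):
        w *= budget * np.sqrt(d) / norm

print(f"baseline={discrepancy(np.zeros(d)):.3f}  optimized={discrepancy(w):.3f}")
```

The optimized watermark yields a larger discrepancy than no watermark, i.e., even the attacker's best reconstruction stays far from the original. The paper's approximation algorithm replaces this brute-force nested loop with an efficient surrogate.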
Related papers
- Image Can Bring Your Memory Back: A Novel Multi-Modal Guided Attack against Image Generation Model Unlearning [28.15997901023315]
Recall is a novel adversarial framework designed to compromise the robustness of unlearned IGMs. It consistently outperforms existing baselines in terms of adversarial effectiveness, computational efficiency, and semantic fidelity with the original prompt. These findings reveal critical vulnerabilities in current unlearning mechanisms and underscore the need for more robust solutions.
arXiv Detail & Related papers (2025-07-09T02:59:01Z) - Optimization-Free Universal Watermark Forgery with Regenerative Diffusion Models [50.73220224678009]
Watermarking can be used to verify the origin of synthetic images generated by artificial intelligence models. Recent studies demonstrate the capability to forge watermarks from a target image onto cover images via adversarial techniques. In this paper, we uncover a greater risk: an optimization-free and universal watermark forgery. Our approach significantly broadens the scope of attacks, presenting a greater challenge to the security of current watermarking techniques.
arXiv Detail & Related papers (2025-06-06T12:08:02Z) - MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models [56.09354775405601]
Model extraction attacks aim to replicate the functionality of a black-box model through query access. Most existing defenses presume that attacker queries contain out-of-distribution (OOD) samples, enabling them to detect and disrupt suspicious inputs. We propose MISLEADER, a novel defense strategy that does not rely on OOD assumptions.
arXiv Detail & Related papers (2025-06-03T01:37:09Z) - Safe-Sora: Safe Text-to-Video Generation via Graphical Watermarking [53.434260110195446]
Safe-Sora is the first framework to embed graphical watermarks directly into the video generation process. We develop a 3D wavelet transform-enhanced Mamba architecture with an adaptive spatiotemporal local scanning strategy. Experiments demonstrate that Safe-Sora achieves state-of-the-art performance in terms of video quality, watermark fidelity, and robustness.
arXiv Detail & Related papers (2025-05-19T03:31:31Z) - CopyJudge: Automated Copyright Infringement Identification and Mitigation in Text-to-Image Diffusion Models [58.58208005178676]
We propose CopyJudge, an automated copyright infringement identification framework. We employ an abstraction-filtration-comparison test framework with multi-LVLM debate to assess the likelihood of infringement. Based on the judgments, we introduce a general LVLM-based mitigation strategy.
arXiv Detail & Related papers (2025-02-21T08:09:07Z) - OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking [20.662260046296897]
Existing versatile watermarking approaches suffer from trade-offs between tamper localization precision and visual quality. We propose OmniGuard, a novel augmented versatile watermarking approach that integrates proactive embedding with passive, blind extraction. Our method outperforms the best prior approach by 4.25 dB in PSNR of the container image, 20.7% in F1-score under noisy conditions, and 14.8% in average bit accuracy.
arXiv Detail & Related papers (2024-12-02T15:38:44Z) - EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behavior in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z) - Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these issues.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z) - Protect-Your-IP: Scalable Source-Tracing and Attribution against Personalized Generation [19.250673262185767]
We propose a unified approach for image copyright source-tracing and attribution.
We introduce an innovative watermarking-attribution method that blends proactive and passive strategies.
We have conducted experiments using various celebrity portrait series sourced online.
arXiv Detail & Related papers (2024-05-26T15:14:54Z) - Reliable Model Watermarking: Defending Against Theft without Compromising on Evasion [15.086451828825398]
Evasion adversaries can readily exploit the shortcuts created by models memorizing watermark samples. By training the model to accurately recognize watermark samples, unique watermark behaviors are promoted through knowledge injection.
arXiv Detail & Related papers (2024-04-21T03:38:20Z) - SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models [16.505593270720034]
We propose an adaptive greedy search for optimal time steps that seamlessly integrates with existing anti-customization methods.
Our approach significantly increases identity disruption, thereby protecting user privacy and copyright.
arXiv Detail & Related papers (2023-12-13T03:04:22Z) - Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks. We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.