On the Difficulty of Constructing a Robust and Publicly-Detectable Watermark
- URL: http://arxiv.org/abs/2502.04901v1
- Date: Fri, 07 Feb 2025 13:11:28 GMT
- Title: On the Difficulty of Constructing a Robust and Publicly-Detectable Watermark
- Authors: Jaiden Fairoze, Guillermo Ortiz-Jiménez, Mel Vecerik, Somesh Jha, Sven Gowal
- Abstract summary: No existing scheme combines robustness, unforgeability, and public-detectability.
It is intractable to build certain components of our scheme without a leap in deep learning capabilities.
We propose research directions that need to be addressed before we can practically realize robust and publicly-verifiable provenance.
- Score: 31.42459678324617
- Abstract: This work investigates the theoretical boundaries of creating publicly-detectable schemes to enable the provenance of watermarked imagery. Metadata-based approaches like C2PA provide unforgeability and public-detectability. ML techniques offer robust retrieval and watermarking. However, no existing scheme combines robustness, unforgeability, and public-detectability. In this work, we formally define such a scheme and establish its existence. Although theoretically possible, we find that at present, it is intractable to build certain components of our scheme without a leap in deep learning capabilities. We analyze these limitations and propose research directions that need to be addressed before we can practically realize robust and publicly-verifiable provenance.
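To make the three target properties concrete, here is a minimal interface sketch in Python. The names are hypothetical and invented for exposition; the paper's formal definition is cryptographic, not code.

```python
# Hypothetical interface for a robust, unforgeable, publicly-detectable
# provenance scheme; names are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class KeyPair:
    secret_key: bytes   # held only by the watermarking party
    public_key: bytes   # published, so anyone can run detection

class ProvenanceScheme(Protocol):
    def keygen(self) -> KeyPair: ...
    # Embedding requires the secret key, so only the key holder can
    # produce marks that verify (unforgeability).
    def embed(self, image: bytes, secret_key: bytes) -> bytes: ...
    # Detection uses only the public key (public detectability) and must
    # survive benign edits such as crops or re-encoding (robustness).
    def detect(self, image: bytes, public_key: bytes) -> bool: ...
```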
Related papers
- Let Watermarks Speak: A Robust and Unforgeable Watermark for Language Models [0.0]
We propose an undetectable, robust, single-bit watermarking scheme.
Its robustness is comparable to that of the most advanced zero-bit watermarking schemes.
arXiv Detail & Related papers (2024-12-27T11:58:05Z)
- Black-Box Detection of Language Model Watermarks [1.9374282535132377]
We develop rigorous statistical tests to detect the presence of the three most popular families of watermarking schemes using only a limited number of black-box queries.
Our findings indicate that current watermarking schemes are more detectable than previously believed, and that obscuring the fact that a watermark was deployed may not be a viable way for providers to protect against adversaries.
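The paper's tests are tailored to specific scheme families; as a rough illustration of the underlying idea (flagging distributional skew via repeated black-box queries), consider this hypothetical sketch:

```python
# Illustrative sketch only: a generic test that flags distributional skew
# between a possibly-watermarked model and an unwatermarked reference.
from collections import Counter
from scipy.stats import chisquare

def skew_test(sample_token, reference_probs, n=2000, alpha=0.01):
    """sample_token() draws one next-token sample for a fixed prompt from
    the suspect model; reference_probs maps token -> probability under a
    known unwatermarked model (assumed to cover the suspect's support
    and to sum to 1)."""
    counts = Counter(sample_token() for _ in range(n))
    tokens = list(reference_probs)
    f_obs = [counts.get(t, 0) for t in tokens]
    total = sum(f_obs)
    f_exp = [reference_probs[t] * total for t in tokens]
    _, pvalue = chisquare(f_obs, f_exp)
    return pvalue < alpha  # significant skew -> watermark suspected
```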
arXiv Detail & Related papers (2024-05-28T08:41:30Z)
- XAI-Based Detection of Adversarial Attacks on Deepfake Detectors [0.0]
We introduce a novel methodology for identifying adversarial attacks on deepfake detectors using XAI.
Our approach contributes not only to the detection of deepfakes but also enhances the understanding of possible adversarial attacks.
arXiv Detail & Related papers (2024-03-05T13:25:30Z)
- Publicly-Detectable Watermarking for Language Models [45.32236917886154]
We present a publicly-detectable watermarking scheme for LMs.
We embed a cryptographic signature into LM output using rejection sampling.
We prove that this produces unforgeable and distortion-free text output.
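A minimal sketch of the embedding idea, assuming a hash-to-bit mapping and a `sample_token` oracle (both simplified placeholders, not the paper's exact construction):

```python
# Hedged sketch of signature embedding via rejection sampling; chunking,
# hashing, and fallback handling are simplified relative to the paper.
import hashlib

def token_bit(token: str) -> int:
    """Map a token to one pseudorandom bit via hashing."""
    return hashlib.sha256(token.encode()).digest()[0] & 1

def embed_bits(sample_token, bits, max_tries=64):
    """Generate text whose tokens encode `bits`: for each signature bit,
    resample from the LM until the sampled token hashes to that bit.
    `sample_token(prefix)` draws one token from the model."""
    out = []
    for b in bits:
        for _ in range(max_tries):
            t = sample_token(out)
            if token_bit(t) == b:   # accept only tokens carrying bit b
                out.append(t)
                break
        else:
            out.append(t)  # fall back after max_tries (distortion risk)
    return out
```

Detection then needs only public information: anyone can recompute each token's bit and verify the recovered bit string as a signature under the public key.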
arXiv Detail & Related papers (2023-10-27T21:08:51Z)
- An Unforgeable Publicly Verifiable Watermark for Large Language Models [84.2805275589553]
Current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting during public detection.
We propose an unforgeable publicly verifiable watermark algorithm named UPV that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages.
arXiv Detail & Related papers (2023-07-30T13:43:27Z)
- Reversible Quantization Index Modulation for Static Deep Neural Network Watermarking [57.96787187733302]
Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity.
We propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM).
Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding.
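A plain QIM embed/extract sketch over a weight array (illustrative; it omits the reversible bookkeeping that the RDH construction adds):

```python
# Quantization index modulation over model weights: bit 0 snaps a weight
# to the lattice k*delta, bit 1 to the shifted lattice k*delta + delta/2.
import numpy as np

def qim_embed(weights: np.ndarray, bits: np.ndarray, delta: float = 0.02):
    """Embed one bit per weight by quantizing to the bit's lattice."""
    offset = bits * (delta / 2)
    return delta * np.round((weights - offset) / delta) + offset

def qim_extract(weights: np.ndarray, delta: float = 0.02):
    """Recover bits by checking which lattice each weight is closer to."""
    d0 = np.abs(weights - delta * np.round(weights / delta))
    w1 = delta * np.round((weights - delta / 2) / delta) + delta / 2
    d1 = np.abs(weights - w1)
    return (d1 < d0).astype(int)
```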
arXiv Detail & Related papers (2023-05-29T04:39:17Z)
- Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking [54.40184736491652]
We propose a backdoor-based watermarking approach that serves as a general framework for safeguarding publicly available data.
By inserting a small number of watermarking samples into the dataset, our approach enables the learning model to implicitly learn a secret function set by defenders.
This hidden function can then be used as a watermark to track down third-party models that use the dataset illegally.
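The tracking step can be framed as a hypothesis test; a hedged sketch, with invented names and thresholds:

```python
# Hypothetical verification step: query a suspect model on trigger-patched
# inputs and test whether it behaves as the hidden function would.
from scipy.stats import binomtest

def verify_dataset_use(model_predict, patched_inputs, target_label,
                       chance: float, alpha: float = 0.01) -> bool:
    """If the model's hit rate on watermark triggers significantly
    exceeds chance accuracy, it likely trained on the watermarked data."""
    hits = sum(model_predict(x) == target_label for x in patched_inputs)
    result = binomtest(hits, len(patched_inputs), chance,
                       alternative="greater")
    return result.pvalue < alpha
```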
arXiv Detail & Related papers (2023-03-20T21:54:30Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Malware Traffic Classification: Evaluation of Algorithms and an Automated Ground-truth Generation Pipeline [8.779666771357029]
We propose an automated packet data-labeling pipeline to generate ground-truth data.
We explore and test different kinds of clustering approaches that make use of a unique and diverse set of features extracted from this observable metadata.
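As a rough illustration of such a clustering stage (feature columns and parameters are invented, not the paper's):

```python
# Hypothetical sketch: cluster network flows by metadata features to
# produce candidate ground-truth groups for labeling.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

def cluster_flows(features: np.ndarray) -> np.ndarray:
    """features: one row per flow, columns such as packet sizes,
    inter-arrival times, and header fields extracted from metadata."""
    X = StandardScaler().fit_transform(features)
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
    return labels  # -1 marks noise; other labels are candidate classes
```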
arXiv Detail & Related papers (2020-10-22T11:48:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.