Mixer: DNN Watermarking using Image Mixup
- URL: http://arxiv.org/abs/2212.02814v1
- Date: Tue, 6 Dec 2022 08:09:53 GMT
- Title: Mixer: DNN Watermarking using Image Mixup
- Authors: Kassem Kallas and Teddy Furon
- Abstract summary: This paper proposes a lightweight, reliable, and secure DNN watermarking scheme that establishes strong ties between the model's primary task and the watermarking task.
The samples triggering the watermarking task are generated by applying image Mixup to training or testing samples.
- Score: 14.2215880080698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is crucial to protect the intellectual property rights of DNN
models prior to their deployment. The DNN should perform two main tasks: its
primary task and the watermarking task. This paper proposes a lightweight,
reliable, and secure DNN watermarking scheme that attempts to establish strong
ties between these two tasks. The samples triggering the watermarking task are
generated by applying image Mixup to training or testing samples. This means
there are infinitely many triggers, not limited to the samples used to embed
the watermark in the model at training time. Extensive experiments on image
classification models across different datasets, including exposure to a
variety of attacks, show that the proposed watermarking provides protection
with an adequate level of security and robustness.
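To make the scheme concrete, here is a minimal sketch of Mixup-based trigger generation, watermark embedding, and verification, assuming a standard PyTorch image classifier. The function names, the secret seed, and the single secret watermark label are illustrative assumptions for this sketch, not the authors' exact protocol.

```python
# Minimal sketch of Mixup-based DNN watermarking. Assumes a generic PyTorch
# classifier; make_trigger/embed_watermark/verify, SECRET_SEED, and the
# single watermark label are hypothetical, not the paper's exact protocol.
import torch

SECRET_SEED = 0x5EED  # hypothetical owner secret controlling trigger pairing

def make_trigger(x_i: torch.Tensor, x_j: torch.Tensor, lam: float) -> torch.Tensor:
    """Mixup: blend two images pixel-wise, x = lam*x_i + (1-lam)*x_j."""
    return lam * x_i + (1.0 - lam) * x_j

def embed_watermark(model, loader, wm_label: int, epochs: int = 1):
    """Fine-tune so Mixup triggers map to a secret label while ordinary
    samples keep their true labels (primary task + watermarking task)."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    ce = torch.nn.CrossEntropyLoss()
    gen = torch.Generator().manual_seed(SECRET_SEED)
    for _ in range(epochs):
        for x, y in loader:
            idx = torch.randperm(x.size(0), generator=gen)  # secret pairing
            lam = float(torch.distributions.Beta(2.0, 2.0).sample())
            x_trig = make_trigger(x, x[idx], lam)
            y_trig = torch.full((x.size(0),), wm_label, dtype=torch.long)
            loss = ce(model(x), y) + ce(model(x_trig), y_trig)
            opt.zero_grad()
            loss.backward()
            opt.step()

def verify(model, x_i, x_j, wm_label: int, thresh: float = 0.9) -> bool:
    """Ownership check: fresh Mixups of held-out images should hit the
    secret label; any image pair works, so the trigger set is unbounded."""
    with torch.no_grad():
        lam = float(torch.distributions.Beta(2.0, 2.0).sample())
        pred = model(make_trigger(x_i, x_j, lam)).argmax(dim=1)
    return bool((pred == wm_label).float().mean() > thresh)
```

Because any pair of images can be mixed, the verifier can draw fresh triggers at verification time; this is the "infinitely many triggers" property the abstract highlights.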
Related papers
- ChainMarks: Securing DNN Watermark with Cryptographic Chain [11.692176144467513]
Watermarking is used to protect the intellectual property of DNN model owners. Recent studies have shown that existing watermarking schemes are vulnerable to watermark removal and ambiguity attacks. We propose ChainMarks, which generates secure and robust watermarks by introducing a cryptographic chain into the trigger inputs (a loose sketch of this idea appears after this list).
arXiv Detail & Related papers (2025-05-08T06:30:46Z)
- Towards Dataset Copyright Evasion Attack against Personalized Text-to-Image Diffusion Models [52.877452505561706]
We propose the first copyright evasion attack specifically designed to undermine dataset ownership verification (DOV). Our attack, CEAT2I, comprises three stages: watermarked sample detection, trigger identification, and efficient watermark mitigation. Experiments show that CEAT2I effectively evades DOV mechanisms while preserving model performance.
arXiv Detail & Related papers (2025-05-05T17:51:55Z)
- SleeperMark: Towards Robust Watermark against Fine-Tuning Text-to-image Diffusion Models [77.80595722480074]
SleeperMark is a framework designed to embed resilient watermarks into T2I diffusion models.
It guides the model to disentangle the watermark information from the semantic concepts it learns.
Our experiments demonstrate the effectiveness of SleeperMark across various types of diffusion models.
arXiv Detail & Related papers (2024-12-06T08:44:18Z)
- FreeMark: A Non-Invasive White-Box Watermarking for Deep Neural Networks [5.937758152593733]
FreeMark is a novel framework for watermarking deep neural networks (DNNs).
Unlike traditional watermarking methods, FreeMark innovatively generates secret keys from a pre-generated watermark vector and the host model using gradient descent.
Experiments demonstrate that FreeMark effectively resists various watermark removal attacks while maintaining high watermark capacity.
arXiv Detail & Related papers (2024-09-16T05:05:03Z)
- A self-supervised CNN for image watermark removal [102.94929746450902]
We propose a self-supervised convolutional neural network (CNN) for image watermark removal (SWCNN).
SWCNN constructs reference watermarked images in a self-supervised way, according to the watermark distribution, rather than relying on given paired training samples.
Taking texture information into account, a mixed loss is used to improve the visual quality of watermark removal.
arXiv Detail & Related papers (2024-03-09T05:59:48Z)
- MEA-Defender: A Robust Watermark against Model Extraction Attack [19.421741149364017]
We propose MEA-Defender, a novel watermark that protects the IP of DNN models against model extraction.
We conduct extensive experiments on four model extraction attacks, using five datasets and six models trained based on supervised learning and self-supervised learning algorithms.
The experimental results demonstrate that MEA-Defender is highly robust against different model extraction attacks and various watermark removal/detection approaches.
arXiv Detail & Related papers (2024-01-26T23:12:53Z)
- ClearMark: Intuitive and Robust Model Watermarking via Transposed Model Training [50.77001916246691]
This paper introduces ClearMark, the first DNN watermarking method designed for intuitive human assessment.
ClearMark embeds visible watermarks, enabling human decision-making without rigid value thresholds.
It shows an 8,544-bit watermark capacity comparable to the strongest existing work.
arXiv Detail & Related papers (2023-10-25T08:16:55Z)
- Unbiased Watermark for Large Language Models [67.43415395591221]
This study examines how significantly watermarks impact the quality of model-generated outputs.
It is possible to integrate watermarks without affecting the output probability distribution.
The presence of watermarks does not compromise the performance of the model in downstream tasks.
arXiv Detail & Related papers (2023-09-22T12:46:38Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- On Function-Coupled Watermarks for Deep Neural Networks [15.478746926391146]
We propose a novel DNN watermarking solution that can effectively defend against watermark removal attacks.
Our key insight is to enhance the coupling of the watermark and model functionalities.
Results show a 100% watermark authentication success rate under aggressive watermark removal attacks.
arXiv Detail & Related papers (2023-02-08T05:55:16Z)
- ROSE: A RObust and SEcure DNN Watermarking [14.2215880080698]
This paper proposes a lightweight, robust, and secure black-box DNN watermarking protocol.
It takes advantage of cryptographic one-way functions as well as the injection of in-task key image-label pairs during the training process.
arXiv Detail & Related papers (2022-06-22T12:46:14Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by a surrogate model attack.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency [1.2502377311068757]
We propose a framework for watermarking a deep neural network (DNN) model designed for the textual domain.
The proposed embedding procedure takes place during model training, making the watermark verification stage straightforward.
The experimental results show that watermarked models have the same accuracy as the original ones.
arXiv Detail & Related papers (2021-03-09T17:56:04Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
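As referenced in the ChainMarks entry above, here is a loose sketch of how a cryptographic chain could parameterize trigger inputs. This is an assumption about one possible construction for illustration, not ChainMarks' actual algorithm; the helper name and image shape are hypothetical.

```python
# Hypothetical hash-chained trigger derivation (illustrative only; not the
# ChainMarks construction). Each trigger is seeded by the next link of a
# SHA-256 chain rooted in an owner secret.
import hashlib
import numpy as np

def chained_triggers(secret: bytes, n: int, shape=(32, 32, 3)):
    """Derive n pseudo-random trigger images from an iterated hash chain."""
    triggers, state = [], secret
    for _ in range(n):
        state = hashlib.sha256(state).digest()  # next chain link
        rng = np.random.default_rng(int.from_bytes(state[:8], "big"))
        triggers.append(rng.random(shape, dtype=np.float32))
    return triggers

# A verifier holding `secret` can regenerate the chain and check that a
# claimed trigger set matches, which frustrates ambiguity attacks where an
# adversary retrofits arbitrary triggers onto a stolen model.
```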