Investigating Deep Watermark Security: An Adversarial Transferability
Perspective
- URL: http://arxiv.org/abs/2402.16397v1
- Date: Mon, 26 Feb 2024 08:41:14 GMT
- Title: Investigating Deep Watermark Security: An Adversarial Transferability
Perspective
- Authors: Biqing Qi, Junqi Gao, Yiang Luo, Jianxing Liu, Ligang Wu and Bowen
Zhou
- Abstract summary: This study introduces two effective transferable attackers to assess the vulnerability of deep watermarks against erasure and tampering risks.
We propose the Easy Sample Selection (ESS) mechanism and the Easy Sample Matching Attack (ESMA) method.
Experiments show a significant enhancement in the success rate of targeted transfer attacks for both ESMA and BEM-ESMA methods.
- Score: 18.363276470822427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of generative neural networks has triggered an increased demand for
intellectual property (IP) protection in generated content. Deep watermarking
techniques, recognized for their flexibility in IP protection, have garnered
significant attention. However, the surge in adversarial transferable attacks
poses unprecedented challenges to the security of deep watermarking
techniques, an area that currently lacks systematic investigation. This study fills
this gap by introducing two effective transferable attackers to assess the
vulnerability of deep watermarks against erasure and tampering risks.
Specifically, we initially define the concept of local sample density,
utilizing it to deduce theorems on the consistency of model outputs. Upon
discovering that perturbing samples towards high sample density regions (HSDR)
of the target class enhances targeted adversarial transferability, we propose
the Easy Sample Selection (ESS) mechanism and the Easy Sample Matching Attack
(ESMA) method. Additionally, we propose the Bottleneck Enhanced Mixup (BEM)
that integrates information bottleneck theory to reduce the generator's
dependence on irrelevant noise. Experiments show a significant enhancement in
the success rate of targeted transfer attacks for both ESMA and BEM-ESMA
methods. We further conduct a comprehensive evaluation using ESMA and BEM-ESMA
as measurements, considering model architecture and watermark encoding length,
and obtain several notable findings.
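The abstract's central observation, that perturbing samples toward high sample density regions (HSDR) of the target class improves targeted transferability, can be illustrated with a small numpy sketch. The k-nearest-neighbor density proxy, the `easy_sample_selection` selection rule, and the fixed-step perturbation below are illustrative assumptions for exposition, not the paper's exact ESS/ESMA formulation.

```python
import numpy as np

def local_sample_density(samples, k=5):
    """Density proxy (assumption): inverse mean distance to the k nearest neighbors."""
    dists = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude each point's distance to itself
    knn = np.sort(dists, axis=1)[:, :k]      # k smallest distances per row
    return 1.0 / (knn.mean(axis=1) + 1e-12)

def easy_sample_selection(samples, k=5, top=3):
    """Pick the `top` target-class samples lying in the densest regions."""
    density = local_sample_density(samples, k)
    return samples[np.argsort(density)[::-1][:top]]

def perturb_towards(x, anchors, eps=0.5):
    """Take one step of size eps from x toward the mean of the selected easy samples."""
    target = anchors.mean(axis=0)
    delta = target - x
    return x + eps * delta / (np.linalg.norm(delta) + 1e-12)
```

With a tight cluster of target-class points and a few scattered outliers, `easy_sample_selection` returns points from the dense cluster, and `perturb_towards` moves an input closer to that cluster, mimicking the HSDR-directed perturbation intuition at toy scale.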
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- DIP-Watermark: A Double Identity Protection Method Based on Robust Adversarial Watermark [13.007649270429493]
Face Recognition (FR) systems pose privacy risks.
One countermeasure is adversarial attack, which deceives unauthorized malicious FR systems.
We propose the first double identity protection scheme based on traceable adversarial watermarking.
arXiv Detail & Related papers (2024-04-23T02:50:38Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks, which aim to duplicate the target model using only query access.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm [6.515472477685614]
The susceptibility of deep neural networks (DNNs) to adversarial attacks undermines their reliability across numerous applications.
We introduce the Enhanced Targeted DeepFool (ET DeepFool) algorithm, an evolution of DeepFool.
Our empirical investigations demonstrate the superiority of this refined approach in maintaining the integrity of images.
arXiv Detail & Related papers (2023-10-18T18:50:39Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing [5.675436513661266]
Crowdsensing systems are vulnerable to various attacks because they are built on non-dedicated, ubiquitous infrastructure.
Previous works suggest that GAN-based attacks cause more severe damage than empirically designed attack samples.
This paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model.
arXiv Detail & Related papers (2022-02-16T00:23:25Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth-supervised learning has proven to be one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.