Investigating Deep Watermark Security: An Adversarial Transferability
Perspective
- URL: http://arxiv.org/abs/2402.16397v1
- Date: Mon, 26 Feb 2024 08:41:14 GMT
- Authors: Biqing Qi, Junqi Gao, Yiang Luo, Jianxing Liu, Ligang Wu and Bowen
Zhou
- Abstract summary: This study introduces two effective transferable attackers to assess the vulnerability of deep watermarks against erasure and tampering risks.
We propose the Easy Sample Selection (ESS) mechanism and the Easy Sample Matching Attack (ESMA) method.
Experiments show a significant enhancement in the success rate of targeted transfer attacks for both ESMA and BEM-ESMA methods.
- Score: 18.363276470822427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of generative neural networks has triggered an increased demand for
intellectual property (IP) protection in generated content. Deep watermarking
techniques, recognized for their flexibility in IP protection, have garnered
significant attention. However, the surge in adversarial transferable attacks
poses unprecedented challenges to the security of deep watermarking
techniques, an area that currently lacks systematic investigation. This study fills
this gap by introducing two effective transferable attackers to assess the
vulnerability of deep watermarks against erasure and tampering risks.
Specifically, we initially define the concept of local sample density,
utilizing it to deduce theorems on the consistency of model outputs. Upon
discovering that perturbing samples towards high sample density regions (HSDR)
of the target class enhances targeted adversarial transferability, we propose
the Easy Sample Selection (ESS) mechanism and the Easy Sample Matching Attack
(ESMA) method. Additionally, we propose the Bottleneck Enhanced Mixup (BEM)
that integrates information bottleneck theory to reduce the generator's
dependence on irrelevant noise. Experiments show a significant enhancement in
the success rate of targeted transfer attacks for both ESMA and BEM-ESMA
methods. We further conduct a comprehensive evaluation using ESMA and BEM-ESMA
as measurements, considering model architecture and watermark encoding length,
and report several notable findings.
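As a rough illustration of the targeted transfer-attack idea, the sketch below runs a standard targeted iterative sign-gradient attack against a toy surrogate classifier. It does not reproduce the paper's ESS sample selection, generator, or BEM module; the surrogate weights and all constants are hypothetical.

```python
import numpy as np

# Toy sketch of a targeted iterative gradient attack (I-FGSM style):
# perturb the input so a surrogate model assigns it to a chosen target
# class, under an L_inf budget. The actual ESMA/BEM-ESMA methods add
# density-based easy-sample selection and a trained generator.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # hypothetical surrogate: 3 classes, 4 features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_target_loss(x, target):
    # Gradient of cross-entropy toward the target class:
    # d/dx CE(softmax(Wx), target) = W^T (softmax(Wx) - onehot(target))
    p = softmax(W @ x)
    return W.T @ (p - np.eye(3)[target])

def targeted_attack(x, target, eps=0.5, alpha=0.05, steps=40):
    """Move x toward the target class within an L_inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv - alpha * np.sign(grad_target_loss(x_adv, target))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the budget
    return x_adv

x = rng.normal(size=4)
target = 2
x_adv = targeted_attack(x, target)
# The target-class probability rises while the perturbation stays bounded.
print(softmax(W @ x_adv)[target] > softmax(W @ x)[target])
```

Transferability is then measured by feeding such perturbed samples to models other than the surrogate; the paper's observation is that steering toward high sample density regions of the target class makes this transfer more reliable.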
Related papers
- Attention-GAN for Anomaly Detection: A Cutting-Edge Approach to
Cybersecurity Threat Management [0.0]
This paper proposes an innovative Attention-GAN framework for enhancing cybersecurity, focusing on anomaly detection.
The proposed approach aims to generate diverse and realistic synthetic attack scenarios, thereby enriching the dataset and improving threat identification.
Integrating attention mechanisms with Generative Adversarial Networks (GANs) is a key feature of the proposed method.
The attention-GAN framework has emerged as a pioneering approach, setting a new benchmark for advanced cyber-defense strategies.
arXiv Detail & Related papers (2024-02-25T01:10:55Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [85.1927483219819]
GNNs are vulnerable to model stealing attacks, which aim to duplicate the target model via query access.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Contrastive Pseudo Learning for Open-World DeepFake Attribution [67.58954345538547]
We introduce a new benchmark called Open-World DeepFake (OW-DFA), which aims to evaluate attribution performance against various types of fake faces under open-world scenarios.
We propose a novel framework named Contrastive Pseudo Learning (CPL) for the OW-DFA task by 1) introducing a Global-Local Voting module to guide the feature alignment of forged faces with different manipulated regions, and 2) designing a Confidence-based Soft Pseudo-label strategy to mitigate the pseudo-noise caused by similar methods in the unlabeled set.
arXiv Detail & Related papers (2023-09-20T08:29:22Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
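The general idea of perturbing parameters during watermark embedding can be sketched as below. This is a generic illustration, not the cited paper's backdoor-based procedure: the toy model, trigger, and all constants are hypothetical.

```python
import numpy as np

# Generic sketch of robust watermark embedding: push the parameters to
# respond strongly to a watermark trigger while injecting random
# parameter noise at every step, so the embedded watermark tolerates
# small weight changes such as those caused by removal attacks.

rng = np.random.default_rng(1)

w = rng.normal(size=8)        # toy model parameters
w0 = w.copy()                 # pre-embedding parameters, kept for comparison
trigger = rng.normal(size=8)  # hypothetical watermark trigger direction

def embed_step(w, lr=0.05, noise_std=0.02):
    # Gradient ascent on the watermark response <w, trigger> ...
    w = w + lr * trigger
    # ... plus random parameter perturbation for removal robustness.
    return w + rng.normal(scale=noise_std, size=w.shape)

for _ in range(50):
    w = embed_step(w)

# The watermark response grows despite the injected noise.
print(w @ trigger > w0 @ trigger)
```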
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification [0.0]
The Internet of Things (IoT) faces tremendous security challenges.
The increasing threat posed by adversarial attacks restates the need for reliable defense strategies.
This work describes the types of constraints required for an adversarial cyber-attack example to be realistic.
arXiv Detail & Related papers (2023-01-30T18:00:28Z)
- Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing [5.675436513661266]
Crowdsensing systems are vulnerable to various attacks as they build on non-dedicated and ubiquitous properties.
Previous works suggest that GAN-based attacks are more damaging than empirically designed attack samples.
This paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model.
arXiv Detail & Related papers (2022-02-16T00:23:25Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth supervised learning has been proven as one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.