Improving Adversarial Attacks on Latent Diffusion Model
- URL: http://arxiv.org/abs/2310.04687v3
- Date: Wed, 6 Mar 2024 18:14:58 GMT
- Title: Improving Adversarial Attacks on Latent Diffusion Model
- Authors: Boyang Zheng, Chumeng Liang, Xiaoyu Wu, Yan Liu
- Abstract summary: Adversarial attacks on the Latent Diffusion Model (LDM) are an effective protection against malicious finetuning of LDM on unauthorized images.
We show that these attacks add an extra error to the score function of adversarial examples predicted by LDM.
We propose to improve the adversarial attack on LDM by Attacking with Consistent score-function Errors.
- Score: 8.268827963476317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks on Latent Diffusion Model (LDM), the state-of-the-art
image generative model, have been adopted as effective protection against
malicious finetuning of LDM on unauthorized images. We show that these attacks
add an extra error to the score function of adversarial examples predicted by
LDM. An LDM finetuned on these adversarial examples learns to reduce this error by
introducing a bias, and that learned bias is what compromises the model: it then
predicts the score function with a systematic bias.
Based on these dynamics, we propose to improve the adversarial attack on LDM by
Attacking with Consistent score-function Errors (ACE). ACE unifies the pattern
of the extra error added to the predicted score function. This induces the
finetuned LDM to learn the same pattern as a bias in predicting the score
function. We then introduce a well-crafted pattern to improve the attack. Our
method outperforms state-of-the-art methods in adversarial attacks on LDM.
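The abstract describes ACE only at a high level. Purely as a reading aid, the sketch below shows one way a "consistent score-function error" objective could be wired into a PGD-style perturbation loop on top of Hugging Face diffusers; the model checkpoint, hyperparameters, the target pattern, and the ace_perturb helper are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the "consistent score-function error" idea from the abstract.
# Assumptions: Stable Diffusion v1-5 as the LDM, PGD-style optimization, and a
# user-chosen target_pattern standing in for the paper's "well-crafted pattern".
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
vae, unet, scheduler = pipe.vae, pipe.unet, pipe.scheduler
vae.requires_grad_(False)
unet.requires_grad_(False)

def ace_perturb(images, text_emb, target_pattern, steps=100, eps=8 / 255, alpha=1 / 255):
    """PGD-style loop pushing the LDM's score-function error toward one fixed pattern.

    images:         (B, 3, H, W) pixel tensor in [0, 1]
    text_emb:       conditioning embeddings, e.g. from pipe.text_encoder
    target_pattern: latent-shaped tensor defining the consistent error to induce
    """
    images = images.to(device)
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        # Encode the perturbed image and draw a random diffusion step, as in LDM training.
        latents = vae.encode(images + delta).latent_dist.sample() * vae.config.scaling_factor
        noise = torch.randn_like(latents)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (latents.shape[0],), device=device)
        noisy = scheduler.add_noise(latents, noise, t)
        pred = unet(noisy, t, encoder_hidden_states=text_emb).sample

        # ACE-style objective as described in the abstract: make the extra error
        # (predicted noise minus true noise) match one consistent pattern.
        loss = F.mse_loss(pred - noise, target_pattern.expand_as(noise))
        loss.backward()

        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend so the error stays consistent
            delta.clamp_(-eps, eps)
            delta.grad = None
    return (images + delta).clamp(0, 1).detach()
```

In this sketch, text_emb could come from pipe.tokenizer and pipe.text_encoder applied to an instance prompt, and target_pattern is any fixed latent-shaped tensor; the abstract does not specify the authors' actual pattern or optimization schedule.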
Related papers
- An h-space Based Adversarial Attack for Protection Against Few-shot Personalization [5.357486699062561]
We propose a novel anti-customization approach, called HAAD, that leverages adversarial attacks to craft perturbations based on the h-space.
We also introduce a more efficient variant, HAAD-KV, that constructs perturbations solely based on the KV parameters of the h-space.
Despite their simplicity, our methods outperform state-of-the-art adversarial attacks, highlighting their effectiveness.
arXiv Detail & Related papers (2025-07-23T14:43:22Z) - DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing [93.45507533317405]
DiffusionGuard is a robust and effective defense method against unauthorized edits by diffusion-based image editing models.
We introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process.
We also introduce a mask-augmentation technique to enhance robustness against various masks during test time.
arXiv Detail & Related papers (2024-10-08T05:19:19Z) - Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z) - Pixel is a Barrier: Diffusion Models Are More Adversarially Robust Than We Think [14.583181596370386]
Adversarial examples for diffusion models are widely used as solutions for safety concerns.
This may mislead us into thinking that diffusion models are vulnerable to adversarial attacks like most deep models.
In this paper, we show a novel finding: even though gradient-based white-box attacks can be used to attack LDMs, they fail to attack PDMs.
arXiv Detail & Related papers (2024-04-20T08:28:43Z) - Impart: An Imperceptible and Effective Label-Specific Backdoor Attack [15.859650783567103]
We propose a novel imperceptible backdoor attack framework, named Impart, in the scenario where the attacker has no access to the victim model.
Specifically, in order to enhance the attack capability of the all-to-all setting, we first propose a label-specific attack.
arXiv Detail & Related papers (2024-03-18T07:22:56Z) - Revealing Vulnerabilities in Stable Diffusion via Targeted Attacks [41.531913152661296]
We formulate the problem of targeted adversarial attack on Stable Diffusion and propose a framework to generate adversarial prompts.
Specifically, we design a gradient-based embedding optimization method to craft reliable adversarial prompts that guide Stable Diffusion to generate specific images.
After obtaining successful adversarial prompts, we reveal the mechanisms that cause the vulnerability of the model.
arXiv Detail & Related papers (2024-01-16T12:15:39Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z) - Diffusion Models for Imperceptible and Transferable Adversarial Attack [23.991194050494396]
We propose a novel imperceptible and transferable attack by leveraging both the generative and discriminative power of diffusion models.
Our proposed method, DiffAttack, is the first that introduces diffusion models into the adversarial attack field.
arXiv Detail & Related papers (2023-05-14T16:02:36Z) - Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss, that finds more suitable gradient-directions, increases attack efficacy and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z) - Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach toward ending the attack-defense cycle: we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.