CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense
- URL: http://arxiv.org/abs/2410.23091v1
- Date: Wed, 30 Oct 2024 15:06:44 GMT
- Title: CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense
- Authors: Mingkun Zhang, Keping Bi, Wei Chen, Quanrun Chen, Jiafeng Guo, Xueqi Cheng
- Abstract summary: Humans are hard to deceive with subtle manipulations, since we make judgments based only on essential factors.
Inspired by this observation, we attempt to model label generation with essential label-causative factors and incorporate label-non-causative factors to assist data generation.
For an adversarial example, we aim to discriminate the perturbations as non-causative factors and make predictions based only on the label-causative factors.
- Score: 61.78357530675446
- License:
- Abstract: Despite ongoing efforts to defend neural classifiers from adversarial attacks, they remain vulnerable, especially to unseen attacks. In contrast, humans are hard to deceive with subtle manipulations, since we make judgments based only on essential factors. Inspired by this observation, we attempt to model label generation with essential label-causative factors and incorporate label-non-causative factors to assist data generation. For an adversarial example, we aim to discriminate the perturbations as non-causative factors and make predictions based only on the label-causative factors. Concretely, we propose a causal diffusion model (CausalDiff) that adapts diffusion models for conditional data generation and disentangles the two types of causal factors by learning towards a novel causal information bottleneck objective. Empirically, CausalDiff has significantly outperformed state-of-the-art defense methods on various unseen attacks, achieving an average robustness of 86.39% (+4.01%) on CIFAR-10, 56.25% (+3.13%) on CIFAR-100, and 82.62% (+4.93%) on GTSRB (German Traffic Sign Recognition Benchmark).
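To make the high-level description concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes, under illustrative assumptions: the module names, toy MLP architectures, dimensions, and the simple norm penalty standing in for the causal information bottleneck are all hypothetical and are not the authors' implementation. An encoder splits an input into a label-causative latent s and a label-non-causative latent z, a classifier predicts the label from s alone, and a conditional diffusion model reconstructs the data from both.

```python
# Hypothetical sketch of a CausalDiff-style training objective (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Encode an image into a label-causative latent s and a non-causative latent z."""

    def __init__(self, in_dim=3 * 32 * 32, s_dim=64, z_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 512), nn.ReLU())
        self.to_s = nn.Linear(512, s_dim)  # label-causative factors
        self.to_z = nn.Linear(512, z_dim)  # label-non-causative factors

    def forward(self, x):
        h = self.backbone(x)
        return self.to_s(h), self.to_z(h)


class ConditionalDenoiser(nn.Module):
    """Toy noise predictor for a DDPM-style model conditioned on (s, z)."""

    def __init__(self, in_dim=3 * 32 * 32, s_dim=64, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + s_dim + z_dim + 1, 512),
            nn.ReLU(),
            nn.Linear(512, in_dim),
        )

    def forward(self, x_t, t, s, z):
        t_emb = t.float().unsqueeze(1) / 1000.0  # crude scalar timestep embedding
        return self.net(torch.cat([x_t, s, z, t_emb], dim=1))


def training_step(x, y, encoder, denoiser, classifier, betas, ib_weight=1e-3):
    """Joint objective: conditional diffusion reconstruction, classification
    from s only, and a crude bottleneck penalty on both latents."""
    s, z = encoder(x)

    # Predictions use only the label-causative factors.
    cls_loss = F.cross_entropy(classifier(s), y)

    # Standard DDPM noise-prediction loss, conditioned on both s and z.
    t = torch.randint(0, len(betas), (x.size(0),), device=x.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1)
    x0 = x.flatten(1)
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * noise
    diff_loss = F.mse_loss(denoiser(x_t, t, s, z), noise)

    # Stand-in for the causal information bottleneck: penalize latent magnitude
    # so each factor keeps only the information it needs.
    ib_loss = s.pow(2).mean() + z.pow(2).mean()

    return diff_loss + cls_loss + ib_weight * ib_loss


# Tiny smoke test on random CIFAR-10-shaped data.
encoder = Encoder()
denoiser = ConditionalDenoiser()
classifier = nn.Linear(64, 10)              # predicts the label from s alone
betas = torch.linspace(1e-4, 0.02, 1000)    # DDPM noise schedule
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = training_step(x, y, encoder, denoiser, classifier, betas)
loss.backward()
```

At inference time, the idea stated in the abstract is to treat adversarial perturbations as non-causative and base the prediction only on the label-causative factors; the sketch above covers only a joint training objective in that spirit.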
Related papers
- Unraveling Adversarial Examples against Speaker Identification -- Techniques for Attack Detection and Victim Model Classification [24.501269108193412]
Adversarial examples have proven to threaten speaker identification systems.
We propose a method to detect the presence of adversarial examples.
We also introduce a method for identifying the victim model on which the adversarial attack is carried out.
arXiv Detail & Related papers (2024-02-29T17:06:52Z) - Mitigating Feature Gap for Adversarial Robustness by Feature Disentanglement [61.048842737581865]
Adversarial fine-tuning methods aim to enhance adversarial robustness through fine-tuning the naturally pre-trained model in an adversarial training manner.
We propose a disentanglement-based approach to explicitly model and remove the latent features that cause the feature gap.
Empirical evaluations on three benchmark datasets demonstrate that our approach surpasses existing adversarial fine-tuning methods and adversarial training baselines.
arXiv Detail & Related papers (2024-01-26T08:38:57Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Adversarial defense based on distribution transfer [22.14684430074648]
The presence of adversarial examples poses a significant threat to deep learning models and their applications.
Existing defense methods provide some resilience against adversarial examples, but they often suffer from decreased accuracy and generalization performance.
This paper proposes a defense method based on distribution shift, leveraging the distribution transfer capability of a diffusion model for adversarial defense.
arXiv Detail & Related papers (2023-11-23T08:01:18Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance-based and sensitivity-based attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Feature Importance-aware Transferable Adversarial Attacks [46.12026564065764]
Existing transferable attacks tend to craft adversarial examples by indiscriminately distorting features.
We argue that such brute-force degradation would introduce model-specific local optimum into adversarial examples.
By contrast, we propose the Feature Importance-aware Attack (FIA), which disrupts important object-aware features.
arXiv Detail & Related papers (2021-07-29T17:13:29Z) - Adversarial Robustness through the Lens of Causality [105.51753064807014]
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
We propose to incorporate causality into mitigating adversarial vulnerability.
Our method can be seen as the first attempt to leverage causality for mitigating adversarial vulnerability.
arXiv Detail & Related papers (2021-06-11T06:55:02Z) - Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations [65.05561023880351]
Adversarial examples are malicious inputs crafted to induce misclassification.
This paper studies a complementary failure mode, invariance-based adversarial examples.
We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks.
arXiv Detail & Related papers (2020-02-11T18:50:23Z)