FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks
- URL: http://arxiv.org/abs/2407.20653v1
- Date: Tue, 30 Jul 2024 08:50:06 GMT
- Title: FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks
- Authors: Hunmin Yang, Jongoh Jeong, Kuk-Jin Yoon
- Abstract summary: Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples.
We propose a feature contrastive approach in the frequency domain to generate adversarial examples that are robust in both cross-domain and cross-model settings.
We demonstrate strong transferability of our generated adversarial perturbations through extensive cross-domain and cross-model experiments.
- Score: 42.18755809782401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples. Despite the success of recent generative model-based attacks demonstrating strong transferability, it still remains a challenge to design an efficient attack strategy in a real-world strict black-box setting, where both the target domain and model architectures are unknown. In this paper, we seek to explore a feature contrastive approach in the frequency domain to generate adversarial examples that are robust in both cross-domain and cross-model settings. With that goal in mind, we propose two modules that are only employed during the training phase: a Frequency-Aware Domain Randomization (FADR) module to randomize domain-variant low- and high-range frequency components, and a Frequency-Augmented Contrastive Learning (FACL) module to effectively separate the domain-invariant mid-frequency features of clean and perturbed images. We demonstrate strong transferability of our generated adversarial perturbations through extensive cross-domain and cross-model experiments, while keeping the inference-time complexity unchanged.
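The abstract names the two training-only modules but gives no implementation details. Below is a minimal PyTorch sketch of the two ideas as described, not the authors' released code: the radial band cutoffs, the amplitude-jitter randomization, and the hinge-style contrastive objective are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the two training-time
# ideas in the abstract: FADR-style randomization of the low/high frequency
# bands, and a FACL-style contrastive loss that separates clean and perturbed
# mid-band features. Cutoffs, jitter, and loss form are assumptions.
import torch
import torch.nn.functional as F


def band_masks(h, w, low_cut=0.1, high_cut=0.4, device="cpu"):
    """Radial low/mid/high masks for a centered 2D spectrum.

    low_cut and high_cut are fractions of the maximum radius (assumed values).
    """
    fy = torch.fft.fftshift(torch.fft.fftfreq(h, device=device))
    fx = torch.fft.fftshift(torch.fft.fftfreq(w, device=device))
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    radius = radius / radius.max()
    low = (radius <= low_cut).float()
    high = (radius > high_cut).float()
    mid = 1.0 - low - high
    return low, mid, high


def fadr_randomize(x, strength=0.5):
    """Jitter the domain-variant low/high bands; keep the mid band intact."""
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    low, mid, high = band_masks(x.shape[-2], x.shape[-1], device=x.device)
    # Random per-element amplitude jitter applied only outside the mid band.
    jitter = 1.0 + strength * (2 * torch.rand_like(spec.real) - 1)
    spec = spec * (mid + (low + high) * jitter)
    out = torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
    return out.clamp(0, 1)


def facl_loss(feat_clean, feat_adv, margin=1.0):
    """Contrastive separation of clean vs. perturbed features.

    A hinge-style formulation; the paper's exact objective may differ.
    """
    dist = F.pairwise_distance(feat_clean.flatten(1), feat_adv.flatten(1))
    return F.relu(margin - dist).mean()


if __name__ == "__main__":
    x = torch.rand(4, 3, 64, 64)      # toy clean batch in [0, 1]
    x_rand = fadr_randomize(x)        # FADR-style domain-randomized view
    f_clean = torch.randn(4, 128)     # stand-ins for encoder features
    f_adv = torch.randn(4, 128)
    print(x_rand.shape, facl_loss(f_clean, f_adv).item())
```

In this reading, FADR supplies domain-randomized training views by perturbing only the low/high bands, while FACL pushes the mid-band features of clean and perturbed images apart; since both operate only during training, nothing is added at inference, consistent with the unchanged inference-time complexity the abstract claims.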
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z) - Towards Transferable Adversarial Attacks with Centralized Perturbation [4.689122927344728]
Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs).
Current transferable attacks create adversarial perturbations over the entire image, resulting in excessive noise that overfits the source model.
We propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating centralized perturbation.
arXiv Detail & Related papers (2023-12-11T08:25:50Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation [111.61781272232646]
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
arXiv Detail & Related papers (2023-07-14T10:50:43Z) - Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP).
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, and makes the model learn against being misled by the unexpected styles observed in unseen target domains.
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially in large-scale benchmarks.
arXiv Detail & Related papers (2023-04-04T17:07:06Z) - Frequency Domain Model Augmentation for Adversarial Attack [91.36850162147678]
For black-box attacks, the gap between the substitute model and the victim model is usually large.
We propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models.
arXiv Detail & Related papers (2022-07-12T08:26:21Z) - From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation [27.04820989579924]
Deep neural networks are not robust against adversarial perturbations.
In this work, we study the adversarial problem from a frequency domain perspective.
We propose an adversarial defense method based on the well-known Wiener filters.
arXiv Detail & Related papers (2020-12-02T22:06:04Z) - Frequency-based Automated Modulation Classification in the Presence of Adversaries [17.930854969511046]
We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-11-02T17:12:22Z)