Adversarial Robustness with Non-uniform Perturbations
- URL: http://arxiv.org/abs/2102.12002v1
- Date: Wed, 24 Feb 2021 00:54:43 GMT
- Title: Adversarial Robustness with Non-uniform Perturbations
- Authors: Ecenaz Erdemir, Jeffrey Bickford, Luca Melis and Sergul Aydore
- Abstract summary: Prior work mainly focuses on crafting adversarial examples with small, uniform, norm-bounded perturbations across features to maintain the requirement of imperceptibility.
Our approach can be adapted to other domains where non-uniform perturbations more accurately represent realistic adversarial examples.
- Score: 3.804240190982695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robustness of machine learning models is critical for security related
applications, where real-world adversaries are uniquely focused on evading
neural network based detectors. Prior work mainly focuses on crafting adversarial
examples with small uniform norm-bounded perturbations across features to
maintain the requirement of imperceptibility. Although such approaches are
valid for images, uniform perturbations do not result in realistic adversarial
examples in domains such as malware, finance, and social networks. For these
types of applications, features typically have some semantically meaningful
dependencies. The key idea of our proposed approach is to enable non-uniform
perturbations that can adequately represent these feature dependencies during
adversarial training. We propose using characteristics of the empirical data
distribution, namely the correlations between features and the importance of
the features themselves. Using experimental datasets for malware
classification, credit risk prediction, and spam detection, we show that our
approach is more robust to real-world attacks. Our approach can be adapted to
other domains where non-uniform perturbations more accurately represent
realistic adversarial examples.
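
To make the key idea concrete, here is a minimal sketch, not the paper's implementation, of how non-uniform constraints can be wired into a PGD-style attack step. The importance weights `w`, the inverse covariance `sigma_inv`, and the budget `eps` are illustrative stand-ins for whatever feature-importance scores and correlation structure one derives from the training data.

```python
import numpy as np

def nonuniform_pgd_step(grad, w, eps, alpha=0.1):
    """One PGD-style ascent step under a non-uniform (weighted L-inf) budget.

    Instead of the uniform constraint |delta_i| <= eps, feature i is allowed
    |delta_i| <= eps / w_i, so high-importance features (large w_i) receive a
    smaller budget. `w` is a hypothetical importance vector, e.g. derived
    from SHAP values or mutual information on the training data.
    """
    delta = alpha * np.sign(grad)           # gradient-sign ascent direction
    bounds = eps / w                        # per-feature budget
    return np.clip(delta, -bounds, bounds)  # project onto the weighted box

def mahalanobis_project(delta, sigma_inv, eps):
    """Project delta onto the ellipsoid delta^T Sigma^{-1} delta <= eps^2.

    The empirical covariance couples the features, so the perturbation
    follows directions along which the data actually varies (correlated
    features move together) instead of treating features independently.
    """
    norm = np.sqrt(delta @ sigma_inv @ delta)
    return delta if norm <= eps else delta * (eps / norm)

# Toy usage: four tabular features, two of them strongly correlated.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]     # induce correlation
sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
w = np.array([2.0, 1.0, 1.0, 0.5])          # illustrative importance scores

grad = rng.normal(size=4)                   # stand-in for a loss gradient
delta = nonuniform_pgd_step(grad, w, eps=0.5)
delta = mahalanobis_project(delta, sigma_inv, eps=0.5)
print("non-uniform perturbation:", delta)
```

Whether a high-importance feature should receive a smaller or larger budget, and whether the box and ellipsoid constraints are combined or used separately, are threat-model choices; the point is only that the constraint set is shaped by the empirical data distribution rather than being a uniform norm ball.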
Related papers
- An Attention-based Framework for Fair Contrastive Learning [2.1605931466490795]
We propose a new method for fair contrastive learning that employs an attention mechanism to model bias-causing interactions.
Our attention mechanism avoids bias-causing samples that confound the model and focuses on bias-reducing samples that help learn semantically meaningful representations.
arXiv Detail & Related papers (2024-11-22T07:11:35Z)
- Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training [43.766504246864045]
We propose a novel uncertainty-aware distributional adversarial training method.
Our approach achieves state-of-the-art adversarial robustness and maintains natural performance.
arXiv Detail & Related papers (2024-11-05T07:26:24Z) - Towards Improving Robustness Against Common Corruptions in Object
Detectors Using Adversarial Contrastive Learning [10.27974860479791]
This paper proposes an adversarial contrastive learning framework that simultaneously enhances neural network robustness against adversarial attacks and common corruptions.
By focusing on improving performance under adversarial and real-world conditions, our approach aims to bolster the robustness of neural networks in safety-critical applications.
arXiv Detail & Related papers (2023-11-14T06:13:52Z) - Exploring Robust Features for Improving Adversarial Robustness [11.935612873688122]
We explore robust features that are not affected by adversarial perturbations in order to improve the model's adversarial robustness.
Specifically, we propose a feature disentanglement model to segregate robust features from non-robust and domain-specific features.
The trained domain discriminator identifies the domain-specific features of clean images and adversarial examples almost perfectly.
arXiv Detail & Related papers (2023-09-07T12:02:00Z)
- How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high-dimensional input data.
We introduce a simple, generic, and generalisable framework in which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Robust Transferable Feature Extractors: Learning to Defend Pre-Trained
Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z) - Decorrelative Network Architecture for Robust Electrocardiogram
Classification [4.808817930937323]
It is not possible to train networks that are accurate in all scenarios.
Deep learning methods sample the model parameter space to estimate uncertainty, but these parameters are often subject to the same vulnerabilities, which adversarial attacks can exploit.
We propose a novel ensemble approach based on feature decorrelation and Fourier partitioning that teaches networks diverse, complementary features.
arXiv Detail & Related papers (2022-07-19T02:36:36Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Learning to Separate Clusters of Adversarial Representations for Robust
Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage this distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.