Data Augmentation via Structured Adversarial Perturbations
- URL: http://arxiv.org/abs/2011.03010v1
- Date: Thu, 5 Nov 2020 18:07:55 GMT
- Title: Data Augmentation via Structured Adversarial Perturbations
- Authors: Calvin Luo, Hossein Mobahi, Samy Bengio
- Abstract summary: We propose a method to generate adversarial examples that maintain some desired natural structure.
We demonstrate this approach through two types of image transformations: photometric and geometric.
- Score: 25.31035665982414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is a major component of many machine learning methods with
state-of-the-art performance. Common augmentation strategies work by drawing
random samples from a space of transformations. Unfortunately, such sampling
approaches are limited in expressivity, as they are unable to scale to rich
transformations that depend on numerous parameters due to the curse of
dimensionality. Adversarial examples can be considered as an alternative scheme
for data augmentation. By being trained on the most difficult modifications of
the inputs, the resulting models are then hopefully able to handle other,
presumably easier, modifications as well. The advantage of adversarial
augmentation is that it replaces sampling with the use of a single, calculated
perturbation that maximally increases the loss. The downside, however, is that
these raw adversarial perturbations appear rather unstructured; applying them
often does not produce a natural transformation, contrary to a desirable data
augmentation technique. To address this, we propose a method to generate
adversarial examples that maintain some desired natural structure. We first
construct a subspace that only contains perturbations with the desired
structure. We then project the raw adversarial gradient onto this space to
select a structured transformation that would maximally increase the loss when
applied. We demonstrate this approach through two types of image
transformations: photometric and geometric. Furthermore, we show that training
on such structured adversarial images improves generalization.
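As a concrete illustration of the projection step, here is a minimal, hypothetical PyTorch sketch (not the authors' code): a toy photometric subspace spanned by per-channel brightness directions stands in for the paper's richer photometric and geometric subspaces, and the raw adversarial gradient is projected onto that subspace before a single loss-increasing step is taken.

```python
# Minimal sketch of structured adversarial perturbation via gradient
# projection. Assumes a PyTorch image classifier; the brightness basis
# below is an illustrative stand-in for the paper's structured subspaces.
import torch
import torch.nn.functional as F

def brightness_basis(c, h, w):
    """Toy photometric subspace: one uniform brightness direction per
    colour channel, orthonormalised over pixel space."""
    basis = torch.zeros(c, c, h, w)
    for k in range(c):
        basis[k, k] = 1.0                              # shift channel k uniformly
    flat = basis.flatten(1)
    return (flat / flat.norm(dim=1, keepdim=True)).view(c, c, h, w)

def structured_adversarial_example(model, x, y, basis, eps=0.05):
    """Project the raw adversarial gradient onto span(basis) and take a
    single loss-maximizing step inside that structured subspace.

    x: (B, C, H, W) inputs, y: (B,) labels, basis: (K, C, H, W) orthonormal."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]             # raw adversarial gradient

    flat_grad = grad.flatten(1)                        # (B, C*H*W)
    flat_basis = basis.flatten(1).to(grad)             # (K, C*H*W)
    coeffs = flat_grad @ flat_basis.t()                # (B, K) coordinates in the subspace
    proj = (coeffs @ flat_basis).view_as(x)            # orthogonal projection of the gradient

    # Step along the projected direction (unit-normalised per sample);
    # the clamp keeps pixels valid but may leave the subspace slightly.
    step = proj / (proj.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    return (x + eps * step).clamp(0.0, 1.0).detach()
```

The same projection applies unchanged to any orthonormal basis of structured perturbations (e.g. colour shifts or smooth displacement fields); only the basis construction would differ.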
Related papers
- Enhancing 3D Transformer Segmentation Model for Medical Image with Token-level Representation Learning [9.896550384001348]
This work proposes a token-level representation learning loss that maximizes agreement between token embeddings from different augmented views individually.
We also introduce a simple "rotate-and-restore" mechanism, which rotates and flips one augmented view of the input volume and later restores the order of tokens in the feature maps.
We test our pre-training scheme on two public medical segmentation datasets; on the downstream segmentation task, our method yields larger improvements than other state-of-the-art pre-training methods.
arXiv Detail & Related papers (2024-08-12T01:49:13Z)
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
- Adversarial and Random Transformations for Robust Domain Adaptation and Generalization [9.995765847080596]
We show that by simply applying consistency training with random data augmentation, state-of-the-art results on domain adaptation (DA) and generalization (DG) can be obtained.
The method combining adversarial and random transformations outperforms the state of the art on multiple DA and DG benchmark datasets.
arXiv Detail & Related papers (2022-11-13T02:10:13Z)
- Robust Universal Adversarial Perturbations [2.825323579996619]
We introduce and formulate UAPs robust against real-world transformations.
Our results show that our method can generate UAPs up to 23% more robust than state-of-the-art baselines.
arXiv Detail & Related papers (2022-06-22T06:05:30Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN strictly guarantees transformation invariance while being general and flexible enough to be combined with existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Improving Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator [15.863109283735625]
Adversarial examples can deceive a deep neural network (DNN) by significantly altering its response with imperceptible perturbations.
Most existing adversarial examples lose their malicious functionality once an affine transformation is applied to them.
We propose an affine-invariant adversarial attack which can consistently construct adversarial examples robust over a distribution of affine transformations (see the sketch after this list).
arXiv Detail & Related papers (2021-09-13T09:43:17Z)
- CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals [92.60744099084157]
We propose differentiable data augmentation amenable to gradient-based learning.
We demonstrate the relevance of our approach on the clinically relevant sleep staging classification task.
arXiv Detail & Related papers (2021-06-25T15:28:48Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- On Compositions of Transformations in Contrastive Self-Supervised Learning [66.15514035861048]
In this paper, we generalize contrastive learning to a wider set of transformations.
We find that being invariant to certain transformations and distinctive to others is critical to learning effective video representations.
arXiv Detail & Related papers (2020-03-09T17:56:49Z)
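For the affine-invariant gradient estimator entry above, a generic expectation-over-transformations sketch (a common approach, not necessarily that paper's exact estimator) averages the loss gradient over randomly sampled affine transforms:

```python
# Hypothetical PyTorch sketch: average the loss gradient over random
# affine transforms (rotation + translation) so that the resulting
# adversarial direction remains effective under such transformations.
import math
import torch
import torch.nn.functional as F

def eot_affine_gradient(model, x, y, n_samples=8, max_angle=10.0, max_shift=0.1):
    x = x.clone().detach().requires_grad_(True)
    b, dev = x.size(0), x.device
    total = 0.0
    for _ in range(n_samples):
        angle = (torch.rand(b, device=dev) * 2 - 1) * max_angle * math.pi / 180.0
        tx = (torch.rand(b, device=dev) * 2 - 1) * max_shift
        ty = (torch.rand(b, device=dev) * 2 - 1) * max_shift
        cos, sin = torch.cos(angle), torch.sin(angle)
        theta = torch.stack([torch.stack([cos, -sin, tx], dim=1),
                             torch.stack([sin,  cos, ty], dim=1)], dim=1)  # (B, 2, 3)
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)
        x_t = F.grid_sample(x, grid, align_corners=False)                  # transformed batch
        total = total + F.cross_entropy(model(x_t), y)
    # Gradient of the average loss over sampled transforms w.r.t. the input.
    return torch.autograd.grad(total / n_samples, x)[0]
```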