DNA: Dynamic Network Augmentation
- URL: http://arxiv.org/abs/2112.09277v1
- Date: Fri, 17 Dec 2021 01:43:56 GMT
- Title: DNA: Dynamic Network Augmentation
- Authors: Scott Mahan, Tim Doster, Henry Kvinge
- Abstract summary: We introduce Dynamic Network Augmentation (DNA), which learns input-conditional augmentation policies.
Our model allows for dynamic augmentation policies and performs well on data with geometric transformations conditional on input features.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many classification problems, we want a classifier that is robust to a
range of non-semantic transformations. For example, a human can identify a dog
in a picture regardless of the orientation and pose in which it appears. There
is substantial evidence that this kind of invariance can significantly improve
the accuracy and generalization of machine learning models. A common technique
to teach a model geometric invariances is to augment training data with
transformed inputs. However, which invariances are desired for a given
classification task is not always known. Determining an effective data
augmentation policy can require domain expertise or extensive data
pre-processing. Recent efforts like AutoAugment optimize over a parameterized
search space of data augmentation policies to automate the augmentation
process. While AutoAugment and similar methods achieve state-of-the-art
classification accuracy on several common datasets, they are limited to
learning a single data augmentation policy. Often, different classes or
features call for different geometric invariances. We introduce Dynamic Network
Augmentation (DNA), which learns input-conditional augmentation policies.
Augmentation parameters in our model are outputs of a neural network and are
implicitly learned as the network weights are updated. Our model allows for
dynamic augmentation policies and performs well on data with geometric
transformations conditional on input features.
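The core idea of the abstract — augmentation parameters that are outputs of a neural network, so the applied transformation depends on the input — can be sketched with a toy example. This is a minimal illustration under stated assumptions, not the authors' implementation: the policy network below is a randomly initialized two-layer MLP (in DNA its weights would be learned jointly with the classifier via backpropagation), and the only transformation shown is a 2-D rotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical policy network: a tiny MLP mapping input features to an
# augmentation parameter (here, a rotation angle). In DNA these parameters
# are implicitly learned as the network weights are updated; here the
# weights are random, for illustration only.
W1 = rng.normal(size=(2, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1

def policy_angle(x):
    """Map a 2-D input to an input-conditional rotation angle in (-pi, pi)."""
    h = np.tanh(x @ W1)               # hidden layer
    return np.pi * np.tanh(h @ W2)[0]  # squash output to a valid angle

def rotate(x, theta):
    """Apply the sampled geometric transformation (a rotation) to the input."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ x

def dynamic_augment(x):
    """Input-conditional augmentation: the transformation depends on x itself."""
    return rotate(x, policy_angle(x))

x = np.array([1.0, 0.0])
x_aug = dynamic_augment(x)
# Rotation is norm-preserving, so the augmented point stays on the unit circle.
print(round(float(np.linalg.norm(x_aug)), 6))  # → 1.0
```

Because different inputs produce different angles, two samples from different regions of feature space receive different transformations, which is the behavior a single global augmentation policy cannot express.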
Related papers
- Genetic Learning for Designing Sim-to-Real Data Augmentations [1.03590082373586]
Data augmentations are useful in closing the sim-to-real domain gap when training on synthetic data.
Many image augmentation techniques exist, parametrized by different settings, such as strength and probability.
This paper presents two different interpretable metrics that can be combined to predict how well a certain augmentation policy will work for a specific sim-to-real setting.
arXiv Detail & Related papers (2024-03-11T15:00:56Z)
- Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach [95.74102207187545]
We show that a prior-free autonomous data augmentation's objective can be derived from a representation learning principle.
We then propose a practical surrogate to the objective that can be efficiently optimized and integrated seamlessly into existing methods.
arXiv Detail & Related papers (2022-11-02T02:02:51Z)
- Learning Instance-Specific Augmentations by Capturing Local Invariances [62.70897571389785]
InstaAug is a method for automatically learning input-specific augmentations from data.
We empirically demonstrate that InstaAug learns meaningful input-dependent augmentations for a wide range of transformation classes.
arXiv Detail & Related papers (2022-05-31T18:38:06Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Existing methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals [92.60744099084157]
We propose differentiable data augmentation amenable to gradient-based learning.
We demonstrate the relevance of our approach on the clinically relevant sleep staging classification task.
arXiv Detail & Related papers (2021-06-25T15:28:48Z)
- Rotating spiders and reflecting dogs: a class conditional approach to learning data augmentation distributions [0.0]
We introduce a method by which we can learn class conditional distributions on augmentation transformations.
We give a number of examples where our methods learn different non-semantic transformations depending on class.
Our method can be used as a tool to probe the symmetries intrinsic to a potentially complex dataset.
arXiv Detail & Related papers (2021-06-07T23:36:24Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.