Rotating spiders and reflecting dogs: a class conditional approach to
learning data augmentation distributions
- URL: http://arxiv.org/abs/2106.04009v1
- Date: Mon, 7 Jun 2021 23:36:24 GMT
- Title: Rotating spiders and reflecting dogs: a class conditional approach to
learning data augmentation distributions
- Authors: Scott Mahan, Henry Kvinge, Tim Doster
- Abstract summary: We introduce a method by which we can learn class conditional distributions on augmentation transformations.
We give a number of examples where our method learns different non-meaningful transformations depending on class.
Our method can be used as a tool to probe the symmetries intrinsic to a potentially complex dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building invariance to non-meaningful transformations is essential to
building efficient and generalizable machine learning models. In practice, the
most common way to learn invariance is through data augmentation. There has
been recent interest in the development of methods that learn distributions on
augmentation transformations from the training data itself. While such
approaches are beneficial since they are responsive to the data, they ignore
the fact that in many situations the range of transformations to which a model
needs to be invariant changes depending on the particular class an input belongs
to. For example, if a model needs to be able to predict whether an image
contains a starfish or a dog, we may want to apply random rotations to starfish
images during training (since these do not have a preferred orientation), but
we would not want to do this to images of dogs. In this work we introduce a
method by which we can learn class conditional distributions on augmentation
transformations. We give a number of examples where our method learns different
non-meaningful transformations depending on class and further show how our
method can be used as a tool to probe the symmetries intrinsic to a potentially
complex dataset.
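The starfish-versus-dog example above can be made concrete with a minimal sketch of class-conditional augmentation. This is an illustration only, not the paper's method: the class names, the per-class angle ranges, and the use of 2-D points in place of images are all assumptions, and the paper *learns* these distributions rather than fixing them by hand.

```python
# Sketch: class-conditional augmentation distributions (hypothetical).
# Each class gets its own distribution over rotation angles: starfish
# have no preferred orientation, so full rotations are label-preserving;
# dogs are upright, so only the identity transformation is applied.
import math
import random

def rotate_point(x, y, theta):
    """Rotate a 2-D point about the origin by angle theta (radians)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Hand-fixed per-class angle ranges; the paper learns these from data.
ANGLE_RANGE = {
    "starfish": (0.0, 2 * math.pi),  # rotations do not change the label
    "dog": (0.0, 0.0),               # identity only
}

def augment(point, label, rng=random):
    """Sample a rotation from the label's distribution and apply it."""
    lo, hi = ANGLE_RANGE[label]
    theta = rng.uniform(lo, hi)
    return rotate_point(*point, theta)
```

In a real pipeline the dictionary of ranges would be replaced by learned parameters, updated jointly with the classifier so that each class ends up with the widest transformation range that does not hurt its training loss.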
Related papers
- Learning to Transform for Generalizable Instance-wise Invariance [48.647925994707855]
Given any image, we use a normalizing flow to predict a distribution over transformations and average the predictions over them.
This normalizing flow is trained end-to-end and can learn a much larger range of transformations than Augerino and InstaAug.
When used as data augmentation, our method shows accuracy and robustness gains on CIFAR 10, CIFAR10-LT, and TinyImageNet.
arXiv Detail & Related papers (2023-09-28T17:59:58Z)
- Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z)
- EquiMod: An Equivariance Module to Improve Self-Supervised Learning [77.34726150561087]
Self-supervised visual representation methods are closing the gap with supervised learning performance.
These methods rely on maximizing the similarity between embeddings of related synthetic inputs created through data augmentations.
We introduce EquiMod, a generic equivariance module that structures the learned latent space.
arXiv Detail & Related papers (2022-11-02T16:25:54Z)
- Learning Instance-Specific Augmentations by Capturing Local Invariances [62.70897571389785]
InstaAug is a method for automatically learning input-specific augmentations from data.
We empirically demonstrate that InstaAug learns meaningful input-dependent augmentations for a wide range of transformation classes.
arXiv Detail & Related papers (2022-05-31T18:38:06Z)
- Do Deep Networks Transfer Invariances Across Classes? [123.84237389985236]
We show how a generative approach for learning the nuisance transformations can help transfer invariances across classes.
Our results provide one explanation for why classifiers generalize poorly on unbalanced and long-tailed distributions.
arXiv Detail & Related papers (2022-03-18T04:38:18Z)
- TransformNet: Self-supervised representation learning through predicting geometric transformations [0.8098097078441623]
We describe an unsupervised semantic feature learning approach for recognizing the geometric transformation applied to the input data.
The basic idea is that someone unaware of the objects in an image would not be able to quantitatively predict the geometric transformation that was applied to it.
arXiv Detail & Related papers (2022-02-08T22:41:01Z)
- DNA: Dynamic Network Augmentation [0.0]
We introduce Dynamic Network Augmentation (DNA), which learns input-conditional augmentation policies.
Our model allows for dynamic augmentation policies and performs well on data with geometric transformations conditional on input features.
arXiv Detail & Related papers (2021-12-17T01:43:56Z)
- Training or Architecture? How to Incorporate Invariance in Neural Networks [14.162739081163444]
We propose a method for provably invariant network architectures with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network.
We analyze properties of such approaches, extend them to equivariant networks, and demonstrate their advantages in terms of robustness as well as computational efficiency in several numerical examples.
arXiv Detail & Related papers (2021-06-18T10:31:00Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way, that it makes transformation outcome predictable by auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
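Several of the papers above (e.g. Augerino-style approaches and DNA) learn the *width* of an augmentation distribution by gradient descent. The usual trick is reparameterization: sample the transformation as a deterministic function of a learnable parameter and an independent noise variable, so gradients flow to the parameter. The function below is a generic sketch of that trick for a uniform rotation-angle distribution; the exact parameterization is an assumption, not taken from any of the listed papers.

```python
# Sketch: reparameterized sampling of a rotation angle with a learnable
# width parameter (hypothetical parameterization).
import random

def sample_angle(width, u=None, rng=random):
    """Draw theta ~ Uniform(-width/2, +width/2) via reparameterization:

        theta = width * (u - 0.5),  u ~ Uniform(0, 1)

    Because theta is a differentiable function of `width`
    (d theta / d width = u - 0.5), a training loss computed on data
    augmented with theta can backpropagate into the learned width.
    Passing `u` explicitly makes the sampler deterministic for tests.
    """
    if u is None:
        u = rng.random()
    return width * (u - 0.5)
```

A class-conditional method would keep one such `width` per class (or predict it from the input), letting a starfish class grow toward the full circle while a dog class shrinks toward zero.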
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.