Learning Instance-Specific Augmentations by Capturing Local Invariances
- URL: http://arxiv.org/abs/2206.00051v3
- Date: Tue, 30 May 2023 15:25:51 GMT
- Title: Learning Instance-Specific Augmentations by Capturing Local Invariances
- Authors: Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh,
Adam Foster, Hyunjik Kim
- Abstract summary: InstaAug is a method for automatically learning input-specific augmentations from data.
We empirically demonstrate that InstaAug learns meaningful input-dependent augmentations for a wide range of transformation classes.
- Score: 62.70897571389785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce InstaAug, a method for automatically learning input-specific
augmentations from data. Previous methods for learning augmentations have
typically assumed independence between the original input and the
transformation applied to that input. This can be highly restrictive, as the
invariances we hope our augmentation will capture are themselves often highly
input dependent. InstaAug instead introduces a learnable invariance module that
maps from inputs to tailored transformation parameters, allowing local
invariances to be captured. This can be simultaneously trained alongside the
downstream model in a fully end-to-end manner, or separately learned for a
pre-trained model. We empirically demonstrate that InstaAug learns meaningful
input-dependent augmentations for a wide range of transformation classes, which
in turn provides better performance on both supervised and self-supervised
tasks.
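The core idea above — a learnable module that maps each input to its own transformation parameters — can be sketched in a few lines. The toy linear module, the rotation transformation, and all names below are illustrative assumptions for this sketch, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "invariance module": a linear map from an input vector to
# the half-width of a per-input rotation range [-theta_max, theta_max].
W = rng.normal(scale=0.1, size=(4,))  # learnable parameters

def invariance_module(x):
    """Predict an input-specific maximum rotation angle (radians)."""
    # Softplus keeps the predicted range non-negative.
    return np.log1p(np.exp(W @ x))

def sample_augmentation(x):
    """Sample a transformation parameter tailored to this input."""
    theta_max = invariance_module(x)
    return rng.uniform(-theta_max, theta_max)

def rotate2d(p, theta):
    """Apply the sampled transformation (here: rotate a 2-D point)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * p[0] - s * p[1], s * p[0] + c * p[1]])

x = np.array([1.0, 0.5, -0.2, 0.3])   # input features
theta = sample_augmentation(x)        # input-dependent parameter
p_aug = rotate2d(np.array([1.0, 0.0]), theta)
print(theta, p_aug)
```

Because the predicted range depends on the input (and on the trainable weights `W`), different inputs get different augmentation distributions; in the paper's end-to-end setting, gradients would flow through a reparameterised sample back into the invariance module alongside the downstream model.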
Related papers
- Learning to Transform for Generalizable Instance-wise Invariance [48.647925994707855]
Given any image, we use a normalizing flow to predict a distribution over transformations and average the predictions over them.
This normalizing flow is trained end-to-end and can learn a much larger range of transformations than Augerino and InstaAug.
When used as data augmentation, our method shows accuracy and robustness gains on CIFAR 10, CIFAR10-LT, and TinyImageNet.
arXiv Detail & Related papers (2023-09-28T17:59:58Z)
- Amortised Invariance Learning for Contrastive Self-Supervision [11.042648980854485]
We introduce the notion of amortised invariance learning for contrastive self-supervision.
We show that our amortised features provide a reliable way to learn diverse downstream tasks with different invariance requirements.
This provides an exciting perspective that opens up new horizons in the field of general purpose representation learning.
arXiv Detail & Related papers (2023-02-24T16:15:11Z)
- EquiMod: An Equivariance Module to Improve Self-Supervised Learning [77.34726150561087]
Self-supervised visual representation methods are closing the gap with supervised learning performance.
These methods rely on maximizing the similarity between embeddings of related synthetic inputs created through data augmentations.
We introduce EquiMod, a generic equivariance module that structures the learned latent space.
arXiv Detail & Related papers (2022-11-02T16:25:54Z)
- Regularising for invariance to data augmentation improves supervised learning [82.85692486314949]
We show that using multiple augmentations per input can improve generalisation.
We propose an explicit regulariser that encourages this invariance on the level of individual model predictions.
arXiv Detail & Related papers (2022-03-07T11:25:45Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Improving Transferability of Representations via Augmentation-Aware Self-Supervision [117.15012005163322]
AugSelf is an auxiliary self-supervised loss that learns the difference of augmentation parameters between two randomly augmented samples.
Our intuition is that AugSelf encourages the model to preserve augmentation-aware information in learned representations, which could be beneficial for their transferability.
AugSelf can easily be incorporated into recent state-of-the-art representation learning methods with a negligible additional training cost.
arXiv Detail & Related papers (2021-11-18T10:43:50Z)
- Rotating spiders and reflecting dogs: a class conditional approach to learning data augmentation distributions [0.0]
We introduce a method for learning class-conditional distributions over augmentation transformations.
We give a number of examples where our method learns different non-trivial transformations depending on the class.
Our method can be used as a tool to probe the symmetries intrinsic to a potentially complex dataset.
arXiv Detail & Related papers (2021-06-07T23:36:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.