Optimizing transformations for contrastive learning in a differentiable framework
- URL: http://arxiv.org/abs/2207.13367v1
- Date: Wed, 27 Jul 2022 08:47:57 GMT
- Title: Optimizing transformations for contrastive learning in a differentiable framework
- Authors: Camille Ruppli, Pietro Gori, Roberto Ardon, Isabelle Bloch
- Abstract summary: We propose a framework to find optimal transformations for contrastive learning using a differentiable transformation network.
Our method improves performance in the low annotated data regime, both in supervised accuracy and in convergence speed.
Experiments were performed on 34000 2D slices of brain Magnetic Resonance Images and 11200 chest X-ray images.
- Score: 4.828899860513713
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current contrastive learning methods use random transformations, sampled from a large list with fixed hyperparameters, to learn invariance from an unannotated database. Following previous works that introduce a small amount of supervision, we propose a framework to find optimal transformations for contrastive learning using a differentiable transformation network. Our method improves performance in the low annotated data regime, both in supervised accuracy and in convergence speed. In contrast to previous work, no generative model is needed for transformation optimization. Transformed images retain the information relevant to solving the supervised task, here classification. Experiments were performed on 34000 2D slices of brain Magnetic Resonance Images and 11200 chest X-ray images. On both datasets, with 10% of labeled data, our model achieves better performance than a fully supervised model trained with 100% of the labels.
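To make the abstract's setup concrete, below is a minimal sketch (not the authors' released code) of contrastive learning with a differentiable transformation network, written in PyTorch. The choice of brightness/contrast as the learned transformations, the noise input that makes the two views differ, and the NT-Xent wiring are all assumptions of this sketch; the paper additionally uses a small supervised signal, omitted here.

```python
# Minimal sketch, assuming PyTorch: a differentiable transformation network
# whose parameters receive gradients through the contrastive loss itself.
# Brightness/contrast as the learned transformations and the noise input
# are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformNet(nn.Module):
    """Samples noise and maps it to per-image brightness/contrast parameters."""
    def __init__(self, noise_dim=8):
        super().__init__()
        self.noise_dim = noise_dim
        self.head = nn.Linear(noise_dim, 2)

    def forward(self, x):
        eps = torch.randn(x.size(0), self.noise_dim)   # stochastic: two calls give two views
        params = torch.tanh(self.head(eps))            # bounded in (-1, 1)
        contrast = 1.0 + 0.5 * params[:, 0].view(-1, 1, 1, 1)
        brightness = 0.5 * params[:, 1].view(-1, 1, 1, 1)
        return (contrast * x + brightness).clamp(0.0, 1.0)

def nt_xent(z1, z2, tau=0.1):
    """Standard NT-Xent contrastive loss between two batches of embeddings."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])         # positives sit n apart
    return F.cross_entropy(sim, targets)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128))  # stand-in encoder
t_net = TransformNet()
opt = torch.optim.Adam(list(encoder.parameters()) + list(t_net.parameters()), lr=1e-3)

x = torch.rand(16, 1, 32, 32)                  # dummy grayscale batch
opt.zero_grad()
loss = nt_xent(encoder(t_net(x)), encoder(t_net(x)))
loss.backward()                                # gradients flow into t_net, no generative model needed
opt.step()
```

Because the transformations are applied by a differentiable network, their parameters are updated by ordinary backpropagation, which is the property that lets the paper dispense with a generative model.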
Related papers
- Automatic Data Augmentation Learning using Bilevel Optimization for Histopathological Images [12.166446006133228]
Data Augmentation (DA) can be used during training to generate additional samples by applying transformations to existing ones.
DA is not only dataset-specific; it also requires domain knowledge.
We propose an automatic DA learning method to improve model training.
arXiv Detail & Related papers (2023-07-21T17:22:22Z)
- Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z)
- TransformNet: Self-supervised representation learning through predicting geometric transformations [0.8098097078441623]
We describe an unsupervised semantic feature learning approach based on recognizing the geometric transformation applied to the input data.
The underlying idea is that someone unaware of the objects in an image would be unable to predict the geometric transformation that was applied to it.
arXiv Detail & Related papers (2022-02-08T22:41:01Z)
- Adaptive Image Transformations for Transfer-based Adversarial Attack [73.74904401540743]
We propose a novel architecture, called Adaptive Image Transformation Learner (AITL).
Our elaborately designed learner adaptively selects the most effective combination of image transformations specific to the input image.
Our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
arXiv Detail & Related papers (2021-11-27T08:15:44Z)
- Efficient Vision Transformers via Fine-Grained Manifold Distillation [96.50513363752836]
Vision transformer architectures have shown extraordinary performance on many computer vision tasks.
Although network performance is boosted, transformers often require more computational resources.
We propose to excavate useful information from the teacher transformer through the relationship between images and their divided patches.
arXiv Detail & Related papers (2021-07-03T08:28:34Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- Learning Representational Invariances for Data-Efficient Action Recognition [52.23716087656834]
We show that our data augmentation strategy leads to promising performance on the Kinetics-100, UCF-101, and HMDB-51 datasets.
We also validate our data augmentation strategy in the fully supervised setting and demonstrate improved performance.
arXiv Detail & Related papers (2021-03-30T17:59:49Z)
- Transformation Consistency Regularization - A Semi-Supervised Paradigm for Image-to-Image Translation [18.870983535180457]
We propose Transformation Consistency Regularization, which delves into a more challenging setting of image-to-image translation.
We evaluate the efficacy of our algorithm on three different applications: image colorization, denoising and super-resolution.
Our method is significantly data-efficient, requiring only around 10-20% of labeled samples to achieve image reconstructions similar to its fully supervised counterpart.
arXiv Detail & Related papers (2020-07-15T17:41:35Z)
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [57.33699905852397]
We propose an online algorithm, SwAV, that takes advantage of contrastive methods without computing pairwise comparisons.
Our method simultaneously clusters the data while enforcing consistency between cluster assignments.
Our method can be trained with large and small batches and can scale to unlimited amounts of data.
arXiv Detail & Related papers (2020-06-17T14:00:42Z)
- On the Generalization Effects of Linear Transformations in Data Augmentation [32.01435459892255]
Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks.
We consider a family of linear transformations and study their effects on the ridge estimator in an over-parametrized linear regression setting.
We propose an augmentation scheme that searches over the space of transformations according to how uncertain the model is about the transformed data.
arXiv Detail & Related papers (2020-05-02T04:10:21Z)
- On Box-Cox Transformation for Image Normality and Pattern Classification [0.6548580592686074]
This paper revolves around the utility of the Box-Cox transformation as a pre-processing step to transform two-dimensional data (a minimal sketch of the transformation follows this list).
We compare the effect of this lightweight Box-Cox transformation with well-established state-of-the-art low-light image enhancement techniques.
We also demonstrate the effectiveness of our approach on several test-bed data sets for generic improvement of the visual appearance of images.
arXiv Detail & Related papers (2020-04-15T17:10:18Z)
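Since the last entry above centers on the Box-Cox transformation, here is a minimal sketch of that transformation applied as an image pre-processing step, assuming NumPy/SciPy. The transform is y = (x^lambda - 1)/lambda for lambda != 0 and y = ln(x) for lambda = 0, defined for x > 0; the positivity shift and the rescaling to [0, 1] below are assumptions of this sketch, not that paper's pipeline.

```python
# Minimal sketch, assuming NumPy/SciPy: Box-Cox intensity normalization.
# y = (x**lam - 1) / lam for lam != 0, log(x) for lam == 0; scipy fits lam by MLE.
import numpy as np
from scipy import stats

def boxcox_normalize(img: np.ndarray) -> np.ndarray:
    """Apply a Box-Cox transform with an MLE-fitted lambda to pixel intensities."""
    flat = img.astype(np.float64).ravel()
    flat = flat - flat.min() + 1e-6              # Box-Cox requires strictly positive inputs
    transformed, lam = stats.boxcox(flat)        # scipy returns (values, fitted lambda)
    out = transformed.reshape(img.shape)
    return (out - out.min()) / (out.max() - out.min())  # rescale to [0, 1] for viewing

img = np.random.rand(64, 64) ** 3                # skewed dummy intensity image
normalized = boxcox_normalize(img)
```

Fitting lambda by maximum likelihood pushes the intensity histogram toward normality, which is the "image normality" property the entry's title refers to.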