TLDR: Twin Learning for Dimensionality Reduction
- URL: http://arxiv.org/abs/2110.09455v1
- Date: Mon, 18 Oct 2021 16:46:12 GMT
- Title: TLDR: Twin Learning for Dimensionality Reduction
- Authors: Yannis Kalantidis, Carlos Lassance, Jon Almazan, Diane Larlus
- Abstract summary: Dimensionality reduction methods learn low-dimensional spaces where some properties of the initial space, typically the notion of "neighborhood", are preserved.
We propose a dimensionality reduction method for generic input spaces that ports the simple self-supervised learning framework of Barlow Twins to a setting where it is hard or impossible to define an appropriate set of distortions by hand.
- Score: 25.373435473381356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dimensionality reduction methods are unsupervised approaches which learn
low-dimensional spaces where some properties of the initial space, typically
the notion of "neighborhood", are preserved. They are a crucial component of
diverse tasks like visualization, compression, indexing, and retrieval. Aiming
for a totally different goal, self-supervised visual representation learning
has been shown to produce transferable representation functions by learning
models that encode invariance to artificially created distortions, e.g. a set
of hand-crafted image transformations. Unlike manifold learning methods that
usually require propagation on large k-NN graphs or complicated optimization
solvers, self-supervised learning approaches rely on simpler and more scalable
frameworks for learning. In this paper, we unify these two families of
approaches from the angle of manifold learning and propose TLDR, a
dimensionality reduction method for generic input spaces that ports the
simple self-supervised learning framework of Barlow Twins to a setting where it
is hard or impossible to define an appropriate set of distortions by hand. We
propose to use nearest neighbors to build pairs from a training set and a
redundancy reduction loss borrowed from the self-supervised literature to learn
an encoder that produces representations invariant across such pairs. TLDR is a
method that is simple, easy to implement and train, and of broad applicability;
it consists of an offline nearest neighbor computation step that can be highly
approximated, and a straightforward learning process that does not require
mining negative samples to contrast, eigendecompositions, or cumbersome
optimization solvers. By replacing PCA with TLDR, we are able to increase the
performance of GeM-AP by 4% mAP for 128 dimensions, and to retain its
performance with 16x fewer dimensions.
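To make the recipe above concrete, here is a minimal, hedged sketch of the two steps in PyTorch: an exact offline k-NN step builds positive pairs (the abstract notes this step can be highly approximated), and a Barlow Twins-style redundancy-reduction loss trains a linear encoder with an auxiliary projector. All sizes, k, the projector shape, and the loss weight are illustrative assumptions, not the paper's hyper-parameters.
```python
# Minimal sketch of TLDR's two steps, assuming PyTorch.
# Sizes, k, and loss weights below are illustrative, not the paper's settings.
import torch
import torch.nn as nn

def knn_pairs(x, k):
    """Offline step: indices of each point's k nearest neighbors.
    Exact here via torch.cdist; the paper notes this step can be
    heavily approximated (e.g. with an ANN index)."""
    d = torch.cdist(x, x)                    # (N, N) pairwise distances
    d.fill_diagonal_(float("inf"))           # exclude self-matches
    return d.topk(k, largest=False).indices  # (N, k) neighbor indices

def barlow_twins_loss(z1, z2, lambda_offdiag=5e-3):
    """Redundancy-reduction loss: drive the cross-correlation matrix
    of the two views toward the identity."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / z1.shape[0]            # (d, d) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag

N, d_in, d_out, k = 1024, 2048, 128, 3
x = torch.randn(N, d_in)                     # stand-in for precomputed features
neighbors = knn_pairs(x, k)                  # computed once, offline

encoder = nn.Linear(d_in, d_out)             # the dimensionality-reduction map
projector = nn.Sequential(                   # auxiliary head, discarded after training
    nn.Linear(d_out, 2048), nn.ReLU(), nn.Linear(2048, 2048))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

for step in range(100):
    idx = torch.randint(0, N, (256,))                  # anchor batch
    nbr = neighbors[idx, torch.randint(0, k, (256,))]  # one random neighbor each
    z1, z2 = projector(encoder(x[idx])), projector(encoder(x[nbr]))
    loss = barlow_twins_loss(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()
```
After training, `encoder` alone maps inputs to the low-dimensional space; in the paper's retrieval experiments it is this learned projection that replaces PCA on GeM-AP features.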
Related papers
- SimO Loss: Anchor-Free Contrastive Loss for Fine-Grained Supervised Contrastive Learning [0.0]
We introduce a novel anchor-free contrastive learning (CL) method leveraging our proposed Similarity-Orthogonality (SimO) loss.
Our approach minimizes a semi-metric discriminative loss function that simultaneously optimizes two key objectives.
We provide visualizations that demonstrate the impact of SimO loss on the embedding space.
arXiv Detail & Related papers (2024-10-07T17:41:10Z)
- Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement [29.675650285351768]
Machine unlearning (MU) has emerged to enhance the privacy and trustworthiness of deep neural networks.
Approximate MU is a practical method for large-scale models.
We propose a fast-slow parameter update strategy to implicitly approximate the up-to-date salient unlearning direction.
arXiv Detail & Related papers (2024-09-29T15:17:33Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the data-greedy needs of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Meta-Learning Adversarial Bandit Algorithms [55.72892209124227]
We study online meta-learning with bandit feedback.
We learn to tune online mirror descent (OMD) with self-concordant barrier regularizers.
arXiv Detail & Related papers (2023-07-05T13:52:10Z)
- Black Box Few-Shot Adaptation for Vision-Language models [41.49584259596654]
Vision-Language (V-L) models trained with contrastive learning to align the visual and language modalities have been shown to be strong few-shot learners.
We describe a black-box method for V-L few-shot adaptation that operates on pre-computed image and text features.
We propose Linear Feature Alignment (LFA), a simple linear approach for V-L re-alignment in the target domain.
arXiv Detail & Related papers (2023-04-04T12:42:29Z)
- Deep Active Learning Using Barlow Twins [0.0]
The generalisation performance of a convolutional neural network (CNN) is largely determined by the quantity, quality, and diversity of its training images.
The goal of active learning is to draw the most informative samples from the unlabeled pool.
We propose Deep Active Learning using Barlow Twins (DALBT), an active learning method applicable across datasets.
arXiv Detail & Related papers (2022-12-30T12:39:55Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders [45.29194877564103]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the approximation power of such networks and derive a bound that essentially depends on the intrinsic dimension of the data manifold rather than the dimension of ambient space.
arXiv Detail & Related papers (2022-08-22T19:58:03Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space (a simplified sketch follows this list).
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
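As referenced in the PCL entry above, here is a simplified, hypothetical sketch of the prototype-contrast idea in PyTorch. The actual method is an EM procedure that alternates k-means clustering with network updates and also keeps an instance-wise contrastive term and per-prototype concentration estimates; only a basic prototype term is shown, and the temperature and sizes are illustrative.
```python
# Hypothetical sketch of a prototype-contrast term in the spirit of PCL.
# Assumes prototypes come from k-means over the embeddings (not shown).
import torch
import torch.nn.functional as F

def proto_contrast_loss(z, prototypes, assignments, temperature=0.1):
    """Cross-entropy over prototype similarities: pull each embedding toward
    its assigned cluster prototype and away from the other prototypes."""
    z = F.normalize(z, dim=1)                    # (N, d) unit-norm embeddings
    prototypes = F.normalize(prototypes, dim=1)  # (K, d) cluster centers
    logits = z @ prototypes.T / temperature      # (N, K) scaled similarities
    return F.cross_entropy(logits, assignments)

# Toy usage with random stand-ins for embeddings and k-means centers.
z = torch.randn(256, 128)
prototypes = torch.randn(16, 128)
assignments = (F.normalize(z, dim=1) @ F.normalize(prototypes, dim=1).T).argmax(1)
loss = proto_contrast_loss(z, prototypes, assignments)
```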
This list is automatically generated from the titles and abstracts of the papers in this site.