Adaptive Similarity Bootstrapping for Self-Distillation based
Representation Learning
- URL: http://arxiv.org/abs/2303.13606v2
- Date: Thu, 7 Sep 2023 14:21:10 GMT
- Title: Adaptive Similarity Bootstrapping for Self-Distillation based
Representation Learning
- Authors: Tim Lebailly, Thomas Stegmüller, Behzad Bozorgtabar, Jean-Philippe
Thiran, Tinne Tuytelaars
- Abstract summary: NNCLR goes beyond the cross-view paradigm and uses positive pairs from different images obtained via nearest neighbor bootstrapping in a contrastive setting.
We empirically show that, as opposed to the contrastive learning setting, which relies on negative samples, incorporating nearest neighbor bootstrapping in a self-distillation scheme can lead to a performance drop or even collapse.
We propose to adaptively bootstrap neighbors based on the estimated quality of the latent space.
- Score: 40.94237853380154
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most self-supervised methods for representation learning leverage a
cross-view consistency objective, i.e., they maximize the representation
similarity of a given image's augmented views. The recent work NNCLR goes
beyond the cross-view paradigm and uses positive pairs from different images
obtained via nearest neighbor bootstrapping in a contrastive setting. We
empirically show that, as opposed to the contrastive learning setting, which
relies on negative samples, incorporating nearest neighbor bootstrapping in a
self-distillation scheme can lead to a performance drop or even collapse. We
scrutinize the reason for this unexpected behavior and provide a solution: we
propose to adaptively bootstrap neighbors based on the estimated quality of
the latent space. This yields consistent improvements over both the naive
bootstrapping approach and the original baselines, across various
self-distillation method/backbone combinations and standard downstream tasks.
Our code is publicly available at https://github.com/tileb1/AdaSim.
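The core mechanism lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering, assuming a FIFO support queue of teacher embeddings, cosine-similarity nearest-neighbor lookup, and a per-sample similarity gate standing in for the paper's "estimated quality of the latent space". All names, the queue size, and the thresholding heuristic are illustrative assumptions, not the authors' exact formulation (see https://github.com/tileb1/AdaSim for the real implementation).

```python
# Hypothetical sketch: adaptive nearest-neighbor bootstrapping for
# self-distillation. Queue size, gating rule, and names are assumptions.
import torch
import torch.nn.functional as F

SUPPORT_SIZE = 4096  # assumed size of the FIFO support set


class SupportQueue:
    """FIFO queue of past teacher embeddings used for neighbor lookup."""

    def __init__(self, dim: int, size: int = SUPPORT_SIZE):
        self.buffer = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, z: torch.Tensor):
        n = z.shape[0]
        idx = torch.arange(self.ptr, self.ptr + n) % self.buffer.shape[0]
        self.buffer[idx] = F.normalize(z, dim=1)
        self.ptr = (self.ptr + n) % self.buffer.shape[0]


@torch.no_grad()
def adaptive_nn_bootstrap(z_teacher, queue, threshold):
    """Swap each teacher embedding for its nearest queue neighbor, but only
    when that neighbor is similar enough; otherwise keep the embedding
    itself. Naive bootstrapping would always swap, which the paper shows
    can degrade or even collapse self-distillation."""
    z = F.normalize(z_teacher, dim=1)
    sims = z @ queue.buffer.T                     # cosine similarities, (B, S)
    top_sim, top_idx = sims.max(dim=1)            # best neighbor per sample
    use_nn = (top_sim >= threshold).unsqueeze(1)  # per-sample gate
    targets = torch.where(use_nn, queue.buffer[top_idx], z)
    return targets, top_sim.mean()
```

In a full training loop, `targets` would replace the raw teacher embeddings in the usual self-distillation cross-entropy against the student's predictions, and `threshold` could, for instance, track a running mean of the returned neighbor similarities; both choices are assumptions made for this sketch.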
Related papers
- Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations according to the original dataset.
We build anchors from different views based on these representations, which increases the quality of the shared anchor graph.
arXiv Detail & Related papers (2024-09-25T13:11:17Z)
- Bootstrap Latents of Nodes and Neighbors for Graph Self-Supervised Learning [27.278097015083343]
Contrastive learning requires negative samples to prevent model collapse and learn discriminative representations.
We introduce a cross-attention module to predict the supportiveness score of a neighbor with respect to the anchor node (a minimal sketch of such a scorer follows this list).
Our method mitigates class collision from negative and noisy positive samples, concurrently enhancing intra-class compactness.
arXiv Detail & Related papers (2024-08-09T14:17:52Z)
- Unsupervised Representation Learning by Balanced Self Attention Matching [2.3020018305241337]
We present BAM, a self-supervised method for embedding image features.
We obtain rich representations and avoid feature collapse by minimizing a loss that matches these distributions to their globally balanced and entropy regularized version.
We show competitive performance with leading methods on both semi-supervised and transfer-learning benchmarks.
arXiv Detail & Related papers (2024-08-04T12:52:44Z)
- Asymmetric Patch Sampling for Contrastive Learning [17.922853312470398]
Asymmetric appearance between the views of a positive pair effectively reduces the risk of representation degradation in contrastive learning.
We propose a novel asymmetric patch sampling strategy for contrastive learning that boosts appearance asymmetry for better representations.
arXiv Detail & Related papers (2023-06-05T13:10:48Z)
- Sample and Predict Your Latent: Modality-free Sequential Disentanglement via Contrastive Estimation [2.7759072740347017]
We introduce a self-supervised sequential disentanglement framework based on contrastive estimation with no external signals.
In practice, we propose a unified, efficient, and easy-to-code sampling strategy for semantically similar and dissimilar views of the data.
Our method achieves state-of-the-art results compared to existing techniques.
arXiv Detail & Related papers (2023-05-25T10:50:30Z)
- Learnable Pillar-based Re-ranking for Image-Text Retrieval [119.9979224297237]
Image-text retrieval aims to bridge the modality gap and retrieve cross-modal content based on semantic similarities.
Re-ranking, a popular post-processing practice, has revealed the superiority of capturing neighbor relations in single-modality retrieval tasks.
We propose a novel learnable pillar-based re-ranking paradigm for image-text retrieval.
arXiv Detail & Related papers (2023-04-25T04:33:27Z)
- Non-contrastive representation learning for intervals from well logs [58.70164460091879]
The representation learning problem in the oil & gas industry aims to construct a model that provides a representation based on logging data for a well interval.
One possible approach is self-supervised learning (SSL).
We are the first to introduce non-contrastive SSL for well-logging data.
arXiv Detail & Related papers (2022-09-28T13:27:10Z)
- Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap [64.60460828425502]
We propose a new guarantee on the downstream performance of contrastive learning.
Our new theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations.
We propose an unsupervised model selection metric ARC that aligns well with downstream accuracy.
arXiv Detail & Related papers (2022-03-25T05:36:26Z)
- Robust Contrastive Learning against Noisy Views [79.71880076439297]
We propose a new contrastive loss function that is robust against noisy views.
We show that our approach provides consistent improvements over the state of the art on image, video, and graph contrastive learning benchmarks.
arXiv Detail & Related papers (2022-01-12T05:24:29Z)
- QK Iteration: A Self-Supervised Representation Learning Algorithm for Image Similarity [0.0]
We present a new contrastive self-supervised representation learning algorithm in the context of Copy Detection in the 2021 Image Similarity Challenge hosted by Facebook AI Research.
Our algorithms achieved a micro-AP score of 0.3401 on the Phase 1 leaderboard, significantly improving over the baseline $\mu$AP of 0.1556.
arXiv Detail & Related papers (2021-11-15T18:01:05Z)
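As flagged in the Bootstrap Latents entry above, a cross-attention supportiveness scorer admits a compact sketch. The rendering below is hypothetical: single-head scaled dot-product attention with a sigmoid readout is an assumed architecture, and all names are illustrative rather than the authors' implementation.

```python
# Hypothetical sketch: scoring the "supportiveness" of neighbors with
# respect to an anchor via cross-attention; architecture and names are
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class SupportivenessScorer(nn.Module):
    """Predicts a score in (0, 1) for each neighbor given an anchor."""

    def __init__(self, dim: int, attn_dim: int = 64):
        super().__init__()
        self.q = nn.Linear(dim, attn_dim)  # anchor -> query
        self.k = nn.Linear(dim, attn_dim)  # neighbors -> keys

    def forward(self, anchor: torch.Tensor, neighbors: torch.Tensor):
        # anchor: (B, D); neighbors: (B, N, D)
        q = self.q(anchor).unsqueeze(1)                # (B, 1, A)
        k = self.k(neighbors)                          # (B, N, A)
        logits = (q * k).sum(-1) / k.shape[-1] ** 0.5  # scaled dot product
        return torch.sigmoid(logits)                   # (B, N) scores


# Usage: down-weight noisy positive pairs by their predicted supportiveness.
scorer = SupportivenessScorer(dim=256)
anchor = torch.randn(8, 256)
neighbors = torch.randn(8, 5, 256)
weights = scorer(anchor, neighbors)  # (8, 5), differentiable in the scorer
```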