Addressing Sample Inefficiency in Multi-View Representation Learning
- URL: http://arxiv.org/abs/2312.10725v1
- Date: Sun, 17 Dec 2023 14:14:31 GMT
- Title: Addressing Sample Inefficiency in Multi-View Representation Learning
- Authors: Kumar Krishna Agrawal, Arna Ghosh, Adam Oberman, Blake Richards
- Abstract summary: Non-contrastive self-supervised learning (NC-SSL) methods have shown great promise for label-free representation learning in computer vision.
We provide theoretical insights on the implicit bias of the BarlowTwins and VICReg losses that can explain these heuristics and guide the development of more principled recommendations.
- Score: 6.621303125642322
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Non-contrastive self-supervised learning (NC-SSL) methods like BarlowTwins
and VICReg have shown great promise for label-free representation learning in
computer vision. Despite the apparent simplicity of these techniques,
researchers must rely on several empirical heuristics to achieve competitive
performance, most notably using high-dimensional projector heads and two
augmentations of the same image. In this work, we provide theoretical insights
on the implicit bias of the BarlowTwins and VICReg loss that can explain these
heuristics and guide the development of more principled recommendations. Our
first insight is that the orthogonality of the features is more critical than
projector dimensionality for learning good representations. Based on this, we
empirically demonstrate that low-dimensional projector heads are sufficient
with appropriate regularization, contrary to the existing heuristic. Our second
theoretical insight suggests that using multiple data augmentations better
represents the desiderata of the SSL objective. Based on this, we demonstrate
that leveraging more augmentations per sample improves representation quality
and trainability. In particular, it improves optimization convergence, leading
to better features emerging earlier in the training. Remarkably, we demonstrate
that we can reduce the pretraining dataset size by up to 4x while maintaining
accuracy and improving convergence simply by using more data augmentations.
Combining these insights, we present practical pretraining recommendations that
cut wall-clock time by 2x and improve performance on the CIFAR-10/STL-10
datasets using a ResNet-50 backbone. Thus, this work provides a theoretical
insight into NC-SSL and produces practical recommendations for enhancing its
sample and compute efficiency.
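
To make the two recommendations concrete, below is a minimal PyTorch-style sketch (not the authors' code; the exact regularizer, projector width, backbone, and augmentation pipeline in the paper may differ). It shows a BarlowTwins-style objective with a deliberately low-dimensional projector, averaged over all pairs of M augmentations of each image.

```python
# Minimal sketch (assumptions: a stand-in MLP backbone, projector width 256,
# lambda = 5e-3, M = 4 augmentations); not the authors' released code.
import itertools
import torch
import torch.nn as nn


def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    """Redundancy-reduction loss on the cross-correlation of two embedding batches."""
    n = z_a.size(0)
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)  # standardize each feature over the batch
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                            # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()   # pull C_ii toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # push C_ij (i != j) toward 0
    return on_diag + lambd * off_diag


def multi_augmentation_loss(encoder, projector, views):
    """Average the pairwise loss over all (M choose 2) pairs of augmented views."""
    zs = [projector(encoder(v)) for v in views]
    pairs = list(itertools.combinations(range(len(zs)), 2))
    return sum(barlow_twins_loss(zs[i], zs[j]) for i, j in pairs) / len(pairs)


# Toy usage: a flattened-image MLP stands in for the ResNet-50 backbone, and the
# projector maps 2048-d features to a low-dimensional (256-d) space.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2048), nn.ReLU())
projector = nn.Linear(2048, 256)
views = [torch.randn(128, 3, 32, 32) for _ in range(4)]  # M = 4 augmentations per image
loss = multi_augmentation_loss(encoder, projector, views)
```

With M = 4 augmentations, each image contributes six pairwise terms per step, which is one way to read the reported gains in convergence speed and the tolerance for a smaller pretraining set.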
Related papers
- Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining [55.262510814326035]
Existing reweighting strategies primarily focus on group-level data importance.
We introduce novel algorithms for dynamic, instance-level data reweighting.
Our framework allows us to devise reweighting strategies that deprioritize redundant or uninformative data.
arXiv Detail & Related papers (2025-02-10T17:57:15Z)
- Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle the spurious correlation between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z)
- On Improving the Algorithm-, Model-, and Data-Efficiency of Self-Supervised Learning [18.318758111829386]
We propose an efficient single-branch SSL method based on non-parametric instance discrimination.
We also propose a novel self-distillation loss that minimizes the KL divergence between the probability distribution and its square-root version (see the sketch below).
arXiv Detail & Related papers (2024-04-30T06:39:04Z)
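
A rough sketch of the square-root self-distillation loss described in the entry above; the direction of the KL divergence and the renormalization of the square-root distribution are assumptions made here, not details taken from that paper.

```python
# Hedged sketch: KL divergence between a probability distribution p and a
# renormalized square-root version of p (assumed form; the paper may differ).
import torch
import torch.nn.functional as F


def sqrt_self_distillation_loss(logits: torch.Tensor) -> torch.Tensor:
    p = F.softmax(logits, dim=-1)                       # instance-discrimination probabilities
    q = p.sqrt()
    q = q / q.sum(dim=-1, keepdim=True)                 # renormalize sqrt(p) into a distribution
    return F.kl_div(q.log(), p, reduction="batchmean")  # KL(p || q)


loss = sqrt_self_distillation_loss(torch.randn(8, 128))  # e.g. 8 samples, 128 instance classes
```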
- Random Linear Projections Loss for Hyperplane-Based Optimization in Neural Networks [22.348887008547653]
This work introduces Random Linear Projections (RLP) loss, a novel approach that enhances training efficiency by leveraging geometric relationships within the data.
Our empirical evaluations, conducted across benchmark datasets and synthetic examples, demonstrate that neural networks trained with RLP loss outperform those trained with traditional loss functions.
arXiv Detail & Related papers (2023-11-21T05:22:39Z)
- Gradient constrained sharpness-aware prompt learning for vision-language models [99.74832984957025]
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs).
By analyzing the loss landscapes of the state-of-the-art method and a vanilla Sharpness-aware Minimization (SAM) based method, we conclude that the trade-off performance correlates with both loss value and loss sharpness.
We propose a novel SAM-based method for prompt learning, denoted Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp).
arXiv Detail & Related papers (2023-09-14T17:13:54Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for distributed training of large models.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and a gated sub-network from scratch in an SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels? [42.404871049605084]
We extend the contrastive reinforcement learning framework (e.g., CURL), which jointly optimizes SSL and RL losses.
Our observations suggest that the existing SSL framework for RL fails to bring meaningful improvement over the baselines.
We evaluate the approach in multiple different environments including a real-world robot environment.
arXiv Detail & Related papers (2022-06-10T17:59:30Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework built on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model sets a new state of the art across various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.