A Unified Framework for Contrastive Learning from a Perspective of
Affinity Matrix
- URL: http://arxiv.org/abs/2211.14516v1
- Date: Sat, 26 Nov 2022 08:55:30 GMT
- Title: A Unified Framework for Contrastive Learning from a Perspective of
Affinity Matrix
- Authors: Wenbin Li, Meihao Kong, Xuesong Yang, Lei Wang, Jing Huo, Yang Gao,
Jiebo Luo
- Abstract summary: We present a new unified contrastive learning representation framework (named UniCLR) suitable for all the above four kinds of methods.
Three variants, i.e., SimAffinity, SimWhitening and SimTrace, are presented based on UniCLR.
In addition, a simple symmetric loss, as a new consistency regularization term, is proposed based on this framework.
- Score: 80.2675125037624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, a variety of contrastive learning based unsupervised visual
representation learning methods have been designed and achieved great success
in many visual tasks. Generally, these methods can be roughly classified into
four categories: (1) standard contrastive methods with an InfoNCE-like loss,
such as MoCo and SimCLR; (2) non-contrastive methods with only positive pairs,
such as BYOL and SimSiam; (3) whitening regularization based methods, such as
W-MSE and VICReg; and (4) consistency regularization based methods, such as
CO2. In this study, we present a new unified contrastive learning
representation framework (named UniCLR) suitable for all the above four kinds
of methods from a novel perspective of basic affinity matrix. Moreover, three
variants, i.e., SimAffinity, SimWhitening and SimTrace, are presented based on
UniCLR. In addition, a simple symmetric loss, as a new consistency
regularization term, is proposed based on this framework. By symmetrizing the
affinity matrix, we can effectively accelerate the convergence of the training
process. Extensive experiments have been conducted to show that (1) the
proposed UniCLR framework achieves results on par with, or even better than,
the state of the art, (2) the proposed symmetric loss can
significantly accelerate the convergence of models, and (3) SimTrace can avoid
the mode collapse problem by maximizing the trace of a whitened affinity matrix
without relying on asymmetry designs or stop-gradients.
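The abstract's two core ideas, a symmetrized loss on the affinity matrix of two augmented views and a trace-based objective on a whitened affinity matrix, can be sketched as follows. This is a minimal illustrative reconstruction from the abstract alone, not the paper's actual implementation: the function names, the temperature `tau`, and the exact cross-entropy form are assumptions.

```python
import numpy as np

def affinity_matrix(z1, z2):
    """Affinity matrix of L2-normalized embeddings of two augmented views.

    Rows of z1/z2 are embeddings of the same images under different
    augmentations, so the diagonal holds the positive-pair similarities.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return z1 @ z2.T  # shape (N, N)

def symmetric_affinity_loss(z1, z2, tau=0.1):
    """Hypothetical sketch of a symmetrized InfoNCE-style loss.

    Averaging the row-wise cross-entropy over A and A.T treats both views
    symmetrically, which is the flavor of consistency regularization the
    abstract credits with faster convergence.
    """
    A = affinity_matrix(z1, z2) / tau

    def ce_diag(M):
        # Cross-entropy that pushes each row's diagonal (positive pair)
        # to dominate its off-diagonal (negative pair) entries.
        logsumexp = np.log(np.exp(M).sum(axis=1))
        return np.mean(logsumexp - np.diag(M))

    return 0.5 * (ce_diag(A) + ce_diag(A.T))
```

As a sanity check, the loss is lower when the two views' embeddings are aligned row-for-row than when the pairing is scrambled, since scrambling moves the high-similarity entries off the diagonal.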
Related papers
- Recursive Learning of Asymptotic Variational Objectives [49.69399307452126]
General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.
Online sequential IWAE (OSIWAE) allows for online learning of both model parameters and a Markovian recognition model for inferring latent states.
This approach is more theoretically well-founded than recently proposed online variational SMC methods.
arXiv Detail & Related papers (2024-11-04T16:12:37Z) - The Common Stability Mechanism behind most Self-Supervised Learning
Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanisms of contrastive techniques such as SimCLR and of non-contrastive techniques such as BYOL, SwAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the ImageNet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z) - Rethinking Graph Masked Autoencoders through Alignment and Uniformity [26.86368034133612]
Self-supervised learning on graphs can be bifurcated into contrastive and generative methods.
Recent advent of graph masked autoencoder (GraphMAE) rekindles momentum behind generative methods.
arXiv Detail & Related papers (2024-02-11T15:21:08Z) - A Theoretically Guaranteed Quaternion Weighted Schatten p-norm
Minimization Method for Color Image Restoration [11.47644299959152]
We propose a novel quaternion-based WSNM model (QWSNM) for tackling the color image restoration problems.
Extensive experiments on two representative CIR tasks, including color image denoising and deblurring, demonstrate that the proposed QWSNM method performs favorably against many state-of-the-art alternatives.
arXiv Detail & Related papers (2023-07-24T09:54:49Z) - Cross-Stream Contrastive Learning for Self-Supervised Skeleton-Based
Action Recognition [22.067143671631303]
Self-supervised skeleton-based action recognition enjoys a rapid growth along with the development of contrastive learning.
We propose a Cross-Stream Contrastive Learning framework for skeleton-based action Representation learning (CSCLR).
Specifically, the proposed CSCLR not only utilizes intra-stream contrast pairs, but introduces inter-stream contrast pairs as hard samples to formulate a better representation learning.
arXiv Detail & Related papers (2023-05-03T10:31:35Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
Method for unsupervised meta-learning, CACTUs, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - Contrastive and Non-Contrastive Self-Supervised Learning Recover Global
and Local Spectral Embedding Methods [19.587273175563745]
Self-Supervised Learning (SSL) surmises that inputs and pairwise positive relationships are enough to learn meaningful representations.
This paper proposes a unifying framework under the helm of spectral manifold learning to address those limitations.
arXiv Detail & Related papers (2022-05-23T17:59:32Z) - Refining Self-Supervised Learning in Imaging: Beyond Linear Metric [25.96406219707398]
We introduce in this paper a new statistical perspective, exploiting the Jaccard similarity metric, as a measure-based metric.
Specifically, our proposed metric may be interpreted as a dependence measure between two adapted projections learned from the so-called latent representations.
To the best of our knowledge, this non-linearly fused information embedded in the Jaccard similarity is novel to self-supervised learning, with promising results.
arXiv Detail & Related papers (2022-02-25T19:25:05Z) - Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation.
arXiv Detail & Related papers (2020-03-30T08:40:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.