Refining Self-Supervised Learning in Imaging: Beyond Linear Metric
- URL: http://arxiv.org/abs/2202.12921v1
- Date: Fri, 25 Feb 2022 19:25:05 GMT
- Title: Refining Self-Supervised Learning in Imaging: Beyond Linear Metric
- Authors: Bo Jiang, Hamid Krim, Tianfu Wu, Derya Cansever
- Abstract summary: We introduce in this paper a new statistical perspective, exploiting the Jaccard similarity metric as a measure-based metric.
Specifically, our proposed metric may be interpreted as a dependence measure between two adapted projections learned from the so-called latent representations.
To the best of our knowledge, this effectively non-linearly fused information embedded in the Jaccard similarity is novel to self-supervised learning, with promising results.
- Score: 25.96406219707398
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce in this paper a new statistical perspective, exploiting the Jaccard similarity metric as a measure-based metric to effectively invoke non-linear features in the loss of self-supervised contrastive learning. Specifically, our proposed metric may be interpreted as a dependence measure between two adapted projections learned from the so-called latent representations. This is in contrast to the cosine similarity measure in the conventional contrastive learning model, which accounts only for correlation information. To the best of our knowledge, this effectively non-linearly fused information embedded in the Jaccard similarity is novel to self-supervised learning, with promising results. The proposed approach is compared to two state-of-the-art self-supervised contrastive learning methods on three image datasets. We demonstrate not only its ready applicability to current ML problems, but also its improved performance and training efficiency.
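The abstract does not spell out the paper's exact Jaccard construction. As a rough illustration only, the sketch below contrasts the conventional cosine similarity with one widely used continuous relaxation of the Jaccard index (sometimes called the Ruzicka similarity); the function names and the ReLU non-negativity step are assumptions of this sketch, not the authors' method.

```python
import torch
import torch.nn.functional as F

def cosine_sim(z1, z2):
    # Conventional contrastive similarity: a normalized inner product,
    # so it captures only (linear) correlation between projections.
    return F.cosine_similarity(z1, z2, dim=-1)

def soft_jaccard_sim(z1, z2, eps=1e-8):
    # One common continuous relaxation of the Jaccard index
    # (the Ruzicka similarity) for non-negative feature vectors:
    #   J(a, b) = sum_i min(a_i, b_i) / sum_i max(a_i, b_i)
    # Assumption: projections are mapped to non-negative values first
    # (here via ReLU); the paper's exact construction may differ.
    a, b = F.relu(z1), F.relu(z2)
    intersection = torch.minimum(a, b).sum(dim=-1)
    union = torch.maximum(a, b).sum(dim=-1)
    return intersection / (union + eps)

# Toy usage: similarities between two batches of 128-d projections.
z1, z2 = torch.randn(4, 128), torch.randn(4, 128)
print(cosine_sim(z1, z2))        # values in [-1, 1]
print(soft_jaccard_sim(z1, z2))  # values in [0, 1]
```

Because the min/max ratio is not a bilinear form of the inputs, a measure of this kind can respond to feature overlap that a plain inner product misses, which is the intuition the abstract appeals to.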
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- From Pretext to Purpose: Batch-Adaptive Self-Supervised Learning [32.18543787821028]
This paper proposes an adaptive technique of batch fusion for self-supervised contrastive learning.
It achieves state-of-the-art performance under equitable comparisons.
We suggest that the proposed method may contribute to the advancement of data-driven self-supervised learning research.
arXiv Detail & Related papers (2023-11-16T15:47:49Z)
- Weak Augmentation Guided Relational Self-Supervised Learning [80.0680103295137]
We introduce a novel relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances.
Our proposed method employs a sharpened distribution of pairwise similarities among different instances as the relation metric.
Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures.
arXiv Detail & Related papers (2022-03-16T16:14:19Z)
- Integrating Contrastive Learning with Dynamic Models for Reinforcement Learning from Images [31.413588478694496]
We argue that explicitly improving Markovianity of the learned embedding is desirable.
We propose a self-supervised representation learning method which integrates contrastive learning with dynamic models.
arXiv Detail & Related papers (2022-03-02T14:39:17Z)
- Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z)
- ReSSL: Relational Self-Supervised Learning with Weak Augmentation [68.47096022526927]
Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
arXiv Detail & Related papers (2021-07-20T06:53:07Z)
- Learning from Similarity-Confidence Data [94.94650350944377]
We investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data.
We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate.
arXiv Detail & Related papers (2021-02-13T07:31:16Z)
- Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based InfoNCE (Info Noise Contrastive Estimation) training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss (a minimal InfoNCE sketch appears after this list).
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
arXiv Detail & Related papers (2020-10-19T17:59:01Z)
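Several entries above, including the co-training paper, build on the InfoNCE loss that also underlies the main paper's contrastive baseline. For context, here is a minimal, generic sketch of instance-based InfoNCE over a batch of paired views; it is a textbook formulation, not any of these papers' specific implementations, and the temperature value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    # Minimal InfoNCE over a batch: each z1[i] should match z2[i]
    # (its positive) against all other z2[j] in the batch (negatives).
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature  # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with two augmented views of the same 8-image batch.
v1, v2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(v1, v2))
```

Proposals such as semantic-class positives or a Jaccard-based similarity can be read as modifications to which pairs count as positives, or to the similarity feeding the logits, in this template.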