Learning the Relation between Similarity Loss and Clustering Loss in
Self-Supervised Learning
- URL: http://arxiv.org/abs/2301.03041v2
- Date: Mon, 5 Jun 2023 11:58:22 GMT
- Title: Learning the Relation between Similarity Loss and Clustering Loss in
Self-Supervised Learning
- Authors: Jidong Ge, Yuxiang Liu, Jie Gui, Lanting Fang, Ming Lin, James Tin-Yau
Kwok, LiGuo Huang, Bin Luo
- Abstract summary: Self-supervised learning enables networks to learn discriminative features from large amounts of unlabeled data.
We analyze the relation between similarity loss and feature-level cross-entropy loss.
We provide theoretical analyses and experiments to show that a suitable combination of these two losses can achieve state-of-the-art results.
- Score: 27.91000553525992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning enables networks to learn discriminative features
from large amounts of unlabeled data. Most state-of-the-art methods are based on
contrastive learning and maximize the similarity between two augmentations of the
same image. By exploiting the consistency between the two augmentations, the burden
of manual annotation is lifted. Contrastive learning exploits instance-level
information to learn robust features. However, the learned information is probably
confined to different views of the same instance. In this paper, we attempt to
leverage the similarity between two distinct images to boost representation learning
in the self-supervised setting. In contrast to instance-level information, the
similarity between two distinct images may provide more useful information. Besides,
we analyze the relation between similarity loss and feature-level cross-entropy
loss. These two losses are essential for most deep learning methods. However, the
relation between them is not clear. Similarity loss helps obtain instance-level
representations, while feature-level cross-entropy loss helps mine the similarity
between two distinct images. We provide theoretical analyses and experiments to show
that a suitable combination of these two losses can achieve state-of-the-art
results. Code is available at https://github.com/guijiejie/ICCL.
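Since the abstract turns on how these two losses fit together, a minimal sketch of one plausible combination may help. The module names (`backbone`, `projector`, `cluster_head`) and the weight `lambda_cluster` are illustrative assumptions, not the paper's actual formulation; the real implementation is in the linked repository.

```python
import torch
import torch.nn.functional as F

def similarity_loss(z1, z2):
    # Instance-level loss: maximize the cosine similarity between the
    # embeddings of two augmentations of the same image.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    return -(z1 * z2).sum(dim=1).mean()

def feature_level_cross_entropy(p1, p2):
    # Clustering-style loss: treat each row of the (batch x clusters)
    # logits as a soft cluster assignment and match the assignments
    # of the two views via cross-entropy.
    q1 = F.softmax(p1, dim=1)
    q2 = F.softmax(p2, dim=1)
    return -(q1 * torch.log(q2 + 1e-8)).sum(dim=1).mean()

def total_loss(backbone, projector, cluster_head, x1, x2, lambda_cluster=1.0):
    # x1, x2: two augmentations of the same batch of images.
    h1, h2 = backbone(x1), backbone(x2)
    sim = similarity_loss(projector(h1), projector(h2))
    clu = 0.5 * (feature_level_cross_entropy(cluster_head(h1), cluster_head(h2))
                 + feature_level_cross_entropy(cluster_head(h2), cluster_head(h1)))
    return sim + lambda_cluster * clu
```

The intuition behind the combination, per the abstract: the first term keeps augmentations of one image close (instance-level), while the second lets two distinct images that fall into the same soft cluster share supervision.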
Related papers
- Towards Multi-view Graph Anomaly Detection with Similarity-Guided Contrastive Clustering [35.1801853090859] (2024-09-15)
Anomaly detection on graphs plays an important role in many real-world applications.
We propose an autoencoder-based clustering framework regularized by a similarity-guided contrastive loss to detect anomalous nodes.
- LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations [4.680881326162484] (2024-03-11)
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection.
A common augmentation technique in contrastive learning is random cropping followed by resizing.
We introduce LeOCLR, a framework that employs a novel instance discrimination approach and an adapted loss function.
- Do Different Deep Metric Learning Losses Lead to Similar Learned Features? [4.043200001974071] (2022-05-05)
We compare 14 pretrained models from a recent study and find that, even though all models perform similarly, different loss functions can guide the model to learn different features.
Our analysis also shows that some seemingly irrelevant properties can have significant influence on the resulting embedding.
- Attributable Visual Similarity Learning [90.69718495533144] (2022-03-28)
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by human semantic similarity cognition, we propose a generalized similarity learning paradigm that represents the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
- Dissecting the impact of different loss functions with gradient surgery [7.001832294837659] (2022-01-27)
Pair-wise loss is an approach to metric learning that learns a semantic embedding by optimizing a loss function defined on pairs of examples.
Here we decompose the gradient of these loss functions into components that describe how they push the relative feature positions of the anchor-positive and anchor-negative pairs (see the sketch after this list).
- Contrastive Feature Loss for Image Prediction [55.373404869092866] (2021-11-12)
Training supervised image synthesis models requires a critic to compare two images: the ground truth and the synthesized result.
We introduce an information theory based approach to measuring similarity between two images.
We show that our formulation boosts the perceptual realism of output images when used as a drop-in replacement for the L1 loss.
- Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971] (2020-07-23)
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms state-of-the-art methods, with larger gains when training data is scarce.
- Unsupervised Landmark Learning from Unpaired Data [117.81440795184587] (2020-06-29)
Recent attempts for unsupervised landmark learning leverage synthesized image pairs that are similar in appearance but different in poses.
We propose a cross-image cycle consistency framework which applies the swapping-reconstruction strategy twice to obtain the final supervision.
Our proposed framework is shown to outperform strong baselines by a large margin.
- Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898] (2020-06-11)
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning can solve binary classification by directly eliciting a decision boundary (see the toy example after this list).
- Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175] (2020-02-29)
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
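For the gradient-surgery entry above, here is a minimal sketch of the kind of decomposition it describes, worked out for a generic triplet loss (a standard loss assumed here purely for illustration; the variable names are hypothetical and this is not necessarily that paper's exact setup):

```python
import torch

anchor = torch.randn(8, 128, requires_grad=True)
positive, negative = torch.randn(8, 128), torch.randn(8, 128)

d_ap = (anchor - positive).pow(2).sum(dim=1)   # anchor-positive distances
d_an = (anchor - negative).pow(2).sum(dim=1)   # anchor-negative distances
margin = 0.2
loss = torch.relu(d_ap - d_an + margin).mean() # triplet loss

# The gradient w.r.t. the anchor splits into two opposing components.
grad = torch.autograd.grad(loss, anchor)[0]
active = ((d_ap - d_an + margin) > 0).float().unsqueeze(1) / anchor.shape[0]
pull = active * 2 * (anchor - positive)   # descent moves anchor toward the positive
push = active * -2 * (anchor - negative)  # descent moves anchor away from the negative
assert torch.allclose(grad, pull + push, atol=1e-6)
```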
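For the pairwise-supervision entry, a toy demonstration of the claim that similar/dissimilar pair labels alone can elicit a binary decision boundary. The two-Gaussian data and the logistic pair model are generic assumptions for illustration, not that paper's construction:

```python
import torch

torch.manual_seed(0)
n = 200
x = torch.cat([torch.randn(n, 2) + 2, torch.randn(n, 2) - 2])  # two separable blobs
y = torch.cat([torch.ones(n), torch.zeros(n)])                 # hidden class labels

# Pairwise supervision only: s = 1 iff the two points share a class.
i = torch.randint(0, 2 * n, (5000,))
j = torch.randint(0, 2 * n, (5000,))
s = (y[i] == y[j]).float()

w = torch.randn(2, requires_grad=True)  # random init avoids the symmetric saddle
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.1)
for _ in range(200):
    p = torch.sigmoid(x @ w + b)                    # P(class 1) per point
    p_same = p[i] * p[j] + (1 - p[i]) * (1 - p[j])  # P(pair is similar)
    loss = torch.nn.functional.binary_cross_entropy(p_same, s)
    opt.zero_grad(); loss.backward(); opt.step()

# The sign of the learned score is a decision boundary, up to a global label flip.
pred = (x @ w + b > 0).float()
acc = max(((pred == y).float().mean()).item(), ((pred != y).float().mean()).item())
print(f"accuracy up to label flip: {acc:.2f}")
```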
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.