Contrastive Bayesian Analysis for Deep Metric Learning
- URL: http://arxiv.org/abs/2210.04402v1
- Date: Mon, 10 Oct 2022 02:24:21 GMT
- Title: Contrastive Bayesian Analysis for Deep Metric Learning
- Authors: Shichao Kan, Zhiquan He, Yigang Cen, Yang Li, Mladenovic Vladimir,
Zhihai He
- Abstract summary: We develop a contrastive Bayesian analysis to characterize and model the posterior probabilities of image labels conditioned on their feature similarity.
This contrastive Bayesian analysis leads to a new loss function for deep metric learning.
Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning.
- Score: 30.21464199249958
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent methods for deep metric learning have been focusing on designing
different contrastive loss functions between positive and negative pairs of
samples so that the learned feature embedding is able to pull positive samples
of the same class closer and push negative samples from different classes away
from each other. In this work, we recognize that there is a significant
semantic gap between features at the intermediate feature layer and class
labels at the final output layer. To bridge this gap, we develop a contrastive
Bayesian analysis to characterize and model the posterior probabilities of
image labels conditioned on their feature similarity in a contrastive learning
setting. This contrastive Bayesian analysis leads to a new loss function for
deep metric learning. To improve the generalization capability of the proposed
method onto new classes, we further extend the contrastive Bayesian loss with a
metric variance constraint. Our experimental results and ablation studies
demonstrate that the proposed contrastive Bayesian metric learning method
significantly improves the performance of deep metric learning in both
supervised and pseudo-supervised scenarios, outperforming existing methods by a
large margin.
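The abstract builds on the standard contrastive setup: pull same-class pairs closer in embedding space, push different-class pairs apart. A minimal pure-Python sketch of that generic pairwise contrastive loss follows; this is an illustration of the baseline setup, not the paper's Bayesian loss, and the cosine similarity and margin value are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_pair_loss(f1, f2, same_class, margin=0.5):
    """Generic pairwise contrastive loss (illustrative baseline only):
    positives are penalized for low similarity, negatives for
    similarity exceeding the margin."""
    s = cosine_similarity(f1, f2)
    if same_class:
        return 1.0 - s               # pull positives together
    return max(0.0, s - margin)      # push negatives beyond the margin
```

The paper's contribution, per the abstract, is to replace this kind of hand-designed pairwise penalty with a loss derived from the posterior probability of labels given feature similarity.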
Related papers
- Similarity-Dissimilarity Loss with Supervised Contrastive Learning for Multi-label Classification [11.499489446062054]
We propose a Similarity-Dissimilarity Loss with contrastive learning for multi-label classification.
Our proposed loss effectively improves the performance of all encoders under the supervised contrastive learning paradigm.
arXiv Detail & Related papers (2024-10-17T11:12:55Z) - Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning [42.14439854721613]
We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL) tailored specifically for class-incremental learning scenarios.
Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions with a Bayesian learning technique.
arXiv Detail & Related papers (2024-05-17T19:49:02Z) - KDMCSE: Knowledge Distillation Multimodal Sentence Embeddings with Adaptive Angular margin Contrastive Learning [31.139620652818838]
We propose KDMCSE, a novel approach that enhances the discrimination and generalizability of multimodal representation.
We also introduce a new contrastive objective, AdapACSE, that enhances the discriminative representation by strengthening the margin within the angular space.
arXiv Detail & Related papers (2024-03-26T08:32:39Z) - Identical and Fraternal Twins: Fine-Grained Semantic Contrastive
Learning of Sentence Representations [6.265789210037749]
We introduce a novel Identical and Fraternal Twins of Contrastive Learning framework, capable of simultaneously adapting to various positive pairs generated by different augmentation techniques.
We also present proof-of-concept experiments combined with the contrastive objective to demonstrate the validity of the proposed Twins Loss.
arXiv Detail & Related papers (2023-07-20T15:02:42Z) - Understanding Contrastive Learning Requires Incorporating Inductive
Biases [64.56006519908213]
Recent attempts to theoretically explain the success of contrastive learning on downstream tasks prove guarantees that depend on properties of augmentations and the value of the contrastive loss of representations.
We demonstrate that such analyses ignore inductive biases of the function class and training algorithm, provably leading to vacuous guarantees in some settings.
arXiv Detail & Related papers (2022-02-28T18:59:20Z) - Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly
Supervised Semantic Segmentation [48.294903659573585]
In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-08-03T07:48:33Z) - Provable Guarantees for Self-Supervised Deep Learning with Spectral
Contrastive Loss [72.62029620566925]
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm.
Our work analyzes contrastive learning without assuming conditional independence of positive pairs.
We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective.
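A commonly cited form of such a spectral contrastive objective, writing $f$ for the encoder, $(x, x^{+})$ for an augmentation pair, and $x, x'$ for independently drawn samples, is the following (a sketch from the general literature, not verified against this paper's exact notation):

```latex
\mathcal{L}(f) \;=\; -2\,\mathbb{E}_{x,\,x^{+}}\!\left[ f(x)^{\top} f(x^{+}) \right]
\;+\; \mathbb{E}_{x,\,x'}\!\left[ \bigl( f(x)^{\top} f(x') \bigr)^{2} \right]
```

The first term rewards agreement on augmentations of the same underlying sample; the second term decorrelates independent samples, which is what connects the objective to spectral decomposition of the population augmentation graph.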
arXiv Detail & Related papers (2021-06-08T07:41:02Z) - Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks with a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z) - Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z) - Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
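Several entries above (DCDC's sample view in particular) rest on a softmax-style contrastive objective: an anchor is scored against one positive and a set of negatives. A minimal pure-Python sketch of that InfoNCE-style loss follows; the function name, inputs, and temperature are illustrative assumptions, not any single paper's exact formulation.

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.5):
    """Softmax (InfoNCE-style) contrastive loss over similarity scores.
    In a DCDC-like sample view, `anchor` and `positive` would be the class
    distributions of a sample and its augmented version; `negatives` are
    the other samples in the mini-batch. Illustrative sketch only."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    pos = math.exp(dot(anchor, positive) / temperature)
    neg = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    # Negative log-probability of picking the positive among all candidates.
    return -math.log(pos / (pos + neg))
```

The loss is small when the anchor is more similar to its positive than to every negative, and grows as negatives crowd the positive, which is the shared mechanism behind most of the contrastive objectives listed above.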
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.