Ranking Info Noise Contrastive Estimation: Boosting Contrastive Learning
via Ranked Positives
- URL: http://arxiv.org/abs/2201.11736v1
- Date: Thu, 27 Jan 2022 18:55:32 GMT
- Title: Ranking Info Noise Contrastive Estimation: Boosting Contrastive Learning
via Ranked Positives
- Authors: David T. Hoffmann, Nadine Behrmann, Juergen Gall, Thomas Brox, Mehdi
Noroozi
- Abstract summary: RINCE can exploit information about a similarity ranking for learning a corresponding embedding space.
We show that RINCE learns favorable embeddings compared to the standard InfoNCE whenever at least noisy ranking information can be obtained.
- Score: 44.962289510218646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces Ranking Info Noise Contrastive Estimation (RINCE), a
new member in the family of InfoNCE losses that preserves a ranked ordering of
positive samples. In contrast to the standard InfoNCE loss, which requires a
strict binary separation of the training pairs into similar and dissimilar
samples, RINCE can exploit information about a similarity ranking for learning
a corresponding embedding space. We show that the proposed loss function learns
favorable embeddings compared to the standard InfoNCE whenever at least noisy
ranking information can be obtained or when the definition of positives and
negatives is blurry. We demonstrate this for a supervised classification task
with additional superclass labels and noisy similarity scores. Furthermore, we
show that RINCE can also be applied to unsupervised training with experiments
on unsupervised representation learning from videos. In particular, the
embedding yields higher classification accuracy and retrieval rates, and performs
better in out-of-distribution detection than the standard InfoNCE loss.
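The abstract describes RINCE as an InfoNCE-style loss that preserves a ranked ordering of positives rather than a binary positive/negative split. Below is a minimal PyTorch sketch of that idea, not the authors' reference implementation: the names `info_nce` and `ranked_info_nce`, the rank convention (rank 0 = most similar positives, rank -1 = negatives), the single shared temperature `tau`, and summing all rank-r positives in the numerator are illustrative assumptions; the paper's exact formulation may differ (e.g., per-rank temperatures or a different treatment of the numerator).
```python
import torch


def info_nce(sim: torch.Tensor, pos_mask: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE with a binary positive/negative split.

    `sim` holds similarities between one anchor and all candidates;
    `pos_mask` marks which candidates count as positives.
    """
    logits = sim / tau
    log_denominator = torch.logsumexp(logits, dim=-1)
    log_numerator = torch.logsumexp(logits.masked_fill(~pos_mask, float("-inf")), dim=-1)
    return (log_denominator - log_numerator).mean()


def ranked_info_nce(sim: torch.Tensor, ranks: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Ranked variant in the spirit of RINCE (sketch): one InfoNCE-style term
    per rank level, treating the rank-r candidates as positives and everything
    ranked strictly lower (plus the negatives) as the contrastive set.
    """
    losses = []
    for r in range(int(ranks.max().item()) + 1):
        positives = ranks == r
        # keep rank >= r and the negatives; drop candidates more similar than rank r
        keep = (ranks >= r) | (ranks == -1)
        if positives.any():
            masked_sim = sim.masked_fill(~keep, float("-inf"))
            losses.append(info_nce(masked_sim, positives, tau))
    return torch.stack(losses).mean()


# Toy example: 2 strong positives (rank 0), 2 weaker positives (rank 1), 4 negatives.
sim = torch.tensor([0.9, 0.8, 0.6, 0.5, 0.1, 0.0, -0.2, -0.3])
ranks = torch.tensor([0, 0, 1, 1, -1, -1, -1, -1])
print(ranked_info_nce(sim, ranks))
```
In this sketch the per-rank terms mean that strong positives must outscore both weaker positives and negatives, while weaker positives only need to outscore the negatives, which is one way to encode a similarity ranking in the embedding space.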
Related papers
- Rank Supervised Contrastive Learning for Time Series Classification [17.302643963704643]
We present Rank Supervised Contrastive Learning (RankSCL) to perform time series classification.
RankSCL augments raw data in a targeted way in the embedding space.
A novel rank loss is developed to assign different weights to different levels of positive samples.
arXiv Detail & Related papers (2024-01-31T18:29:10Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - SINCERE: Supervised Information Noise-Contrastive Estimation REvisited [5.004880836963827]
Previous work suggests a supervised contrastive (SupCon) loss to extend InfoNCE to learn from available class labels.
We propose the Supervised InfoNCE REvisited (SINCERE) loss as a theoretically-justified supervised extension of InfoNCE.
Experiments show that SINCERE leads to better separation of embeddings from different classes and improves transfer learning classification accuracy.
arXiv Detail & Related papers (2023-09-25T16:40:56Z) - InfoNCE Loss Provably Learns Cluster-Preserving Representations [54.28112623495274]
Our main result shows that the representation learned by InfoNCE with a finite number of negative samples is consistent with respect to clusters in the data.
arXiv Detail & Related papers (2023-02-15T19:45:35Z) - Mutual Information Learned Classifiers: an Information-theoretic
Viewpoint of Training Deep Learning Classification Systems [9.660129425150926]
Cross entropy loss can easily lead us to models that exhibit severe overfitting.
In this paper, we prove that the existing cross entropy loss minimization for training DNN classifiers essentially learns the conditional entropy of the underlying data distribution.
We propose a mutual information learning framework where we train DNN classifiers via learning the mutual information between the label and input.
arXiv Detail & Related papers (2022-10-03T15:09:19Z) - Incorporating Semi-Supervised and Positive-Unlabeled Learning for
Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it attractive to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we propose incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.