A Simple Framework for Uncertainty in Contrastive Learning
- URL: http://arxiv.org/abs/2010.02038v1
- Date: Mon, 5 Oct 2020 14:17:42 GMT
- Title: A Simple Framework for Uncertainty in Contrastive Learning
- Authors: Mike Wu, Noah Goodman
- Abstract summary: We introduce a simple approach that learns to assign uncertainty for pretrained contrastive representations.
We train a deep network from a representation to a distribution in representation space, whose variance can be used as a measure of confidence.
In our experiments, we show that this deep uncertainty model can be used (1) to visually interpret model behavior, (2) to detect new noise in the input to deployed models, (3) to detect anomalies, where we outperform 10 baseline methods across 11 tasks with improvements of up to 14% absolute.
- Score: 11.64841553345271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive approaches to representation learning have recently shown great
promise. In contrast to generative approaches, these contrastive models learn a
deterministic encoder with no notion of uncertainty or confidence. In this
paper, we introduce a simple approach based on "contrasting distributions" that
learns to assign uncertainty for pretrained contrastive representations. In
particular, we train a deep network from a representation to a distribution in
representation space, whose variance can be used as a measure of confidence. In
our experiments, we show that this deep uncertainty model can be used (1) to
visually interpret model behavior, (2) to detect new noise in the input to
deployed models, (3) to detect anomalies, where we outperform 10 baseline
methods across 11 tasks with improvements of up to 14% absolute, and (4) to
classify out-of-distribution examples where our fully unsupervised model is
competitive with supervised methods.
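The sketch below illustrates the high-level recipe described in the abstract: freeze a pretrained contrastive encoder and train a small network that maps each embedding to a Gaussian in the same representation space, reading the predicted variance as an (inverse) confidence. This is a minimal illustration, not the authors' implementation; the `UncertaintyHead` module, the plain Gaussian negative log-likelihood objective, and all dimensions are assumptions made here for concreteness, and the paper's specific "contrasting distributions" objective is not reproduced.
```python
import torch
import torch.nn as nn

class UncertaintyHead(nn.Module):
    """Maps a pretrained contrastive embedding z to a diagonal Gaussian
    N(mu(z), exp(log_var(z))) in the same representation space. The size of
    the predicted variance is read off as an (inverse) confidence score."""

    def __init__(self, embed_dim=128, hidden_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden_dim, embed_dim)
        self.log_var = nn.Linear(hidden_dim, embed_dim)

    def forward(self, z):
        h = self.backbone(z)
        return self.mu(h), self.log_var(h)

def gaussian_nll(mu, log_var, target):
    # Negative log-likelihood (up to a constant) of `target` under
    # N(mu, exp(log_var)); the variance can grow on inputs the model finds hard.
    return 0.5 * (log_var + (target - mu) ** 2 / log_var.exp()).sum(dim=-1).mean()

def uncertainty_score(head, z):
    # Larger mean predicted variance => lower confidence in the representation.
    with torch.no_grad():
        _, log_var = head(z)
    return log_var.exp().mean(dim=-1)
```
In a full pipeline one would compute embeddings with the frozen contrastive encoder, fit the head on training embeddings (for example by minimizing `gaussian_nll` against the embedding of another view of the same input), and then flag inputs with a large `uncertainty_score` as noisy or anomalous.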
Related papers
- An Ambiguity Measure for Recognizing the Unknowns in Deep Learning [0.0]
We study the understanding of deep neural networks within the scope of the data on which they are trained.
We propose a measure for quantifying the ambiguity of inputs for any given model.
arXiv Detail & Related papers (2023-12-11T02:57:12Z) - Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z) - Uncertainty in Contrastive Learning: On the Predictability of Downstream Performance [7.411571833582691]
We study whether the uncertainty of such a representation can be quantified for a single datapoint in a meaningful way.
We show that this goal can be achieved by directly estimating the distribution of the training data in the embedding space (a minimal illustrative sketch follows this list).
arXiv Detail & Related papers (2022-07-19T15:44:59Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations (see the sketch after this list).
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all samples belonging to the same instance class lie in the same low-dimensional subspace.
Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z) - An Effective Baseline for Robustness to Distributional Shift [5.627346969563955]
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems.
We present a simple but highly effective approach to out-of-distribution detection that uses the principle of abstention.
arXiv Detail & Related papers (2021-05-15T00:46:11Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z) - Binary Classification from Positive Data with Skewed Confidence [85.18941440826309]
Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in an annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter.
arXiv Detail & Related papers (2020-01-29T00:04:36Z)
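For the "Uncertainty in Contrastive Learning: On the Predictability of Downstream Performance" entry above, the sketch below illustrates one generic way to estimate the distribution of training data in the embedding space and turn it into a per-datapoint uncertainty; the Gaussian-mixture choice, component count, and function names are assumptions for illustration rather than that paper's exact procedure.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_embedding_density(train_embeddings, n_components=10, seed=0):
    """Fit a Gaussian mixture to the embeddings of the training set."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    gmm.fit(train_embeddings)
    return gmm

def embedding_uncertainty(gmm, embeddings):
    """Negative log-density under the fitted mixture: higher values indicate
    embeddings that lie far from the training distribution."""
    return -gmm.score_samples(embeddings)

# Usage (illustrative shapes): z_train, z_test are (N, d) numpy arrays of
# contrastive embeddings.
# gmm = fit_embedding_density(z_train)
# scores = embedding_uncertainty(gmm, z_test)
```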
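For the "Contrastive Learning for Fair Representations" entry, the sketch below shows a standard supervised contrastive objective in which instances sharing a class label are pulled together; it is a generic formulation assumed here for illustration, not necessarily the loss used in that paper.
```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Encourage instances with the same class label to have similar
    representations (illustrative supervised-contrastive objective)."""
    z = F.normalize(embeddings, dim=1)                  # (N, d) unit vectors
    sim = z @ z.t() / temperature                       # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # Log-softmax over all other samples for each anchor.
    sim = sim.masked_fill(self_mask, -1e9)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of same-label positives, per anchor.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return per_anchor[pos_mask.any(dim=1)].mean()
```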