Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval
- URL: http://arxiv.org/abs/2011.12663v3
- Date: Fri, 17 Sep 2021 17:26:20 GMT
- Title: Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval
- Authors: Frederik Warburg, Martin Jørgensen, Javier Civera, Søren Hauberg
- Abstract summary: Uncertainty quantification in image retrieval is crucial for downstream decisions.
We present a new method that views image embeddings as stochastic features rather than deterministic features.
We derive a variational approximation of the posterior, called the Bayesian triplet loss, that produces state-of-the-art uncertainty estimates.
- Score: 10.743633102172236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification in image retrieval is crucial for downstream
decisions, yet it remains a challenging and largely unexplored problem. Current
methods for estimating uncertainties are poorly calibrated, computationally
expensive, or based on heuristics. We present a new method that views image
embeddings as stochastic features rather than deterministic features. Our two
main contributions are (1) a likelihood that matches the triplet constraint and
that evaluates the probability of an anchor being closer to a positive than a
negative; and (2) a prior over the feature space that justifies the
conventional l2 normalization. To ensure computational efficiency, we derive a
variational approximation of the posterior, called the Bayesian triplet loss,
that produces state-of-the-art uncertainty estimates and matches the predictive
performance of current state-of-the-art methods.
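The core quantity in the abstract is the probability that an anchor is closer to a positive than to a negative when embeddings are stochastic. The paper derives a closed-form variational approximation of this; as an illustration only, the same probability can be estimated by Monte Carlo for diagonal-Gaussian embeddings (the function name and parameterization below are assumptions, not the paper's implementation):

```python
import numpy as np

def triplet_probability(mu_a, var_a, mu_p, var_p, mu_n, var_n,
                        margin=0.0, n_samples=10_000, seed=0):
    """Monte Carlo estimate of P(||A-P||^2 + margin < ||A-N||^2)
    for diagonal-Gaussian embeddings A, P, N (illustrative sketch;
    the paper uses a closed-form variational approximation instead)."""
    rng = np.random.default_rng(seed)
    a = mu_a + np.sqrt(var_a) * rng.standard_normal((n_samples, len(mu_a)))
    p = mu_p + np.sqrt(var_p) * rng.standard_normal((n_samples, len(mu_p)))
    n = mu_n + np.sqrt(var_n) * rng.standard_normal((n_samples, len(mu_n)))
    d_pos = ((a - p) ** 2).sum(axis=1)  # squared anchor-positive distances
    d_neg = ((a - n) ** 2).sum(axis=1)  # squared anchor-negative distances
    return float((d_pos + margin < d_neg).mean())
```

When the positive's mean coincides with the anchor's and the negative is far away, the estimate approaches 1; swapping positive and negative drives it toward 0.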
Related papers
- Model-Agnostic Covariate-Assisted Inference on Partially Identified Causal Effects [1.9253333342733674]
Many causal estimands are only partially identifiable since they depend on the unobservable joint distribution between potential outcomes.
We propose a unified and model-agnostic inferential approach for a wide class of partially identified estimands.
arXiv Detail & Related papers (2023-10-12T08:17:30Z)
- Understanding Uncertainty Sampling [7.32527270949303]
Uncertainty sampling is a prevalent active learning algorithm that queries sequentially the annotations of data samples.
We propose a notion of equivalent loss which depends on the used uncertainty measure and the original loss function.
We provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings.
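Uncertainty sampling, as summarized above, sequentially queries the samples the model is least sure about. A common concrete uncertainty measure is predictive entropy; a minimal sketch under that choice (entropy is only one of several measures the paper analyzes, and the helper names are hypothetical):

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def query_most_uncertain(probs, k=1):
    """Indices of the k samples with the highest predictive entropy,
    i.e. the ones an uncertainty-sampling strategy would label next."""
    return np.argsort(-predictive_entropy(probs))[:k]
```

A near-uniform prediction has maximal entropy, so it is queried before confident ones.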
arXiv Detail & Related papers (2023-07-06T01:57:37Z)
- Principal Uncertainty Quantification with Spatial Correlation for Image Restoration Problems [35.46703074728443]
PUQ -- Principal Uncertainty Quantification -- is a novel definition and corresponding analysis of uncertainty regions.
We derive uncertainty intervals around principal components of the empirical posterior distribution, forming an ambiguity region.
Our approach is verified through experiments on image colorization, super-resolution, and inpainting.
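One way to read the PUQ idea: draw samples from the (approximate) posterior over restored images, find the principal components of their spread, and report intervals along those components. A rough numpy sketch under that reading (the function name and the plain empirical-quantile intervals are assumptions, not the paper's exact construction):

```python
import numpy as np

def principal_uncertainty_intervals(samples, n_components=2, alpha=0.05):
    """samples: (n, d) posterior draws over a restored image (flattened).
    Projects the centered samples onto their top principal components and
    returns per-component empirical (1 - alpha) intervals around the mean."""
    X = np.asarray(samples, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions of the empirical posterior covariance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:n_components]            # (k, d) principal directions
    proj = Xc @ comps.T                  # (n, k) per-sample coefficients
    lo = np.quantile(proj, alpha / 2, axis=0)
    hi = np.quantile(proj, 1 - alpha / 2, axis=0)
    return mean, comps, lo, hi
```

The intervals along the leading components span the directions of greatest posterior ambiguity, forming the "ambiguity region" mentioned above.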
arXiv Detail & Related papers (2023-05-17T11:08:13Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization [73.04187954213471]
We introduce a unified learning approach that simultaneously models coarse- and fine-grained retrieval.
The proposed method has achieved +4.03%, +3.38%, and +2.40% Recall@50 accuracy over a strong baseline.
arXiv Detail & Related papers (2022-11-14T14:25:40Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
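The quantity being compared against cross-validation above is the Gaussian process log marginal likelihood. For reference, a self-contained numpy sketch for a zero-mean GP with an RBF kernel (the hyperparameter names are illustrative, not the paper's notation):

```python
import numpy as np

def gp_log_marginal_likelihood(X, y, lengthscale=1.0, signal_var=1.0,
                               noise_var=0.1):
    """Log marginal likelihood log p(y | X) of a zero-mean GP with an
    RBF kernel, computed via a Cholesky factorization of the kernel."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = signal_var * np.exp(-0.5 * sq / lengthscale ** 2)
    K += noise_var * np.eye(len(y))          # observation noise on diagonal
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return float(-0.5 * y @ alpha
                 - np.log(np.diag(L)).sum()              # -0.5 log|K|
                 - 0.5 * len(y) * np.log(2 * np.pi))
```

For a single observation y = 0 at X = 0 with these defaults, the value reduces to -0.5 log(1.1) - 0.5 log(2π) ≈ -0.9666.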
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Uncertainty-Aware Few-Shot Image Classification [118.72423376789062]
Few-shot image classification learns to recognize new categories from limited labelled data.
We propose Uncertainty-Aware Few-Shot framework for image classification.
arXiv Detail & Related papers (2020-10-09T12:26:27Z)
- Towards Better Performance and More Explainable Uncertainty for 3D Object Detection of Autonomous Vehicles [33.0319422469465]
We propose a novel form of the loss function to increase the performance of LiDAR-based 3d object detection.
With the new loss function, the performance of our method on the val split of KITTI dataset shows up to a 15% increase in terms of Average Precision.
arXiv Detail & Related papers (2020-06-22T05:49:58Z)
- A deep-learning based Bayesian approach to seismic imaging and uncertainty quantification [0.4588028371034407]
Uncertainty is essential when dealing with ill-conditioned inverse problems.
It is often not possible to formulate a prior distribution that precisely encodes our prior knowledge about the unknown.
We propose to use the functional form of a randomly initialized convolutional neural network as an implicit structured prior.
arXiv Detail & Related papers (2020-01-13T23:46:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.