Reliability-Aware Prediction via Uncertainty Learning for Person Image
Retrieval
- URL: http://arxiv.org/abs/2210.13440v1
- Date: Mon, 24 Oct 2022 17:53:20 GMT
- Title: Reliability-Aware Prediction via Uncertainty Learning for Person Image
Retrieval
- Authors: Zhaopeng Dou, Zhongdao Wang, Weihua Chen, Yali Li, and Shengjin Wang
- Abstract summary: UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
- Score: 51.83967175585896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current person image retrieval methods have achieved great improvements in
accuracy metrics. However, they rarely describe the reliability of the
prediction. In this paper, we propose an Uncertainty-Aware Learning (UAL)
method to remedy this issue. UAL aims at providing reliability-aware
predictions by considering data uncertainty and model uncertainty
simultaneously. Data uncertainty captures the "noise" inherent in the sample,
while model uncertainty depicts the model's confidence in the sample's
prediction. Specifically, in UAL, (1) we propose a sampling-free data
uncertainty learning method to adaptively assign weights to different samples
during training, down-weighting the low-quality, ambiguous samples. (2) We
leverage the Bayesian framework to model the model uncertainty by assuming the
parameters of the network follow a Bernoulli distribution. (3) The data
uncertainty and the model uncertainty are jointly learned in a unified network,
and they serve as two fundamental criteria for the reliability assessment: if a
probe is high-quality (low data uncertainty) and the model is confident in the
prediction of the probe (low model uncertainty), the final ranking will be
assessed as reliable. Experiments under the risk-controlled settings and the
multi-query settings show the proposed reliability assessment is effective. Our
method also shows superior performance on three challenging benchmarks under
the vanilla single query settings.
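The two reliability criteria above can be sketched in code. This is a minimal illustration, not the paper's implementation: the concrete weighting form (a learned log-variance that down-weights the error term, in the style of Kendall and Gal) and the toy linear model are assumptions, and `mc_dropout_predict` only mimics the Bernoulli treatment of network parameters via Monte Carlo dropout.

```python
import math
import random

def uncertainty_weighted_loss(errors, log_vars):
    # Sampling-free data-uncertainty weighting (an assumed form, used here
    # as a stand-in for UAL's scheme): loss_i = exp(-s_i) * err_i + s_i,
    # with s_i = log(sigma_i^2). A large s_i shrinks the error term, so
    # noisy, ambiguous samples contribute less; the +s_i term keeps the
    # network from declaring everything uncertain.
    return [math.exp(-s) * e + s for e, s in zip(errors, log_vars)]

def mc_dropout_predict(weights, x, p_keep=0.9, n_samples=100, seed=0):
    # Model uncertainty via Monte Carlo dropout on a toy linear model:
    # each weight is kept with probability p_keep (a Bernoulli draw),
    # yielding a distribution of predictions whose spread reflects the
    # model's confidence in this particular input.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        y = sum(w * xj for w, xj in zip(weights, x)
                if rng.random() < p_keep) / p_keep  # inverted-dropout scaling
        preds.append(y)
    mean = sum(preds) / n_samples
    var = sum((y - mean) ** 2 for y in preds) / n_samples
    return mean, var
```

In this sketch, a probe with a low learned log-variance (low data uncertainty) and a low Monte Carlo variance (low model uncertainty) would be the one whose ranking is assessed as reliable.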
Related papers
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted.
We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings.
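The per-example loss selection described above might be sketched as follows; the two concrete objectives are assumptions chosen for illustration, not the loss pair from the EUAT paper.

```python
import math

def euat_loss(prob_true_class, correct):
    # Error-driven selection between two objectives (a sketch):
    # - correctly predicted example: standard cross-entropy, which also
    #   sharpens the prediction (low uncertainty on correct outputs);
    # - misclassified example: push the predicted probability toward 0.5,
    #   i.e. toward an explicitly uncertain output.
    if correct:
        return -math.log(prob_true_class)
    return (prob_true_class - 0.5) ** 2
```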
arXiv Detail & Related papers (2024-05-02T11:48:14Z) - Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z) - ALUM: Adversarial Data Uncertainty Modeling from Latent Model
Uncertainty Compensation [25.67258563807856]
We propose a novel method called ALUM to handle the model uncertainty and data uncertainty in a unified scheme.
Our proposed ALUM is model-agnostic which can be easily implemented into any existing deep model with little extra overhead.
arXiv Detail & Related papers (2023-03-29T17:24:12Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - Learning Confidence for Transformer-based Neural Machine Translation [38.679505127679846]
We propose learning an unsupervised confidence estimate jointly with the training of the neural machine translation (NMT) model.
We interpret confidence as how many hints the NMT model needs to make a correct prediction, where more hints indicate lower confidence.
We demonstrate that our learned confidence estimate achieves high accuracy on extensive sentence/word-level quality estimation tasks.
arXiv Detail & Related papers (2022-03-22T01:51:58Z) - Dense Uncertainty Estimation via an Ensemble-based Conditional Latent
Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
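The core quantity of that Mahalanobis-based test can be illustrated in a few lines (the statistical machinery around it is omitted): for a residual r and predicted covariance Sigma, uncertainty realism requires that d^2 = r^T Sigma^{-1} r, collected over many samples, follow a chi-square distribution with dim(r) degrees of freedom.

```python
def mahalanobis_sq(residual, cov_inv):
    # d^2 = r^T Sigma^{-1} r, given the inverse of the predicted covariance.
    # If the model's uncertainties are realistic, d^2 across a test set
    # should be chi-square distributed with len(residual) degrees of
    # freedom; the paper's statistical test checks exactly that fit.
    n = len(residual)
    return sum(
        residual[i] * cov_inv[i][j] * residual[j]
        for i in range(n)
        for j in range(n)
    )
```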
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.