Stochastic-YOLO: Efficient Probabilistic Object Detection under Dataset
Shifts
- URL: http://arxiv.org/abs/2009.02967v2
- Date: Fri, 6 Nov 2020 12:57:02 GMT
- Title: Stochastic-YOLO: Efficient Probabilistic Object Detection under Dataset
Shifts
- Authors: Tiago Azevedo, René de Jong, Matthew Mattina, Partha Maji
- Abstract summary: In this paper, we adapt the well-established YOLOv3 architecture to generate uncertainty estimations.
We provide some sensitivity analyses, while arguing that Stochastic-YOLO is a sound approach.
- Score: 8.656808451823172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In image classification tasks, the evaluation of models' robustness to
increased dataset shifts with a probabilistic framework is very well studied.
However, object detection (OD) tasks pose other challenges for uncertainty
estimation and evaluation. For example, one needs to evaluate both the quality
of the label uncertainty (i.e., what?) and spatial uncertainty (i.e., where?)
for a given bounding box, but that evaluation cannot be performed with more
traditional average precision metrics (e.g., mAP). In this paper, we adapt the
well-established YOLOv3 architecture to generate uncertainty estimations by
introducing stochasticity in the form of Monte Carlo Dropout (MC-Drop), and
evaluate it across different levels of dataset shift. We call this novel
architecture Stochastic-YOLO, and provide an efficient implementation to
effectively reduce the burden of the MC-Drop sampling mechanism at inference
time. Finally, we provide some sensitivity analyses, while arguing that
Stochastic-YOLO is a sound approach that improves different components of
uncertainty estimations, in particular spatial uncertainties.
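As a rough illustration of the core mechanism described above, the sketch below shows Monte Carlo Dropout inference in PyTorch: dropout layers are kept stochastic at evaluation time and several forward passes are aggregated into a point estimate and a spread. The detector interface, dropout placement, and mean/variance aggregation are illustrative assumptions for this sketch, not the paper's Stochastic-YOLO implementation, which additionally reorganizes the sampling to reduce its inference-time cost.

```python
# Minimal sketch of Monte Carlo Dropout (MC-Drop) inference.
# Assumptions (not from the paper): a generic PyTorch detector whose forward
# pass returns a single raw output tensor, with nn.Dropout layers somewhere
# in the network.
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Put the model in eval mode but keep Dropout layers stochastic."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, image: torch.Tensor, n_samples: int = 10):
    """Run several stochastic forward passes and summarise them."""
    enable_mc_dropout(model)
    samples = torch.stack([model(image) for _ in range(n_samples)], dim=0)
    mean_prediction = samples.mean(dim=0)   # point estimate
    uncertainty = samples.var(dim=0)        # spread across MC samples
    return mean_prediction, uncertainty
```

In a real detector, the raw pre-NMS output tensor is the natural quantity to average here, since matching final bounding boxes across samples is itself a non-trivial association problem.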
Related papers
- Uncertainty-boosted Robust Video Activity Anticipation [72.14155465769201]
Video activity anticipation aims to predict what will happen in the future, with broad application prospects ranging from robot vision to autonomous driving.
Despite recent progress, the data uncertainty issue, reflected in the content evolution process and the dynamic correlation among event labels, has been largely overlooked.
We propose an uncertainty-boosted robust video activity anticipation framework, which generates uncertainty values to indicate the credibility of the anticipation results.
arXiv Detail & Related papers (2024-04-29T12:31:38Z)
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z)
- Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process with Uncertainty Quantification [59.81904428056924]
We introduce SMASH: a Score MAtching estimator for learning marked spatio-temporal point processes (STPPs) with uncertainty quantification.
Specifically, our framework adopts a normalization-free objective by estimating the pseudolikelihood of marked STPPs through score-matching.
The superior performance of our proposed framework is demonstrated through extensive experiments in both event prediction and uncertainty quantification.
arXiv Detail & Related papers (2023-10-25T02:37:51Z)
- Lightweight, Uncertainty-Aware Conformalized Visual Odometry [2.429910016019183]
Data-driven visual odometry (VO) is a critical subroutine for autonomous edge robotics.
Emerging edge robotics devices like insect-scale drones and surgical robots lack a computationally efficient framework to estimate VO's predictive uncertainties.
This paper presents a novel, lightweight, and statistically robust framework that leverages conformal inference (CI) to extract VO's uncertainty bands.
arXiv Detail & Related papers (2023-03-03T20:37:55Z)
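For the conformalized visual odometry entry above, the generic recipe behind conformal uncertainty bands is split conformal prediction. The sketch below shows that generic recipe under the assumption of a scalar prediction and an absolute-residual nonconformity score; it is not the paper's specific VO formulation.

```python
# Minimal split conformal prediction sketch (generic, not the paper's method).
import numpy as np


def conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Return (lower, upper) bands with roughly (1 - alpha) marginal coverage,
    calibrated from absolute residuals on a held-out calibration set."""
    residuals = np.abs(np.asarray(cal_true) - np.asarray(cal_pred))
    n = residuals.size
    # Finite-sample corrected quantile level for split conformal prediction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level)
    return np.asarray(test_pred) - q, np.asarray(test_pred) + q
```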
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Uncertainty-Driven Action Quality Assessment [67.20617610820857]
We propose a novel probabilistic model, named Uncertainty-Driven AQA (UD-AQA), to capture the diversity among multiple judge scores.
We estimate the uncertainty of each prediction and use it to re-weight the AQA regression loss.
Our proposed method achieves competitive results on three benchmarks: the Olympic events datasets MTL-AQA and FineDiving, and the surgical skill dataset JIGSAWS.
arXiv Detail & Related papers (2022-07-29T07:21:15Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
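For the ATC entry above, the thresholding step can be sketched as follows. This assumes confidence is the maximum predicted probability and that the threshold is chosen on labeled source data so that the fraction of source examples above it matches the source accuracy; treat it as an illustration of the summary above rather than the paper's exact procedure.

```python
# Hedged sketch of Average Thresholded Confidence (ATC).
import numpy as np


def fit_atc_threshold(source_conf, source_correct):
    """Choose t so that mean(source_conf > t) roughly equals source accuracy."""
    source_acc = np.mean(source_correct)
    return float(np.quantile(np.asarray(source_conf), 1.0 - source_acc))


def predict_target_accuracy(target_conf, threshold):
    """Predicted target accuracy = fraction of unlabeled target examples above t."""
    return float(np.mean(np.asarray(target_conf) > threshold))
```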
- An Uncertainty Estimation Framework for Probabilistic Object Detection [5.83620245905973]
We introduce a new technique that combines two popular methods to estimate uncertainty in object detection.
Our framework employs deep ensembles and Monte Carlo dropout for approximating predictive uncertainty.
arXiv Detail & Related papers (2021-06-28T22:29:59Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Uncertainty-Gated Stochastic Sequential Model for EHR Mortality Prediction [6.170898159041278]
We present a novel variational recurrent network that estimates the distribution of missing variables, updates hidden states, and predicts the possibility of in-hospital mortality.
It is noteworthy that our model can conduct these procedures in a single stream and learn all network parameters jointly in an end-to-end manner.
arXiv Detail & Related papers (2020-03-02T04:41:28Z)
- Uncertainty-Based Out-of-Distribution Classification in Deep Reinforcement Learning [17.10036674236381]
Wrong predictions for out-of-distribution data can cause safety-critical situations in machine learning systems.
We propose a framework for uncertainty-based OOD classification: UBOOD.
We show that UBOOD produces reliable classification results when combined with ensemble-based estimators.
arXiv Detail & Related papers (2019-12-31T09:52:49Z)
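For the UBOOD entry above, a generic form of uncertainty-based OOD classification with an ensemble can be sketched as below; using prediction variance as the uncertainty score and a validation-chosen threshold are assumptions of this sketch, not details taken from the paper.

```python
# Generic ensemble-disagreement OOD sketch (illustrative assumptions only).
import numpy as np


def ood_score(ensemble_preds):
    """ensemble_preds: array of shape (n_members, n_samples), e.g. Q-value
    predictions from independently trained ensemble members.
    Returns per-sample variance across members as an epistemic-uncertainty proxy."""
    return np.var(np.asarray(ensemble_preds), axis=0)


def classify_ood(ensemble_preds, threshold):
    """Flag samples whose disagreement score exceeds a threshold chosen on
    in-distribution validation data."""
    return ood_score(ensemble_preds) > threshold
```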