A Study on Mitigating Hard Boundaries of Decision-Tree-based Uncertainty
Estimates for AI Models
- URL: http://arxiv.org/abs/2201.03263v1
- Date: Mon, 10 Jan 2022 10:29:12 GMT
- Title: A Study on Mitigating Hard Boundaries of Decision-Tree-based Uncertainty
Estimates for AI Models
- Authors: Pascal Gerber, Lisa Jöckel, Michael Kläs
- Abstract summary: Uncertainty wrappers use a decision tree approach to cluster input-quality-related uncertainties, assigning inputs strictly to distinct uncertainty clusters.
Our objective is to replace this with an approach that mitigates hard decision boundaries while preserving interpretability, runtime complexity, and prediction performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Outcomes of data-driven AI models cannot be assumed to be always correct. To
estimate the uncertainty in these outcomes, the uncertainty wrapper framework
has been proposed, which considers uncertainties related to model fit, input
quality, and scope compliance. Uncertainty wrappers use a decision tree
approach to cluster input-quality-related uncertainties, assigning inputs
strictly to distinct uncertainty clusters. Hence, a slight variation in only
one feature may lead to a cluster assignment with a significantly different
uncertainty. Our objective is to replace this with an approach that mitigates
hard decision boundaries of these assignments while preserving
interpretability, runtime complexity, and prediction performance. Five
approaches were selected as candidates and integrated into the uncertainty
wrapper framework. For the evaluation based on the Brier score, datasets for a
pedestrian detection use case were generated using the CARLA simulator and
YOLOv3. All integrated approaches achieved a softening, i.e., smoothing, of
uncertainty estimation. Yet, compared to decision trees, they are harder to
interpret and have higher runtime complexity. Moreover, some components of the
Brier score deteriorated while others improved. Regarding the Brier score, random
forests were the most promising. In conclusion, softening hard decision tree
boundaries appears to be a trade-off between smoother uncertainty estimates on
the one hand and interpretability and runtime complexity on the other.
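To make the contrast concrete, the following is a minimal sketch of the idea the
abstract describes: a single decision tree assigns each input to a hard
uncertainty cluster, while a random forest, the most promising candidate in the
study, averages over many trees and thereby smooths the estimate. It uses
scikit-learn with synthetic input-quality features and binary "outcome was
incorrect" labels; these stand-ins are assumptions for illustration, not the
paper's CARLA/YOLOv3 data or its actual implementation.

```python
# Minimal sketch (not the paper's implementation): hard decision-tree uncertainty
# clusters vs. random-forest softening, compared via the Brier score.
# Assumption: synthetic input-quality features X and binary labels y, where y = 1
# means the wrapped model's outcome was incorrect.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 3))                        # stand-in quality features (e.g., brightness, rain)
p_err = 1.0 / (1.0 + np.exp(-(4.0 * X[:, 0] - 2.0)))   # true error probability, smooth in feature 0
y = rng.binomial(1, p_err)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0).fit(X, y)

# A tiny change in one feature can move an input across a tree split and jump the
# uncertainty estimate; averaging over many randomized trees smooths this transition.
x_a = np.array([[0.49, 0.5, 0.5]])
x_b = np.array([[0.51, 0.5, 0.5]])
print("tree:  ", tree.predict_proba(x_a)[0, 1], "->", tree.predict_proba(x_b)[0, 1])
print("forest:", forest.predict_proba(x_a)[0, 1], "->", forest.predict_proba(x_b)[0, 1])

# Brier score (lower is better) of the estimated error probabilities.
print("Brier, tree:  ", brier_score_loss(y, tree.predict_proba(X)[:, 1]))
print("Brier, forest:", brier_score_loss(y, forest.predict_proba(X)[:, 1]))
```

Raising the number of trees or their depth changes how smooth the estimate is,
which is where the interpretability and runtime trade-off mentioned in the
conclusion comes in.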
Related papers
- Uncertainty-boosted Robust Video Activity Anticipation [72.14155465769201]
Video activity anticipation aims to predict what will happen in the future, embracing a broad application prospect ranging from robot vision to autonomous driving.
Despite the recent progress, the data uncertainty issue, reflected as the content evolution process and dynamic correlation in event labels, has been somehow ignored.
We propose an uncertainty-boosted robust video activity anticipation framework, which generates uncertainty values to indicate the credibility of the anticipation results.
arXiv Detail & Related papers (2024-04-29T12:31:38Z)
- Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection [1.8990839669542954]
We propose a cost-sensitive framework for object detection tailored to user-defined budgets.
We derive minimum thresholding requirements to prevent performance degradation.
We automate and optimize the thresholding process to maximize the failure recognition rate.
arXiv Detail & Related papers (2024-04-26T14:03:55Z)
- Robust Design and Evaluation of Predictive Algorithms under Unobserved Confounding [2.8498944632323755]
We propose a unified framework for the robust design and evaluation of predictive algorithms in selectively observed data.
We impose general assumptions on how much the outcome may vary on average between unselected and selected units.
We develop debiased machine learning estimators for the bounds on a large class of predictive performance estimands.
arXiv Detail & Related papers (2022-12-19T20:41:44Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Data-Driven Robust Optimization using Unsupervised Deep Learning [0.0]
We show that a trained neural network can be integrated into a robust optimization model by formulating the adversarial problem as a convex mixed-integer program.
We find that this approach outperforms a similar approach using kernel-based support vector sets.
arXiv Detail & Related papers (2020-11-19T11:06:54Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning applications such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
- Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability [0.5156484100374058]
We propose a wrapper that enriches a black-box model's output prediction with a measure of uncertainty.
Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions.
Results demonstrate the effectiveness of the uncertainty computed by the wrapper.
arXiv Detail & Related papers (2019-12-29T11:05:47Z)
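The rejection system advocated in the last entry, and selective prediction on
top of uncertainty wrappers in general, can be sketched as simple thresholding
on the estimated uncertainty. The threshold and example values below are
illustrative assumptions, not taken from any of the listed papers.

```python
# Illustrative sketch: accept a prediction only if its estimated uncertainty is
# below a chosen threshold; reject (defer) the rest. Values are made up.
import numpy as np

def select_confident(predictions: np.ndarray,
                     uncertainties: np.ndarray,
                     max_uncertainty: float = 0.2):
    """Return accepted predictions and a boolean mask of accepted items."""
    accepted = uncertainties <= max_uncertainty
    return predictions[accepted], accepted

preds = np.array([1, 0, 1, 1, 0])                  # per-input predictions
uncs = np.array([0.05, 0.35, 0.10, 0.60, 0.15])    # estimated uncertainty per input
kept, mask = select_confident(preds, uncs)
print("accepted predictions:", kept)               # only inputs with uncertainty <= 0.2
print("rejection rate:", 1.0 - mask.mean())        # fraction deferred to a fallback
```

Raising the threshold accepts more predictions at the cost of admitting less
certain ones, i.e., the usual coverage-versus-risk trade-off of rejection
systems.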
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.