Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability
- URL: http://arxiv.org/abs/1912.12628v1
- Date: Sun, 29 Dec 2019 11:05:47 GMT
- Title: Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability
- Authors: José Mena, Oriol Pujol, Jordi Vitrià
- Abstract summary: We propose a wrapper that enriches a black-box model's output prediction with a measure of uncertainty.
Based on the resulting uncertainty measure, we advocate a rejection system that selects the most confident predictions.
Results demonstrate the effectiveness of the uncertainty computed by the wrapper.
- Score: 0.5156484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, the use of machine learning models has become commonplace in many applications. Companies deliver pre-trained models encapsulated as application programming interfaces (APIs) that developers combine with third-party components and with their own models and data to create complex data products that solve specific problems. The complexity of such products, together with the lack of control over and knowledge of the internals of each component, causes unavoidable effects such as lack of transparency, difficulty of auditability, and the emergence of potential uncontrolled risks; these products are effectively black boxes. Holding such solutions accountable is a challenge for auditors and for the machine learning community. In this work, we propose a wrapper that, given a black-box model, enriches its output prediction with a measure of uncertainty. With this wrapper, the black box becomes auditable for accuracy risk (the risk derived from low-quality or uncertain decisions), and at the same time we obtain an actionable mechanism to mitigate that risk in the form of decision rejection: we can choose not to issue a prediction when the risk or uncertainty of that decision is significant. Based on the resulting uncertainty measure, we advocate a rejection system that selects the most confident predictions and discards the more uncertain ones, improving the trustworthiness of the resulting system. We showcase the proposed technique and methodology in a practical scenario where a simulated sentiment-analysis API based on natural language processing is applied to different domains. Results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation with low-quality predictions and misclassifications.
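The paper itself ships no code, but the core recipe is easy to sketch. Below is a minimal, hypothetical Python illustration: a wrapper that places a Dirichlet distribution over a black-box classifier's predicted probabilities and scores each prediction by the expected entropy of samples from that distribution, plus a quantile-based rejection rule. The predict_proba interface, the fixed concentration parameter beta, and the sampling-based entropy estimate are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

class DirichletUncertaintyWrapper:
    """Wrap a black-box classifier and attach an uncertainty score to each
    prediction. The black box's output probabilities p are taken as the mean
    of a Dirichlet with concentration alpha = beta * p, and the expected
    entropy under that Dirichlet serves as the uncertainty measure."""

    def __init__(self, predict_proba, beta=10.0, n_samples=1000, seed=0):
        self.predict_proba = predict_proba   # black-box API call (assumed interface)
        self.beta = beta                     # concentration (assumed fixed here)
        self.n_samples = n_samples
        self.rng = np.random.default_rng(seed)

    def predict_with_uncertainty(self, X):
        probs = np.asarray(self.predict_proba(X))    # shape (n, k)
        preds = probs.argmax(axis=1)
        uncertainty = np.empty(len(probs))
        for i, p in enumerate(probs):
            alpha = self.beta * np.clip(p, 1e-6, None)
            s = self.rng.dirichlet(alpha, self.n_samples)          # (S, k)
            # expected entropy over Dirichlet samples
            uncertainty[i] = -(s * np.log(s + 1e-12)).sum(axis=1).mean()
        return preds, uncertainty

def reject_uncertain(preds, uncertainty, rejection_rate=0.2):
    """Keep the most confident (1 - rejection_rate) fraction of predictions;
    the rest are rejected rather than issued."""
    threshold = np.quantile(uncertainty, 1.0 - rejection_rate)
    keep = uncertainty <= threshold
    return preds[keep], keep
```

If the uncertainty measure correlates with misclassification, as the paper reports, rejecting the most uncertain fraction raises the accuracy of the predictions that are actually issued.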
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
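As a generic illustration of disentangling uncertainty (not this paper's formal framework, which targets perception and plan generation in multimodal models), the standard entropy-based decomposition over an ensemble separates aleatoric from epistemic uncertainty:

```python
import numpy as np

def decompose_uncertainty(ensemble_probs):
    """Entropy decomposition over an ensemble: total predictive entropy
    splits into an aleatoric part (average entropy of each member's
    prediction) and an epistemic part (mutual information between the
    prediction and the choice of member)."""
    p = np.asarray(ensemble_probs)          # shape (n_members, n_classes)
    mean_p = p.mean(axis=0)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum()
    aleatoric = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    epistemic = total - aleatoric           # >= 0 by Jensen's inequality
    return total, aleatoric, epistemic
```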
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Explainability through uncertainty: Trustworthy decision-making with neural networks [1.104960878651584]
Uncertainty is a key feature of any machine learning model.
It is particularly important in neural networks, which tend to be overconfident.
Treating uncertainty as a form of explainability (XAI) improves the model's trustworthiness in downstream decision-making tasks.
arXiv Detail & Related papers (2024-03-15T10:22:48Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
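A minimal sketch of the ensembling step, with the LLM call and the clarification generator stubbed out as hypothetical placeholders:

```python
from collections import Counter

# Hypothetical stubs -- in practice both would be calls to an LLM API.
def generate_clarifications(prompt, n=4):
    """Ask the model for n disambiguated rewrites of the input."""
    return [f"{prompt} (clarified reading {i})" for i in range(n)]

def llm_predict(prompt):
    """Return a label for the prompt; stubbed for the sketch."""
    return hash(prompt) % 2

def clarification_ensemble(prompt, n=4):
    """Ensemble predictions over clarified versions of the input.
    Disagreement across clarifications indicates uncertainty that stems
    from ambiguity in the input rather than from the model itself."""
    votes = Counter(llm_predict(c) for c in generate_clarifications(prompt, n))
    label, count = votes.most_common(1)[0]
    input_uncertainty = 1.0 - count / n   # disagreement rate in [0, 1)
    return label, input_uncertainty
```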
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Quantifying Uncertainty in Deep Learning Classification with Noise in Discrete Inputs for Risk-Based Decision Making [1.529943343419486]
We propose a mathematical framework to quantify prediction uncertainty for Deep Neural Network (DNN) models.
The prediction uncertainty arises from errors in predictors that follow some known finite discrete distribution.
Our proposed framework can support risk-based decision making in applications when discrete errors in predictors are present.
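A toy sketch of that setting: if each discrete predictor's error distribution is known, the induced distribution over the model's predictions can be computed by enumeration (assumed names and interfaces; enumeration only scales to a few low-cardinality features):

```python
import itertools
import numpy as np

def prediction_distribution(model_predict, error_dists):
    """error_dists[j] is a dict {candidate_true_value: probability} for
    discrete feature j, given its observed (possibly erroneous) value.
    Enumerate every corrected input, weight the model's prediction by the
    joint error probability, and return the induced label distribution."""
    supports = [list(d.items()) for d in error_dists]
    label_probs = {}
    for combo in itertools.product(*supports):
        values = np.array([v for v, _ in combo], dtype=float)
        weight = float(np.prod([p for _, p in combo]))
        label = model_predict(values)
        label_probs[label] = label_probs.get(label, 0.0) + weight
    return label_probs   # e.g. {0: 0.85, 1: 0.15}; its entropy is the
                         # uncertainty that feeds a risk-based decision
```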
arXiv Detail & Related papers (2023-10-09T19:26:24Z)
- Lightweight, Uncertainty-Aware Conformalized Visual Odometry [2.429910016019183]
Data-driven visual odometry (VO) is a critical subroutine for autonomous edge robotics.
Emerging edge robotics devices like insect-scale drones and surgical robots lack a computationally efficient framework to estimate VO's predictive uncertainties.
This paper presents a novel, lightweight, and statistically robust framework that leverages conformal inference (CI) to extract VO's uncertainty bands.
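The generic split-conformal construction behind such uncertainty bands can be sketched in a few lines (shown here in its plain regression form; the paper's VO-specific machinery is more elaborate):

```python
import numpy as np

def conformal_band(predict, X_calib, y_calib, x_new, alpha=0.1):
    """Split conformal prediction: with probability >= 1 - alpha the true
    value falls inside the returned band, assuming the calibration data
    and the new point are exchangeable."""
    residuals = np.abs(y_calib - predict(X_calib))   # nonconformity scores
    n = len(residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(residuals, level)
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q                      # lower, upper band
```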
arXiv Detail & Related papers (2023-03-03T20:37:55Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Ensemble Quantile Networks: Uncertainty-Aware Reinforcement Learning with Applications in Autonomous Driving [1.6758573326215689]
Reinforcement learning can be used to create a decision-making agent for autonomous driving.
Previous approaches provide only black-box solutions, which do not offer information on how confident the agent is about its decisions.
This paper introduces the Ensemble Quantile Networks (EQN) method, which combines distributional RL with an ensemble approach to obtain a complete uncertainty estimate.
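Roughly, the EQN-style estimate can be illustrated as follows, treating the quantile outputs of each ensemble member as given (a simplified reading of the method, not the authors' implementation):

```python
import numpy as np

def eqn_uncertainty(quantile_values):
    """quantile_values: array of shape (n_ensemble, n_quantiles) holding
    each ensemble member's estimated return quantiles for one state-action
    pair. Aleatoric uncertainty is read off the spread of the quantiles
    (inherent outcome variability); epistemic uncertainty is read off the
    disagreement between members (lack of knowledge)."""
    q = np.asarray(quantile_values, dtype=float)
    member_means = q.mean(axis=1)        # each member's expected return
    aleatoric = q.std(axis=1).mean()     # average within-member spread
    epistemic = member_means.std()       # across-member disagreement
    return member_means.mean(), aleatoric, epistemic

# e.g. fall back to a safe baseline policy whenever the epistemic term
# exceeds a chosen threshold, as confidence-aware driving agents do.
```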
arXiv Detail & Related papers (2021-05-21T10:36:16Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z) - An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear
Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
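A minimal Monte-Carlo dropout sketch of the uncertainty score such a system could use to decide when to hand a sample to the human (assuming a PyTorch classifier with dropout layers):

```python
import torch

def mc_dropout_uncertainty(model, x, n_passes=30):
    """Monte-Carlo dropout: keep dropout stochastic at inference time and
    average over several forward passes; the predictive entropy of the
    mean then scores how uncertain the model is about each sample."""
    model.train()  # train mode keeps dropout active (note: this also
                   # affects batch-norm; a careful version would toggle
                   # only the dropout modules)
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_passes)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs.argmax(dim=-1), entropy

# route to a human reviewer when entropy exceeds a validation-tuned threshold:
# preds, unc = mc_dropout_uncertainty(net, batch); to_human = unc > tau
```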
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.