Uncertainty Quantification for Competency Assessment of Autonomous
Agents
- URL: http://arxiv.org/abs/2206.10553v1
- Date: Tue, 21 Jun 2022 17:35:13 GMT
- Title: Uncertainty Quantification for Competency Assessment of Autonomous
Agents
- Authors: Aastha Acharya, Rebecca Russell, Nisar R. Ahmed
- Abstract summary: Autonomous agents must elicit appropriate levels of trust from human users.
One method to build trust is to have agents assess and communicate their own competencies for performing given tasks.
We show how ensembles of deep generative models can be used to quantify the agent's aleatoric and epistemic uncertainties.
- Score: 3.3517146652431378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For safe and reliable deployment in the real world, autonomous agents must
elicit appropriate levels of trust from human users. One method to build trust
is to have agents assess and communicate their own competencies for performing
given tasks. Competency depends on the uncertainties affecting the agent,
making accurate uncertainty quantification vital for competency assessment. In
this work, we show how ensembles of deep generative models can be used to
quantify the agent's aleatoric and epistemic uncertainties when forecasting
task outcomes as part of competency assessment.
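The aleatoric/epistemic split described in the abstract can be sketched with a simple ensemble decomposition (a minimal illustration with made-up Gaussian ensemble members, not the authors' deep generative models):

```python
import numpy as np

# Hypothetical ensemble: each member predicts a Gaussian over the task
# outcome, parameterized by (mean, variance). In the paper these would be
# deep generative models; here they are stubbed with fixed numbers.
member_means = np.array([0.9, 1.1, 1.0, 0.8, 1.2])
member_variances = np.array([0.04, 0.05, 0.03, 0.06, 0.04])

# Aleatoric uncertainty: irreducible outcome noise, averaged over members.
aleatoric = member_variances.mean()

# Epistemic uncertainty: disagreement between members about the mean.
epistemic = member_means.var()

# Total predictive variance is the sum of the two components.
total = aleatoric + epistemic
```

A competency report could then communicate the two components separately, since only the epistemic part shrinks with more training data.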
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
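The definition of true criticality above lends itself to a Monte Carlo sketch (a toy illustration; the environment, policy, and sample counts here are assumptions, not the authors' setup):

```python
import random

def rollout_reward(policy, env_step, state, horizon, deviate_for=0, actions=(0, 1)):
    """Total reward over one rollout; random actions for the first `deviate_for` steps."""
    total = 0.0
    for t in range(horizon):
        a = random.choice(actions) if t < deviate_for else policy(state)
        state, r = env_step(state, a)
        total += r
    return total

def true_criticality(policy, env_step, state, horizon, n, samples=1000):
    """Expected reward drop from n consecutive random actions (Monte Carlo estimate)."""
    on_policy = sum(rollout_reward(policy, env_step, state, horizon)
                    for _ in range(samples)) / samples
    deviated = sum(rollout_reward(policy, env_step, state, horizon, deviate_for=n)
                   for _ in range(samples)) / samples
    return on_policy - deviated
```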
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- "A Good Bot Always Knows Its Limitations": Assessing Autonomous System Decision-making Competencies through Factorized Machine Self-confidence [5.167803438665586]
Factorized Machine Self-confidence (FaMSeC) provides a holistic description of factors driving an algorithmic decision-making process.
Indicators are derived from hierarchical 'problem-solving statistics' embedded within broad classes of probabilistic decision-making algorithms.
FaMSeC allows 'algorithmic goodness of fit' evaluations to be easily incorporated into the design of many kinds of autonomous agents.
arXiv Detail & Related papers (2024-07-29T01:22:04Z)
- U-Trustworthy Models. Reliability, Competence, and Confidence in Decision-Making [0.21756081703275998]
We present a precise mathematical definition of trustworthiness, termed $\mathcal{U}$-trustworthiness.
Within the context of $\mathcal{U}$-trustworthiness, we prove that properly-ranked models are inherently $\mathcal{U}$-trustworthy.
We advocate for the adoption of the AUC metric as the preferred measure of trustworthiness.
arXiv Detail & Related papers (2024-01-04T04:58:02Z)
- A Factor-Based Framework for Decision-Making Competency Self-Assessment [1.3670071336891754]
We develop a framework for generating succinct, human-understandable competency self-assessments in terms of machine self-confidence.
We combine several aspects of probabilistic meta-reasoning for algorithmic planning and decision-making under uncertainty to arrive at a novel set of generalizable self-confidence factors.
arXiv Detail & Related papers (2022-03-22T18:19:10Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
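The reject-option evaluation described above can be sketched as a simple decision rule (a hypothetical stand-in for the downstream step only; the BAE and its uncertainty estimates are not reproduced):

```python
def classify_with_reject(anomaly_scores, uncertainties, score_thresh, unc_thresh):
    """Label each sample 'anomaly' or 'normal', or 'reject' when uncertainty is high.

    `anomaly_scores` and `uncertainties` would come from the Bayesian
    autoencoder ensemble; here they are just plain numbers.
    """
    labels = []
    for s, u in zip(anomaly_scores, uncertainties):
        if u > unc_thresh:
            labels.append("reject")      # too uncertain to decide either way
        elif s > score_thresh:
            labels.append("anomaly")
        else:
            labels.append("normal")
    return labels
```

Sweeping `unc_thresh` then trades coverage against accuracy on the retained predictions, which is how such reject-option setups are typically evaluated.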
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
- MACEst: The reliable and trustworthy Model Agnostic Confidence Estimator [0.17188280334580192]
We argue that any confidence estimates based upon standard machine learning point prediction algorithms are fundamentally flawed.
We present MACEst, a Model Agnostic Confidence Estimator, which provides reliable and trustworthy confidence estimates.
arXiv Detail & Related papers (2021-09-02T14:34:06Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
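A disagreement-based uncertainty measure of this kind can be sketched as mean pairwise dissimilarity over a set of predictions (a generic illustration; the paper's actual dissimilarity function and predictor outputs are not specified here):

```python
def disagreement_uncertainty(predictions, dissim):
    """Mean pairwise dissimilarity among predictions.

    `dissim` is any symmetric dissimilarity function; high average
    disagreement is taken as high uncertainty.
    """
    n = len(predictions)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(dissim(predictions[i], predictions[j]) for i, j in pairs) / len(pairs)
```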
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Ensemble Quantile Networks: Uncertainty-Aware Reinforcement Learning with Applications in Autonomous Driving [1.6758573326215689]
Reinforcement learning can be used to create a decision-making agent for autonomous driving.
Previous approaches provide only black-box solutions, which do not offer information on how confident the agent is about its decisions.
This paper introduces the Ensemble Quantile Networks (EQN) method, which combines distributional RL with an ensemble approach to obtain a complete uncertainty estimate.
arXiv Detail & Related papers (2021-05-21T10:36:16Z)
- An evaluation of word-level confidence estimation for end-to-end automatic speech recognition [70.61280174637913]
We investigate confidence estimation for end-to-end automatic speech recognition (ASR).
We provide an extensive benchmark of popular confidence methods on four well-known speech datasets.
Our results suggest a strong baseline can be obtained by scaling the logits by a learnt temperature.
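The temperature-scaling baseline mentioned in the summary can be sketched as follows (a minimal illustration; in practice the temperature would be learnt on held-out data, which is omitted here):

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum()

def confidence(logits, temperature=1.0):
    """Word-level confidence as the max softmax probability of temperature-scaled logits."""
    return float(softmax(np.asarray(logits, dtype=float) / temperature).max())
```

With temperature greater than 1, the softmax is flattened and overconfident predictions are softened, which is the effect the benchmarked baseline exploits.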
arXiv Detail & Related papers (2021-01-14T09:51:59Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.