A Factor-Based Framework for Decision-Making Competency Self-Assessment
- URL: http://arxiv.org/abs/2203.11981v1
- Date: Tue, 22 Mar 2022 18:19:10 GMT
- Title: A Factor-Based Framework for Decision-Making Competency Self-Assessment
- Authors: Brett W. Israelsen, Nisar Ahmed
- Abstract summary: We develop a framework for generating succinct, human-understandable competency self-assessments in terms of machine self-confidence.
We combine several aspects of probabilistic meta-reasoning for algorithmic planning and decision-making under uncertainty to arrive at a novel set of generalizable self-confidence factors.
- Score: 1.3670071336891754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We summarize our efforts to date in developing a framework for generating
succinct human-understandable competency self-assessments in terms of machine
self confidence, i.e. a robot's self-trust in its functional abilities to
accomplish assigned tasks. Whereas early work explored machine self-confidence
in ad hoc ways for niche applications, our Factorized Machine Self-Confidence
framework introduces and combines several aspects of probabilistic meta
reasoning for algorithmic planning and decision-making under uncertainty to
arrive at a novel set of generalizable self-confidence factors, which can
support competency assessment for a wide variety of problems.
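To make the notion of a self-confidence factor concrete, here is a minimal sketch of one outcome-style factor: Monte Carlo rollouts of a fixed policy in a toy MDP, with confidence reported as the estimated probability that the return clears a task-success threshold. The toy dynamics (`toy_mdp_step`), the threshold, and the scoring are illustrative assumptions, not the factor definitions used in the FaMSeC framework.

```python
import random

def rollout_return(policy, step_fn, horizon=50, gamma=0.95, seed=None):
    """Simulate one episode of a stochastic toy MDP and return the discounted return."""
    rng = random.Random(seed)
    state, total, discount = 0, 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = step_fn(state, action, rng)
        total += discount * reward
        discount *= gamma
    return total

def outcome_confidence(policy, step_fn, success_threshold, n_rollouts=1000):
    """Estimate P(return >= threshold): one illustrative outcome-assessment-style factor."""
    returns = [rollout_return(policy, step_fn, seed=i) for i in range(n_rollouts)]
    return sum(r >= success_threshold for r in returns) / n_rollouts

# --- hypothetical toy problem, for illustration only ---
def toy_mdp_step(state, action, rng):
    """Random-walk toy dynamics: the action nudges the state; reward 1.0 once state reaches 5."""
    state = max(0, state + action + rng.choice([-1, 0, 1]))
    reward = 1.0 if state >= 5 else 0.0
    return state, reward

greedy_policy = lambda state: 1  # always try to move "up"
print(f"outcome confidence: {outcome_confidence(greedy_policy, toy_mdp_step, success_threshold=2.0):.2f}")
```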
Related papers
- The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making [6.852960508141108]
This paper presents an intervention for self-confidence shaping, designed to calibrate self-confidence at a targeted level.
We show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI.
The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence.
arXiv Detail & Related papers (2025-02-20T06:55:41Z)
- Autotelic Reinforcement Learning: Exploring Intrinsic Motivations for Skill Acquisition in Open-Ended Environments [1.104960878651584]
This paper presents a comprehensive overview of autotelic Reinforcement Learning (RL), emphasizing the role of intrinsic motivations in the open-ended formation of skill repertoires.
We delineate the distinctions between knowledge-based and competence-based intrinsic motivations, illustrating how these concepts inform the development of autonomous agents capable of generating and pursuing self-defined goals.
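As a rough illustration of the distinction drawn above, the hypothetical snippet below contrasts a knowledge-based intrinsic reward (surprise about the environment) with a competence-based one (learning progress on a self-chosen goal). Both reward definitions are simplified placeholders rather than the formulations of any particular algorithm surveyed in the paper.

```python
def knowledge_based_reward(predicted_next_obs, actual_next_obs):
    """Knowledge-based intrinsic motivation: reward surprise, i.e. model prediction error."""
    return sum((p - a) ** 2 for p, a in zip(predicted_next_obs, actual_next_obs))

def competence_based_reward(goal_success_history, window=20):
    """Competence-based intrinsic motivation: reward learning progress on a self-chosen goal,
    measured as the change in success rate between the two halves of a recent window."""
    recent = goal_success_history[-window:]
    if len(recent) < window:
        return 0.0
    half = window // 2
    older, newer = recent[:half], recent[half:]
    return abs(sum(newer) / half - sum(older) / half)
```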
arXiv Detail & Related papers (2025-02-06T14:37:46Z)
- Trustworthy and Explainable Decision-Making for Workforce allocation [5.329471304736775]
This paper presents an ongoing project focused on developing a decision-making tool designed for workforce allocation.
Our objective is to create a system that not only optimises the allocation of teams to scheduled tasks but also provides clear, understandable explanations for its decisions.
By incorporating human-in-the-loop mechanisms, the tool aims to enhance user trust and facilitate interactive conflict resolution.
arXiv Detail & Related papers (2024-12-13T16:46:13Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
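The hypothetical sketch below conveys the general idea of attributing uncertainty to the perception stage versus the plan-generation stage by resampling each stage and measuring disagreement separately; `perceive` and `plan` are placeholder stochastic callables, and the paper's formal framework is more principled than this.

```python
from collections import Counter

def disagreement(samples):
    """Simple disagreement score: 1 - frequency of the most common outcome (assumes hashable outputs)."""
    counts = Counter(samples)
    return 1.0 - counts.most_common(1)[0][1] / len(samples)

def disentangled_uncertainty(observation, perceive, plan, n=20):
    """Attribute uncertainty to perception vs. plan generation by resampling each stage."""
    percepts = [perceive(observation) for _ in range(n)]
    perception_unc = disagreement(percepts)
    # Hold one percept fixed and resample the planner to isolate decision-stage uncertainty.
    fixed_percept = percepts[0]
    plans = [plan(fixed_percept) for _ in range(n)]
    plan_unc = disagreement(plans)
    return {"perception": perception_unc, "plan": plan_unc}
```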
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- "A Good Bot Always Knows Its Limitations": Assessing Autonomous System Decision-making Competencies through Factorized Machine Self-confidence [5.167803438665586]
Factorized Machine Self-confidence (FaMSeC) provides a holistic description of factors driving an algorithmic decision-making process.
Indicators are derived from hierarchical 'problem-solving statistics' embedded within broad classes of probabilistic decision-making algorithms.
FaMSeC allows algorithmic 'goodness of fit' evaluations to be easily incorporated into the design of many kinds of autonomous agents.
arXiv Detail & Related papers (2024-07-29T01:22:04Z)
- Rational Decision-Making Agent with Internalized Utility Judgment [91.80700126895927]
Large language models (LLMs) have made remarkable advances and attracted significant effort toward developing them into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
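One common way to internalize a utility ranking from pairwise experience comparisons is an Elo-style update, sketched below as an assumption about what a utility-learning step could look like; it is not a faithful reproduction of RadAgent's procedure.

```python
def elo_update(rating_a, rating_b, a_preferred, k=32.0):
    """Elo-style update from one pairwise preference between candidate decisions A and B."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_preferred else 0.0
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b

# Iterating such updates over self-generated experience comparisons yields a utility
# ordering over candidate decisions without any external reward signal.
ratings = {"plan_1": 1000.0, "plan_2": 1000.0}
ratings["plan_1"], ratings["plan_2"] = elo_update(ratings["plan_1"], ratings["plan_2"], a_preferred=True)
```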
arXiv Detail & Related papers (2023-08-24T03:11:45Z)
- Uncertainty Quantification for Competency Assessment of Autonomous Agents [3.3517146652431378]
Autonomous agents must elicit appropriate levels of trust from human users.
One method to build trust is to have agents assess and communicate their own competencies for performing given tasks.
We show how ensembles of deep generative models can be used to quantify the agent's aleatoric and epistemic uncertainties.
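A standard way to split ensemble uncertainty into aleatoric and epistemic parts is the entropy decomposition sketched below (total predictive entropy minus the average member entropy). It uses categorical outputs for simplicity and is a generic illustration, not the paper's specific deep-generative-model construction.

```python
import math

def entropy(p):
    """Shannon entropy of a categorical distribution (list of probabilities)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def decompose_uncertainty(member_predictions):
    """Split ensemble uncertainty: total = aleatoric (expected member entropy) + epistemic (the gap).
    `member_predictions` is a list of categorical distributions, one per ensemble member."""
    n_members = len(member_predictions)
    n_classes = len(member_predictions[0])
    mean_p = [sum(p[c] for p in member_predictions) / n_members for c in range(n_classes)]
    total = entropy(mean_p)                                                # total predictive uncertainty
    aleatoric = sum(entropy(p) for p in member_predictions) / n_members   # irreducible noise
    epistemic = total - aleatoric                                          # disagreement between members
    return total, aleatoric, epistemic

# Members that agree -> low epistemic; members that disagree -> high epistemic.
print(decompose_uncertainty([[0.9, 0.1], [0.85, 0.15], [0.88, 0.12]]))
print(decompose_uncertainty([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]))
```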
arXiv Detail & Related papers (2022-06-21T17:35:13Z)
- Autonomous Open-Ended Learning of Tasks with Non-Stationary Interdependencies [64.0476282000118]
Intrinsic motivations have been shown to generate a task-agnostic signal for properly allocating training time amongst goals.
While the majority of works in the field of intrinsically motivated open-ended learning focus on scenarios where goals are independent of each other, only a few have studied the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences.
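As a toy illustration of using an intrinsic, task-agnostic signal to allocate training time among goals whose attainability may change, the sketch below samples goals according to a recency-weighted learning-progress estimate; it is a generic scheme, not the GRAIL or H-GRAIL architecture.

```python
import random

class LearningProgressScheduler:
    """Pick which goal to practice next, in proportion to recent change in competence.
    Exponential moving averages keep the signal responsive to non-stationary tasks."""

    def __init__(self, goals, fast=0.3, slow=0.05):
        self.fast = {g: 0.0 for g in goals}   # short-horizon success estimate
        self.slow = {g: 0.0 for g in goals}   # long-horizon success estimate
        self.fast_rate, self.slow_rate = fast, slow

    def update(self, goal, success):
        self.fast[goal] += self.fast_rate * (float(success) - self.fast[goal])
        self.slow[goal] += self.slow_rate * (float(success) - self.slow[goal])

    def select_goal(self, epsilon=0.1):
        if random.random() < epsilon:          # keep exploring all goals occasionally
            return random.choice(list(self.fast))
        progress = {g: abs(self.fast[g] - self.slow[g]) for g in self.fast}
        return max(progress, key=progress.get)
```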
arXiv Detail & Related papers (2022-05-16T10:43:01Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
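A minimal way to pose that question in code is to evaluate an agent's utility while randomly masking sub-groups of the entities it observes, as sketched below; `utility_fn` stands in for a learned per-agent value network, and this only illustrates the randomized entity-wise idea rather than the full training method.

```python
import random

def random_subgroup_utility(agent_obs_entities, utility_fn, keep_prob=0.5, n_samples=8, rng=None):
    """Estimate an agent's expected utility when attending only to random sub-groups
    of its observed entities (the agent's own entry is always kept)."""
    rng = rng or random.Random()
    estimates = []
    for _ in range(n_samples):
        mask = [i == 0 or rng.random() < keep_prob for i in range(len(agent_obs_entities))]
        subgroup = [e for e, keep in zip(agent_obs_entities, mask) if keep]
        estimates.append(utility_fn(subgroup))
    return sum(estimates) / len(estimates)
```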
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.