Compensating for Sensing Failures via Delegation in Human-AI Hybrid
Systems
- URL: http://arxiv.org/abs/2303.01300v1
- Date: Thu, 2 Mar 2023 14:27:01 GMT
- Title: Compensating for Sensing Failures via Delegation in Human-AI Hybrid
Systems
- Authors: Andrew Fuchs, Andrea Passarella, Marco Conti
- Abstract summary: We consider the hybrid human-AI teaming case where a managing agent is tasked with identifying when to perform a delegation assignment.
We model how the environmental context can contribute to, or exacerbate, the sensing deficiencies.
We demonstrate how a Reinforcement Learning (RL) manager can correct the context-delegation association.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given the increasing prevalence of intelligent systems capable of
autonomous action or of augmenting human activities, it is important to
consider scenarios in which the human, the autonomous system, or both can fail
as a result of one of several contributing factors (e.g., perception). A
failure of either the human or the autonomous agent can result in outcomes
ranging from reduced performance to something as severe as injury or death. We
consider the hybrid human-AI teaming case where a managing agent is tasked
with identifying when to perform a delegation assignment, i.e., whether the
human or the autonomous system should gain control. In this context, the
manager estimates its best action based on the likelihood that either agent
(human or autonomous) will fail as a result of its sensing capabilities and
possible deficiencies. We model how the environmental context can contribute
to, or exacerbate, these sensing deficiencies. Such contexts provide cases in
which the manager must learn to map each agent's capabilities to its
suitability for control. We demonstrate how a Reinforcement Learning (RL)
manager can correct the context-delegation association and help the hybrid
team of agents outperform any agent working in isolation.
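To make the delegation problem concrete, here is a minimal sketch of an RL
delegation manager. It is not the authors' implementation: the sensing
contexts, the failure probabilities, the two-action delegation space, and the
bandit-style tabular Q update are all invented for illustration.

```python
import random
from collections import defaultdict

# Hypothetical sketch of an RL delegation manager (not the paper's model).
# The manager observes a sensing context and chooses which agent gains control.

CONTEXTS = ["clear", "fog", "glare", "night"]  # assumed sensing contexts
ACTIONS = ["human", "autonomous"]              # delegation choices

# Assumed per-context sensing-failure probabilities, unknown to the manager.
# Each context degrades one agent's sensing more than the other's.
FAIL_PROB = {
    "clear": {"human": 0.05, "autonomous": 0.05},
    "fog":   {"human": 0.10, "autonomous": 0.40},
    "glare": {"human": 0.35, "autonomous": 0.10},
    "night": {"human": 0.45, "autonomous": 0.15},
}

ALPHA, EPSILON, EPISODES = 0.1, 0.1, 20_000
Q = defaultdict(float)  # Q[(context, action)] -> estimated value

for _ in range(EPISODES):
    ctx = random.choice(CONTEXTS)
    # Epsilon-greedy choice between delegating to the human or the AI.
    if random.random() < EPSILON:
        act = random.choice(ACTIONS)
    else:
        act = max(ACTIONS, key=lambda a: Q[(ctx, a)])
    # Reward: +1 if the delegated agent's sensing succeeds, -1 if it fails.
    reward = -1.0 if random.random() < FAIL_PROB[ctx][act] else 1.0
    # One-step (bandit-style) update; the paper's setting is sequential.
    Q[(ctx, act)] += ALPHA * (reward - Q[(ctx, act)])

for ctx in CONTEXTS:
    best = max(ACTIONS, key=lambda a: Q[(ctx, a)])
    print(f"{ctx:>6}: delegate to {best}")
```

After training, the greedy policy delegates to whichever agent is less likely
to suffer a sensing failure in each context, i.e., it recovers a
context-delegation association of the kind the abstract describes.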
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Imagining and building wise machines: The centrality of AI metacognition
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition remains underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Optimizing Risk-averse Human-AI Hybrid Teams
We propose a manager that learns, through a standard Reinforcement Learning scheme, how best to delegate.
We demonstrate the optimality of our manager's performance in several grid environments.
Our results show that our manager can learn desirable delegations that yield near-optimal or optimal team paths.
arXiv Detail & Related papers (2024-03-13T09:49:26Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Optimizing delegation between human and AI collaborative agents
We train a delegating manager agent to make delegation decisions with respect to potential performance deficiencies.
Our framework learns through observations of team performance without restricting agents to matching dynamics.
Our results show that our manager learns to make delegation decisions with teams of agents operating under differing representations of the environment.
arXiv Detail & Related papers (2023-09-26T07:23:26Z)
- Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction
We show that task performance and task satisfaction improve through AI delegation.
We identify humans' increased levels of self-efficacy as the underlying mechanism for these improvements.
Our findings provide initial evidence that allowing AI models to take over more management responsibilities can be an effective form of human-AI collaboration.
arXiv Detail & Related papers (2023-03-16T11:02:46Z)
- A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
- Balancing Performance and Human Autonomy with Implicit Guidance Agent
We show that implicit guidance is effective for enabling humans to maintain a balance between improving their plans and retaining autonomy.
We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms.
arXiv Detail & Related papers (2021-09-01T14:47:29Z)
- Learning Latent Representations to Influence Multi-Agent Interaction
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning- and learning-based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and the AI.
We show that confidence scores can help calibrate people's trust in an AI model, but that trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)