Compensating for Sensing Failures via Delegation in Human-AI Hybrid Systems
- URL: http://arxiv.org/abs/2303.01300v1
- Date: Thu, 2 Mar 2023 14:27:01 GMT
- Title: Compensating for Sensing Failures via Delegation in Human-AI Hybrid Systems
- Authors: Andrew Fuchs, Andrea Passarella, Marco Conti
- Abstract summary: We consider the hybrid human-AI teaming case where a managing agent is tasked with identifying when to perform a delegation assignment.
We model how the environmental context can contribute to, or exacerbate, the sensing deficiencies.
We demonstrate how a Reinforcement Learning (RL) manager can correct the context-delegation association.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given the increasing prevalence of intelligent systems capable of
autonomous action or of augmenting human activities, it is important to consider
scenarios in which the human, the autonomous system, or both can fail as a
result of one of several contributing factors (e.g., perception). Failures by
either humans or autonomous agents can range from reduced performance to
outcomes as severe as injury or death. For our topic, we consider the hybrid
human-AI teaming case where a managing agent is tasked with identifying when to
perform a delegation assignment and whether the human or the autonomous system
should gain control. In this context, the manager estimates its best action
based on the likelihood that either agent (human or autonomous) will fail as a
result of its sensing capabilities and possible deficiencies. We model how the
environmental context can contribute to, or exacerbate, these sensing
deficiencies. Such contexts provide cases in which the manager must learn to
associate each agent's capabilities with its suitability for decision-making. We
demonstrate how a Reinforcement Learning (RL) manager can learn the correct
context-delegation association and help the hybrid team of agents outperform the
behavior of any agent working in isolation.
Related papers
- MEReQ: Max-Ent Residual-Q Inverse RL for Sample-Efficient Alignment from Intervention [81.56607128684723]
We introduce MEReQ (Maximum-Entropy Residual-Q Inverse Reinforcement Learning), designed for sample-efficient alignment from human intervention.
MEReQ infers a residual reward function that captures the discrepancy between the human expert's and the prior policy's underlying reward functions.
It then employs Residual Q-Learning (RQL) to align the policy with human preferences using this residual reward function.
arXiv Detail & Related papers (2024-06-24T01:51:09Z)
- Bias Mitigation via Compensation: A Reinforcement Learning Perspective [1.5442389863546546]
Group dynamics might require that one agent (e.g., the AI system) compensate for biases and errors in another agent (e.g., the human)
We provide a theoretical framework for algorithmic compensation that synthesizes game theory and reinforcement learning principles.
This work then underpins our ethical analysis of the conditions in which AI agents should adapt to biases and behaviors of other agents.
arXiv Detail & Related papers (2024-04-30T04:41:47Z)
- Optimizing Risk-averse Human-AI Hybrid Teams [1.433758865948252]
We propose a manager which learns, through a standard Reinforcement Learning scheme, how to best delegate.
We demonstrate the optimality of our manager's performance in several grid environments.
Our results show our manager can successfully learn desirable delegation decisions that yield team paths that are near-optimal or exactly optimal.
arXiv Detail & Related papers (2024-03-13T09:49:26Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Optimizing delegation between human and AI collaborative agents [1.6114012813668932]
We train a delegating manager agent to make delegation decisions with respect to potential performance deficiencies.
Our framework learns through observations of team performance without restricting agents to matching dynamics.
Our results show our manager learns to perform delegation decisions with teams of agents operating under differing representations of the environment.
arXiv Detail & Related papers (2023-09-26T07:23:26Z)
- A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
- Balancing Performance and Human Autonomy with Implicit Guidance Agent [8.071506311915396]
We show that implicit guidance is effective for enabling humans to maintain a balance between improving their plans and retaining autonomy.
We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms.
arXiv Detail & Related papers (2021-09-01T14:47:29Z)
- Persistent Reinforcement Learning via Subgoal Curricula [114.83989499740193]
Value-accelerated Persistent Reinforcement Learning (VaPRL) generates a curriculum of initial states.
VaPRL reduces the interventions required by three orders of magnitude compared to episodic reinforcement learning.
arXiv Detail & Related papers (2021-07-27T16:39:45Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.