Compensating for Sensing Failures via Delegation in Human-AI Hybrid Systems
- URL: http://arxiv.org/abs/2303.01300v1
- Date: Thu, 2 Mar 2023 14:27:01 GMT
- Title: Compensating for Sensing Failures via Delegation in Human-AI Hybrid Systems
- Authors: Andrew Fuchs, Andrea Passarella, Marco Conti
- Abstract summary: We consider the hybrid human-AI teaming case where a managing agent is tasked with identifying when to perform a delegation assignment.
We model how the environmental context can contribute to, or exacerbate, the sensing deficiencies.
We demonstrate how a Reinforcement Learning (RL) manager can correct the context-delegation association.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given an increasing prevalence of intelligent systems capable of autonomous
actions or augmenting human activities, it is important to consider scenarios
in which the human, autonomous system, or both can exhibit failures as a result
of one of several contributing factors (e.g., perception). Failures of either
humans or autonomous agents can lead simply to a reduced performance level, or
to something as severe as injury or death. For our purposes, we
consider the hybrid human-AI teaming case where a managing agent is tasked with
identifying when to perform a delegation assignment and whether the human or
autonomous system should gain control. In this context, the manager estimates
its best action based on the likelihood that either agent (human or autonomous)
will fail as a result of its sensing capabilities and possible deficiencies. We
model how the environmental context can contribute to, or exacerbate, these
sensing deficiencies. Such contexts provide cases in which the manager must
learn to associate each agent's capabilities with its suitability for
decision-making. As such, we demonstrate how a Reinforcement Learning (RL)
manager can correct the context-delegation association and assist the hybrid
team of agents in outperforming the behavior of any agent working in isolation.
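To make the delegation mechanism concrete, here is a minimal sketch of the kind of manager the abstract describes: a tabular Q-learning agent that observes a discrete environmental context (conditions that degrade sensing) and learns whether the human or the autonomous agent is less likely to fail there. The contexts, failure probabilities, and reward scheme below are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

# Illustrative contexts and delegates; the paper's actual state and
# action spaces are richer than this sketch.
CONTEXTS = ["clear", "fog", "glare", "night"]
DELEGATES = ["human", "autonomous"]

# Assumed per-context failure probabilities: each agent's sensing
# degrades differently depending on the environmental context.
FAIL_PROB = {
    "human":      {"clear": 0.05, "fog": 0.20, "glare": 0.10, "night": 0.40},
    "autonomous": {"clear": 0.05, "fog": 0.35, "glare": 0.30, "night": 0.10},
}

def step(context, delegate):
    """Simulate one delegated decision: +1 on success, -1 on a sensing failure."""
    return -1.0 if random.random() < FAIL_PROB[delegate][context] else 1.0

# Tabular Q-learning over (context, delegate) pairs.
Q = defaultdict(float)
alpha, epsilon = 0.1, 0.1

for episode in range(20_000):
    context = random.choice(CONTEXTS)
    if random.random() < epsilon:    # explore
        delegate = random.choice(DELEGATES)
    else:                            # exploit the current estimate
        delegate = max(DELEGATES, key=lambda d: Q[(context, d)])
    reward = step(context, delegate)
    # One-step (bandit-style) update: contexts are drawn i.i.d. here,
    # so there is no bootstrapped next-state term.
    Q[(context, delegate)] += alpha * (reward - Q[(context, delegate)])

for c in CONTEXTS:
    best = max(DELEGATES, key=lambda d: Q[(c, d)])
    print(f"{c:>6}: delegate to {best}")
```

With these assumed probabilities, the manager should learn to hand fog and glare to the human and night operation to the autonomous agent, illustrating the paper's point that a learned context-delegation association lets the team outperform either agent working in isolation.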
Related papers
- Human Decision-making is Susceptible to AI-driven Manipulation [71.20729309185124]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Emergence of human-like polarization among large language model agents [61.622596148368906]
We simulate a networked system involving thousands of large language model agents and discover that their social interactions result in human-like polarization.
Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate it.
arXiv Detail & Related papers (2025-01-09T11:45:05Z) - Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration [51.452664740963066]
Collaborative Gym is a framework enabling asynchronous, tripartite interaction among agents, humans, and task environments.
We instantiate Co-Gym with three representative tasks in both simulated and real-world conditions.
Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in task performance.
arXiv Detail & Related papers (2024-12-20T09:21:15Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Optimizing Risk-averse Human-AI Hybrid Teams [1.433758865948252]
- Optimizing Risk-averse Human-AI Hybrid Teams [1.433758865948252]
We propose a manager that learns, through a standard Reinforcement Learning scheme, how best to delegate.
We demonstrate the optimality of our manager's performance in several grid environments.
Our results show that our manager successfully learns desirable delegations, which result in team paths that are near-optimal or exactly optimal.
arXiv Detail & Related papers (2024-03-13T09:49:26Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Optimizing delegation between human and AI collaborative agents [1.6114012813668932]
We train a delegating manager agent to make delegation decisions with respect to potential performance deficiencies.
Our framework learns through observations of team performance without restricting agents to matching dynamics.
Our results show that our manager learns to make delegation decisions for teams of agents operating under differing representations of the environment.
arXiv Detail & Related papers (2023-09-26T07:23:26Z) - Human-AI Collaboration: The Effect of AI Delegation on Human Task
Performance and Task Satisfaction [0.0]
We show that task performance and task satisfaction improve through AI delegation.
We identify humans' increased levels of self-efficacy as the underlying mechanism for these improvements.
Our findings provide initial evidence that allowing AI models to take over more management responsibilities can be an effective form of human-AI collaboration.
arXiv Detail & Related papers (2023-03-16T11:02:46Z) - A Cognitive Framework for Delegation Between Error-Prone AI and Human
Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z) - Balancing Performance and Human Autonomy with Implicit Guidance Agent [8.071506311915396]
We show that implicit guidance is effective for enabling humans to maintain a balance between improving their plans and retaining autonomy.
We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms.
arXiv Detail & Related papers (2021-09-01T14:47:29Z) - Watch-And-Help: A Challenge for Social Perception and Human-AI
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning- and learning-based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.