Balancing Performance and Human Autonomy with Implicit Guidance Agent
- URL: http://arxiv.org/abs/2109.00414v1
- Date: Wed, 1 Sep 2021 14:47:29 GMT
- Title: Balancing Performance and Human Autonomy with Implicit Guidance Agent
- Authors: Ryo Nakahashi and Seiji Yamada
- Abstract summary: We show that implicit guidance is effective for enabling humans to maintain a balance between improving their plans and retaining autonomy.
We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms.
- Score: 8.071506311915396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A human-agent team, in which humans and autonomous agents
collaborate to achieve a single task, is a typical setting in human-AI collaboration.
For effective collaboration, humans want to have an effective plan, but in
realistic situations, they might have difficulty calculating the best plan due
to cognitive limitations. In such cases, guidance from an agent with abundant
computational resources may be useful. However, if an agent guides human
behavior explicitly, the human may feel that they have lost autonomy and are
being controlled by the agent. We therefore investigated implicit guidance
offered by means of an agent's behavior. With this type of guidance, the agent
acts in a way that makes it easy for the human to find an effective plan for a
collaborative task, and the human can then improve the plan. Since the human
improves the plan voluntarily, they maintain autonomy. We modeled a
collaborative agent with implicit guidance by integrating the Bayesian Theory
of Mind into existing collaborative-planning algorithms and demonstrated
through a behavioral experiment that implicit guidance is effective for
enabling humans to maintain a balance between improving their plans and
retaining autonomy.
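The abstract does not include code, but a minimal sketch may make the method concrete: a Bayesian Theory of Mind component maintains a belief over which plan the human is currently pursuing, and the agent selects actions that trade off expected team value against how easy they make the better plan to recognize. All names here (GOALS, BETA, q_human, legibility_bonus) and the softmax human model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical discrete setup: two candidate joint plans and a softmax
# ("Boltzmann-rational") model of how the human chooses actions.
GOALS = ["plan_a", "plan_b"]   # candidate plans the human may pursue (assumed)
BETA = 2.0                     # human rationality parameter (assumed)

def human_action_likelihood(action, goal, q_human):
    """P(action | goal) under a noisily rational human (BToM assumption).

    q_human[goal] maps each action to its value under that plan."""
    qs = q_human[goal]
    actions = list(qs.keys())
    exp_q = np.exp(BETA * np.array([qs[a] for a in actions]))
    probs = exp_q / exp_q.sum()
    return probs[actions.index(action)]

def update_belief(belief, observed_action, q_human):
    """Bayesian update of the agent's belief over the human's current plan."""
    posterior = {g: belief[g] * human_action_likelihood(observed_action, g, q_human)
                 for g in GOALS}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

def choose_agent_action(belief, q_team, legibility_bonus):
    """Implicit guidance: maximize expected team value plus a bonus for
    actions that make the better plan easy for the human to recognize."""
    actions = list(q_team[GOALS[0]].keys())
    def score(a):
        return (sum(belief[g] * q_team[g][a] for g in GOALS)
                + legibility_bonus.get(a, 0.0))
    return max(actions, key=score)

# Toy usage: after seeing the human move "left", belief shifts toward plan_a.
q_human = {"plan_a": {"left": 1.0, "right": 0.0},
           "plan_b": {"left": 0.0, "right": 1.0}}
belief = update_belief({"plan_a": 0.5, "plan_b": 0.5}, "left", q_human)
print(belief)  # roughly {'plan_a': 0.88, 'plan_b': 0.12}
```

The point of the sketch is that guidance enters only through the agent's own action choices; the human is never instructed, so any plan improvement remains voluntary.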
Related papers
- Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z)
- On the Utility of Accounting for Human Beliefs about AI Intention in Human-AI Collaboration [9.371527955300323]
We develop a model of human beliefs that captures how humans interpret and reason about their AI partner's intentions.
We create an AI agent that incorporates both human behavior and human beliefs when devising its strategy for interacting with humans.
arXiv Detail & Related papers (2024-06-10T06:39:37Z)
- Mixed-Initiative Human-Robot Teaming under Suboptimality with Online Bayesian Adaptation [0.6591036379613505]
We develop computational modeling and optimization techniques for enhancing the performance of suboptimal human-agent teams.
We adopt an online Bayesian approach that enables a robot to infer people's willingness to comply with its assistance in a sequential decision-making game.
Our user studies show that user preferences and team performance indeed vary with robot intervention styles.
arXiv Detail & Related papers (2024-03-24T14:38:18Z)
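As a concrete illustration of the online Bayesian adaptation described in the entry above, one standard formulation treats each accepted or ignored suggestion as a Bernoulli draw under a Beta prior. The actual model in the paper may differ; all names here are hypothetical.

```python
# Minimal Beta-Bernoulli sketch of inferring a user's willingness to comply.
class ComplianceBelief:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # uniform Beta(1, 1) prior

    def observe(self, complied: bool):
        # Conjugate update: each accepted/rejected suggestion is a Bernoulli draw.
        if complied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def p_comply(self) -> float:
        return self.alpha / (self.alpha + self.beta)  # posterior mean

belief = ComplianceBelief()
for outcome in [True, True, False]:          # hypothetical interaction history
    belief.observe(outcome)
print(f"estimated compliance: {belief.p_comply:.2f}")  # 0.60
```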
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Investigating Agency of LLMs in Human-AI Collaboration Tasks [24.562034082480608]
We build on social-cognitive theory to develop a framework of features through which Agency is expressed in dialogue.
We collect a new dataset of 83 human-human collaborative interior design conversations.
arXiv Detail & Related papers (2023-05-22T08:17:14Z)
- Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experimenting online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior.
arXiv Detail & Related papers (2023-03-03T23:41:55Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
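A minimal sketch of the ensemble idea in the PECAN entry above: keep a population of partner policy primitives and score which primitive best explains the partner's observed behavior. The random primitives and the maximum-likelihood identification rule are assumptions for illustration, not PECAN's training procedure.

```python
import numpy as np

# Hypothetical ensemble of partner "policy primitives": each maps one of
# 4 discrete observations to a distribution over 3 discrete actions.
def make_primitive(rng):
    logits = rng.normal(size=(4, 3))
    return np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
ensemble = [make_primitive(rng) for _ in range(5)]

def identify_partner(history, ensemble):
    """Score each primitive by the log-likelihood of the partner's observed
    (observation, action) history; return the index of the best match."""
    scores = [sum(np.log(pi[obs, act]) for obs, act in history)
              for pi in ensemble]
    return int(np.argmax(scores))

history = [(0, 2), (1, 0), (3, 1)]            # hypothetical observed behavior
print("most likely partner primitive:", identify_partner(history, ensemble))
```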
- Robust Planning for Human-Robot Joint Tasks with Explicit Reasoning on Human Mental State [2.8246074016493457]
We consider the human-aware task planning problem where a human-robot team is given a shared task with a known objective to achieve.
Recent approaches tackle it by modeling it as a team of independent, rational agents, where the robot plans for both agents' (shared) tasks.
We describe a novel approach to solve such problems, which models and uses execution-time observability conventions.
arXiv Detail & Related papers (2022-10-17T09:21:00Z)
- A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
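The delegation scheme in the entry above can be illustrated with a toy intermediary that routes each decision to whichever agent's behavior model predicts the higher success probability. The predictor functions and the margin parameter below are hypothetical stand-ins for the paper's cognitively inspired models.

```python
# Hypothetical delegation rule: route each decision to the agent whose
# behavior model predicts the higher chance of success in this state.
def delegate(state, predict_human_success, predict_ai_success, margin=0.05):
    p_h = predict_human_success(state)
    p_ai = predict_ai_success(state)
    # Prefer the human unless the AI model is clearly better (assumed tie-break).
    return "ai" if p_ai > p_h + margin else "human"

# Toy stand-in models for illustration only.
who = delegate(
    state={"difficulty": 0.8},
    predict_human_success=lambda s: 1.0 - s["difficulty"],
    predict_ai_success=lambda s: 0.7,
)
print(who)  # "ai"
```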
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.