Planning for Proactive Assistance in Environments with Partial
Observability
- URL: http://arxiv.org/abs/2105.00525v1
- Date: Sun, 2 May 2021 18:12:06 GMT
- Title: Planning for Proactive Assistance in Environments with Partial
Observability
- Authors: Anagha Kulkarni, Siddharth Srivastava and Subbarao Kambhampati
- Abstract summary: This paper addresses the problem of synthesizing the behavior of an AI agent that provides proactive task assistance to a human.
It is crucial for the agent to ensure that the human is aware of how the assistance affects her task.
- Score: 26.895668587111757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the problem of synthesizing the behavior of an AI agent
that provides proactive task assistance to a human in settings like factory
floors where they may coexist in a common environment. Unlike in the case of
requested assistance, the human may not be expecting proactive assistance and
hence it is crucial for the agent to ensure that the human is aware of how the
assistance affects her task. This becomes harder when there is a possibility
that the human may neither have full knowledge of the AI agent's capabilities
nor have full observability of its activities. Therefore, our proactive
assistant is guided by the following three principles: (1) its activity
decreases the human's cost towards her goal; (2) the human is able to
recognize the potential reduction in her cost; (3) its activity optimizes
the human's overall cost (time/resources) of achieving her goal. Through
empirical evaluation and user studies, we demonstrate the usefulness of our
approach.
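The three principles can be read as filters over candidate assistive actions: keep only actions that truly reduce the human's cost and whose reduction the human can recognize under her partial view, then pick the best among them. A minimal sketch follows; the cost functions, action names, and the way recognizability is modeled are illustrative assumptions, not the paper's actual formulation:

```python
# Illustrative sketch of the three principles as a selection rule.
# human_cost and human_estimated_cost are hypothetical stand-ins for the
# paper's cost and observation models.

def choose_assistive_action(actions, human_cost, human_estimated_cost):
    """Select an assistive action satisfying all three principles.

    human_cost(a): the human's true remaining cost to her goal if the
        agent performs action a (None means no assistance).
    human_estimated_cost(a): the cost the human *believes* remains after
        observing a -- this models her partial observability of the
        agent's activity.
    """
    baseline = human_cost(None)                  # cost with no assistance
    believed_baseline = human_estimated_cost(None)

    candidates = [
        a for a in actions
        if human_cost(a) < baseline                       # (1) truly reduces cost
        and human_estimated_cost(a) < believed_baseline   # (2) reduction is recognizable
    ]
    if not candidates:
        return None                                       # no admissible assistance
    return min(candidates, key=human_cost)                # (3) optimize overall cost

# Toy usage with made-up costs. "hidden_prep" helps the most but is
# unobservable to the human, so it fails principle (2).
true_cost = {None: 10, "fetch_tool": 6, "hidden_prep": 4, "move_box": 7}
believed = {None: 10, "fetch_tool": 7, "hidden_prep": 10, "move_box": 8}

best = choose_assistive_action(
    ["fetch_tool", "hidden_prep", "move_box"],
    lambda a: true_cost[a],
    lambda a: believed[a],
)
print(best)  # "fetch_tool": cheapest among actions the human can recognize
```

Note how the recognizability filter is what distinguishes proactive from requested assistance in this reading: a cheaper but unobservable action is rejected because the human cannot account for it in her own plan.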
Related papers
- Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z)
- Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge [47.74313897705183]
CHAIC is an inclusive embodied social intelligence challenge designed to test social perception and cooperation in embodied agents.
In CHAIC, the goal is for an embodied agent equipped with egocentric observations to assist a human who may be operating under physical constraints.
We benchmark planning- and learning-based baselines on the challenge and introduce a new method that leverages large language models and behavior modeling.
arXiv Detail & Related papers (2024-11-04T04:41:12Z) - Towards Human-centered Proactive Conversational Agents [60.57226361075793]
The distinction between a proactive and a reactive system lies in the proactive system's initiative-taking nature.
We establish a new taxonomy concerning three key dimensions of human-centered PCAs, namely Intelligence, Adaptivity, and Civility.
arXiv Detail & Related papers (2024-04-19T07:14:31Z) - Smart Help: Strategic Opponent Modeling for Proactive and Adaptive Robot Assistance in Households [30.33911147366425]
Smart Help aims to provide proactive yet adaptive support to human agents with diverse disabilities.
We introduce an innovative opponent modeling module that provides a nuanced understanding of the main agent's capabilities and goals.
Our findings illustrate the potential of AI-imbued assistive robots in improving the well-being of vulnerable groups.
arXiv Detail & Related papers (2024-04-13T13:03:59Z) - On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives [1.29622145730471]
Human oversight is currently discussed as a potential safeguard to counter some of the negative aspects of high-risk AI applications.
This paper investigates effective human oversight by synthesizing insights from psychological, legal, philosophical, and technical domains.
arXiv Detail & Related papers (2024-04-05T12:31:19Z)
- Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback [42.19685958922537]
We argue that human-AI collaboration should be interactive, with humans monitoring the work of AI agents and providing feedback that the agent can understand and utilize.
In this work, we explore these directions using the challenging task defined by the IGLU competition, an interactive grounded language understanding task in a Minecraft-like world.
arXiv Detail & Related papers (2023-04-21T05:37:59Z)
- Reinforcement Learning with Efficient Active Feature Acquisition [59.91808801541007]
In real-life, information acquisition might correspond to performing a medical test on a patient.
We propose a model-based reinforcement learning framework that learns an active feature acquisition policy.
Key to the success is a novel sequential variational auto-encoder that learns high-quality representations from partially observed states.
arXiv Detail & Related papers (2020-11-02T08:46:27Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- AvE: Assistance via Empowerment [77.08882807208461]
We propose a new paradigm for assistance by instead increasing the human's ability to control their environment.
This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state.
arXiv Detail & Related papers (2020-06-26T04:40:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.