Adaptation and Communication in Human-Robot Teaming to Handle
Discrepancies in Agents' Beliefs about Plans
- URL: http://arxiv.org/abs/2307.03362v1
- Date: Fri, 7 Jul 2023 03:05:34 GMT
- Title: Adaptation and Communication in Human-Robot Teaming to Handle
Discrepancies in Agents' Beliefs about Plans
- Authors: Yuening Zhang, Brian C. Williams
- Abstract summary: We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action.
We show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
- Score: 13.637799815698559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When agents collaborate on a task, it is important that they have some shared
mental model of the task routines -- the set of feasible plans towards
achieving the goals. However, in reality, situations often arise in which such a
shared mental model cannot be guaranteed, such as in ad-hoc teams where agents
may follow different conventions, or when contingent constraints arise that only
some agents are aware of. Previous work on human-robot teaming has assumed that
the team has a set of shared routines, an assumption that breaks down in these
situations.
In this work, we leverage epistemic logic to enable agents to understand the
discrepancy in each other's beliefs about feasible plans and dynamically plan
their actions to adapt or communicate to resolve the discrepancy. We propose a
formalism that extends conditional doxastic logic to describe knowledge bases
in order to explicitly represent agents' nested beliefs on the feasible plans
and state of execution. We provide an online execution algorithm based on Monte
Carlo Tree Search for the agent to plan its action, including communication
actions to explain the feasibility of plans, announce intent, and ask
questions. Finally, we evaluate the success rate and scalability of the
algorithm and show that our agent is better equipped to work in teams without
the guarantee of a shared mental model.
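The paper itself includes no code here, but as a rough, hypothetical illustration of the kind of online MCTS execution loop the abstract describes, the sketch below treats communication as just another action available to the planner. The State fields, the two-action set ("act" vs. "announce"), and all reward numbers are invented stand-ins for the paper's nested-belief states and communication actions, not the authors' formulation.

```python
# Minimal UCT sketch (illustrative only, not the authors' implementation).
# "aligned" stands in for the nested-belief discrepancy: whether the
# teammate's belief about the feasible plan agrees with the agent's own.
import math
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    step: int       # progress through the joint task
    aligned: bool   # do the agents agree on which plan is feasible?

HORIZON = 4
ACTIONS = ("act", "announce")  # physical action vs. communication action

def transition(s, a):
    """Deterministic toy dynamics: announcing intent aligns beliefs at a
    small cost; acting under discrepant beliefs fails."""
    if a == "announce":
        return State(s.step, True), -0.1
    if not s.aligned:
        return s, -1.0
    return State(s.step + 1, True), 1.0

def terminal(s):
    return s.step >= HORIZON

class Node:
    def __init__(self, state):
        self.state, self.children, self.n, self.q = state, {}, 0, 0.0

def uct_child(node, c=1.4):
    """Pick the child maximizing the UCB1 score."""
    return max(node.children.items(),
               key=lambda kv: kv[1].q / (kv[1].n + 1e-9)
               + c * math.sqrt(math.log(node.n + 1) / (kv[1].n + 1e-9)))

def rollout(s, depth=12):
    """Random playout to estimate the value of a newly expanded state."""
    total = 0.0
    while not terminal(s) and depth > 0:
        s, r = transition(s, random.choice(ACTIONS))
        total += r
        depth -= 1
    return total

def simulate(node):
    """One MCTS iteration: selection, expansion, rollout, backpropagation."""
    if terminal(node.state):
        return 0.0
    untried = [a for a in ACTIONS if a not in node.children]
    if untried:                                  # expansion
        a = random.choice(untried)
        s2, r = transition(node.state, a)
        child = node.children[a] = Node(s2)
        value = r + rollout(s2)
    else:                                        # selection + recursion
        a, child = uct_child(node)
        _, r = transition(node.state, a)
        value = r + simulate(child)
    child.n += 1; child.q += value               # backpropagation
    node.n += 1
    return value

def plan(state, budget=2000):
    """One online planning step: return the most-visited root action."""
    root = Node(state)
    for _ in range(budget):
        simulate(root)
    return max(root.children, key=lambda a: root.children[a].n)

print(plan(State(step=0, aligned=False)))  # typically prints "announce"
```

With a discrepancy present (aligned=False), the search consistently prefers paying the small communication cost before acting, which is the qualitative behavior the abstract argues for.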
Related papers
- Ask-before-Plan: Proactive Language Agents for Real-World Planning [68.08024918064503]
Proactive Agent Planning requires language agents to predict clarification needs based on user-agent conversation and agent-environment interaction.
We propose a novel multi-agent framework, Clarification-Execution-Planning (CEP), which consists of three agents specialized in clarification, execution, and planning.
arXiv Detail & Related papers (2024-06-18T14:07:28Z)
- Anticipate & Collab: Data-driven Task Anticipation and Knowledge-driven Planning for Human-robot Collaboration [13.631341660350028]
An agent assisting humans in daily living activities can collaborate more effectively by anticipating upcoming tasks.
Data-driven methods represent the state of the art in task anticipation, planning, and related problems, but these methods are resource-hungry and opaque.
This paper describes DaTAPlan, our framework that significantly extends our prior work toward human-robot collaboration.
arXiv Detail & Related papers (2024-04-04T16:52:48Z)
- Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning [52.91457780361305]
This paper introduces cooperative language-guided inverse plan search (CLIPS).
Our agent assists a human by modeling them as a cooperative planner who communicates joint plans to the assistant.
We evaluate these capabilities in two cooperative planning domains (Doors, Keys & Gems and VirtualHome).
arXiv Detail & Related papers (2024-02-27T23:06:53Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Anticipating Responsibility in Multiagent Planning [9.686474898346392]
Responsibility anticipation is the process of determining whether the actions of an individual agent may cause it to be responsible for a particular outcome.
This can be used in a multi-agent planning setting to allow agents to anticipate responsibility in the plans they consider.
arXiv Detail & Related papers (2023-07-31T13:58:49Z)
- Inferring the Goals of Communicating Agents from Actions and Instructions [47.5816320484482]
We introduce a model of a cooperative team where one agent, the principal, may communicate natural language instructions about their shared plan to another agent, the assistant.
We show how a third-person observer can infer the team's goal via multi-modal inverse planning from actions and instructions.
We evaluate this approach by comparing it with human goal inferences in a multi-agent gridworld, finding that our model's inferences closely correlate with human judgments.
arXiv Detail & Related papers (2023-06-28T13:43:46Z)
- Robust Planning for Human-Robot Joint Tasks with Explicit Reasoning on Human Mental State [2.8246074016493457]
We consider the human-aware task planning problem where a human-robot team is given a shared task with a known objective to achieve.
Recent approaches tackle this problem by modeling the team as independent, rational agents, where the robot plans for both agents' (shared) tasks.
We describe a novel approach to solve such problems, which models and uses execution-time observability conventions.
arXiv Detail & Related papers (2022-10-17T09:21:00Z)
- Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested Belief [27.524600740450126]
We plan from the perspective of a single agent with the potential for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability for one agent to reason as if it were another.
Our approach represents an important step towards applying the well-established field of automated planning to the challenging task of planning involving nested beliefs of multiple agents.
arXiv Detail & Related papers (2021-10-06T03:24:01Z)
- Collaborative Human-Agent Planning for Resilience [5.2123460114614435]
We investigate whether people can collaborate with agents by providing their knowledge to an agent using linear temporal logic (LTL) at run-time.
We presented 24 participants with baseline plans for situations in which a planner had limitations, and asked them for workarounds for these limitations.
Results show that participants' constraints improved the expected return of the plans by 10%.
arXiv Detail & Related papers (2021-04-29T03:21:31Z)
- Too many cooks: Bayesian inference for coordinating multi-agent collaboration [55.330547895131986]
Collaboration requires agents to coordinate their behavior on the fly.
Underlying the human ability to collaborate is theory-of-mind, the ability to infer the hidden mental states that drive others to act.
We develop Bayesian Delegation, a decentralized multi-agent learning mechanism with these abilities; a minimal goal-inference sketch in this spirit appears after this list.
arXiv Detail & Related papers (2020-03-26T07:43:13Z)
- Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous [66.6895109554163]
Underlying the human ability to align goals with other agents is their ability to predict the intentions of others and actively update their own plans.
We propose hierarchical predictive planning (HPP), a model-based reinforcement learning method for decentralized multiagent rendezvous.
arXiv Detail & Related papers (2020-03-15T19:49:20Z)
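Several entries above (CLIPS, "Inferring the Goals of Communicating Agents", and Bayesian Delegation) share a common core: Bayesian inverse planning, i.e., maintaining a posterior over a teammate's hidden goal and updating it from observed actions under a noisily-rational action model. Below is a minimal, self-contained sketch of that update; the goals, actions, and values are invented for this example and are not taken from any of the papers.

```python
# Minimal Bayesian goal-inference sketch (illustrative; not from any paper above).
import math

GOALS = ("left", "right")              # hypothetical goal set
ACTIONS = ("step_left", "step_right")  # hypothetical action set

def q_value(goal, action):
    """Toy action value: stepping toward the goal is worth 1, away 0."""
    return 1.0 if action == "step_" + goal else 0.0

def likelihood(action, goal, beta=3.0):
    """Boltzmann-rational action likelihood P(action | goal)."""
    z = sum(math.exp(beta * q_value(goal, a)) for a in ACTIONS)
    return math.exp(beta * q_value(goal, action)) / z

def update(posterior, action):
    """One Bayes step: P(goal | action) is proportional to P(action | goal) * P(goal)."""
    unnorm = {g: posterior[g] * likelihood(action, g) for g in GOALS}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

belief = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior over goals
for observed in ("step_left", "step_left"):
    belief = update(belief, observed)
print(belief)  # posterior mass concentrates on "left"
```

After two observed steps toward the left, the posterior concentrates on "left"; in the papers above, a posterior of this kind then feeds a planner that chooses assistive or coordinated actions.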