Mixed-Initiative Human-Robot Teaming under Suboptimality with Online Bayesian Adaptation
- URL: http://arxiv.org/abs/2403.16178v1
- Date: Sun, 24 Mar 2024 14:38:18 GMT
- Title: Mixed-Initiative Human-Robot Teaming under Suboptimality with Online Bayesian Adaptation
- Authors: Manisha Natarajan, Chunyue Xue, Sanne van Waveren, Karen Feigh, Matthew Gombolay
- Abstract summary: We develop computational modeling and optimization techniques for enhancing the performance of suboptimal human-agent teams.
We adopt an online Bayesian approach that enables a robot to infer people's willingness to comply with its assistance in a sequential decision-making game.
Our user studies show that user preferences and team performance indeed vary with robot intervention styles.
- Score: 0.6591036379613505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For effective human-agent teaming, robots and other artificial intelligence (AI) agents must infer their human partner's abilities and behavioral response patterns and adapt accordingly. Most prior works make the unrealistic assumption that one or more teammates can act near-optimally. In real-world collaboration, humans and autonomous agents can be suboptimal, especially when each only has partial domain knowledge. In this work, we develop computational modeling and optimization techniques for enhancing the performance of suboptimal human-agent teams, where the human and the agent have asymmetric capabilities and act suboptimally due to incomplete environmental knowledge. We adopt an online Bayesian approach that enables a robot to infer people's willingness to comply with its assistance in a sequential decision-making game. Our user studies show that user preferences and team performance indeed vary with robot intervention styles, and our approach for mixed-initiative collaborations enhances objective team performance ($p<.001$) and subjective measures, such as user's trust ($p<.001$) and perceived likeability of the robot ($p<.001$).
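The abstract describes an online Bayesian approach for inferring a user's willingness to comply with robot assistance. As a minimal illustrative sketch (not the paper's actual implementation), a Beta-Bernoulli model can maintain such an estimate online, updating a posterior over the compliance probability after each intervention; the class name and prior are assumptions for illustration:

```python
# Hypothetical sketch of online Bayesian compliance inference: a Beta-Bernoulli
# model updated after each robot intervention, where complied=True means the
# user followed the robot's suggestion. Not the paper's actual method.

class ComplianceEstimator:
    """Tracks a Beta posterior over the user's probability of complying."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(1, 1) is a uniform prior over the compliance probability.
        self.alpha = alpha
        self.beta = beta

    def update(self, complied: bool) -> None:
        # Conjugate update: compliance increments alpha, refusal increments beta.
        if complied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        # Posterior mean estimate of the compliance probability.
        return self.alpha / (self.alpha + self.beta)


est = ComplianceEstimator()
for outcome in [True, True, False, True]:
    est.update(outcome)
print(round(est.mean(), 3))  # Beta(4, 2) posterior -> mean 4/6 ≈ 0.667
```

The robot could condition its intervention style on this estimate, e.g. intervening more directly only when the posterior mean of compliance is high.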
Related papers
- Dreaming to Assist: Learning to Align with Human Objectives for Shared Control in High-Speed Racing [10.947581892636629]
Tight coordination is required for effective human-robot teams in domains involving fast dynamics and tactical decisions.
We present Dream2Assist, a framework that combines a rich world model able to infer human objectives and value functions.
We show that the combined human-robot team, when blending its actions with those of the human, outperforms the synthetic humans alone.
arXiv Detail & Related papers (2024-10-14T01:00:46Z)
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experimenting online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior.
arXiv Detail & Related papers (2023-03-03T23:41:55Z) - Intuitive and Efficient Human-robot Collaboration via Real-time
Approximate Bayesian Inference [4.310882094628194]
Collaborative robots and end-to-end AI, promises flexible automation of human tasks in factories and warehouses.
Humans and cobots will collaborate helping each other.
For these collaborations to be effective and safe, robots need to model, predict and exploit human's intents.
arXiv Detail & Related papers (2022-05-17T23:04:44Z) - Adaptive Agent Architecture for Real-time Human-Agent Teaming [3.284216428330814]
It is critical that agents infer human intent and adapt their polices for smooth coordination.
Most literature in human-agent teaming builds agents referencing a learned human model.
We propose a novel adaptive agent architecture in human-model-free setting on a two-player cooperative game.
arXiv Detail & Related papers (2021-03-07T20:08:09Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Getting to Know One Another: Calibrating Intent, Capabilities and Trust
for Human-Robot Collaboration [13.895990928770459]
We focus on scenarios where the robot is attempting to assist a human who is unable to directly communicate her intent.
We adopt a decision-theoretic approach and propose the TICC-POMDP for modeling this setting.
Experiments show our approach leads to better team performance both in simulation and in a real-world study with human subjects.
arXiv Detail & Related papers (2020-08-03T08:04:15Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.