Online Learning of Human Constraints from Feedback in Shared Autonomy
- URL: http://arxiv.org/abs/2403.02974v1
- Date: Tue, 5 Mar 2024 13:53:48 GMT
- Title: Online Learning of Human Constraints from Feedback in Shared Autonomy
- Authors: Shibei Zhu, Tran Nguyen Le, Samuel Kaski, Ville Kyrki
- Abstract summary: Real-time collaboration with humans poses challenges due to the different behavior patterns of humans resulting from diverse physical constraints.
We learn a human constraints model that considers the diverse behaviors of different human operators.
We propose an augmentative assistant agent capable of learning and adapting to human physical constraints.
- Score: 25.173950581816086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time collaboration with humans poses challenges due to the different
behavior patterns of humans resulting from diverse physical constraints.
Existing works typically focus on learning safety constraints for
collaboration, or how to divide and distribute the subtasks between the
participating agents to carry out the main task. In contrast, we propose to
learn a human constraints model that, in addition, considers the diverse
behaviors of different human operators. We consider collaboration in a
shared-autonomy fashion, where a human operator and an assistive robot act
simultaneously in the same task space and their actions affect each other.
The task of the assistive agent is to augment the human's skill in performing
the shared task by supporting the human as much as possible, both by reducing
the workload and by minimizing the discomfort of the human operator. Therefore,
we propose an augmentative assistant agent capable of learning and adapting to
human physical constraints, aligning its actions with the ergonomic preferences
and limitations of the human operator.
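The listing does not spell out the algorithm, so the following is only a minimal sketch of the idea described in the abstract: an assistant that updates a constraint model online from the operator's discomfort feedback and then filters its candidate actions with it. The names (HumanConstraintModel, assistive_action, the feature and effort functions) and the update rule are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

class HumanConstraintModel:
    """Predicts how likely a state-action pair is to violate the operator's
    physical/ergonomic constraints, learned online from binary feedback."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)   # linear weights over hand-crafted features
        self.lr = lr

    def violation_prob(self, features):
        return 1.0 / (1.0 + np.exp(-features @ self.w))

    def update(self, features, violated):
        # One online logistic-regression step on the operator's feedback
        # (violated = 1 if the operator signalled discomfort).
        err = self.violation_prob(features) - float(violated)
        self.w -= self.lr * err * features

def assistive_action(candidates, feature_fn, effort_fn, model, threshold=0.2):
    """Choose the robot action minimizing predicted human effort among the
    candidates whose predicted constraint violation stays below threshold."""
    feasible = [a for a in candidates
                if model.violation_prob(feature_fn(a)) < threshold]
    pool = feasible if feasible else candidates  # fall back if none is feasible
    return min(pool, key=effort_fn)
```

In an interaction loop, the model would be updated whenever the operator signals discomfort, and the assistive action re-selected at each step.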
Related papers
- Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge [47.74313897705183]
CHAIC is an inclusive embodied social intelligence challenge designed to test social perception and cooperation in embodied agents.
In CHAIC, the goal is for an embodied agent equipped with egocentric observations to assist a human who may be operating under physical constraints.
We benchmark planning- and learning-based baselines on the challenge and introduce a new method that leverages large language models and behavior modeling.
arXiv Detail & Related papers (2024-11-04T04:41:12Z)
- CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics [44.30880626337739]
CooHOI is a framework designed to tackle the multi-humanoid object transportation problem.
A single humanoid character learns to interact with objects through imitation learning from human motion priors.
Then, the humanoid learns to collaborate with others by considering the shared dynamics of the manipulated object.
arXiv Detail & Related papers (2024-06-20T17:59:22Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
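As a loose, hypothetical sketch of the ensemble-plus-identification idea summarized in the entry above (not PECAN's actual training or architecture): keep a library of candidate partner policies, maintain a posterior over them from the partner's observed actions, and condition the ego agent on the most likely one. The policy interfaces here are placeholders.

```python
import numpy as np

def update_partner_posterior(posterior, partner_models, state, partner_action):
    """Bayesian update: p(model) proportional to prior * p(action | state, model)."""
    likelihoods = np.array([m.action_prob(state, partner_action)  # placeholder interface
                            for m in partner_models])
    posterior = posterior * likelihoods + 1e-12
    return posterior / posterior.sum()

def ego_action(ego_policies, posterior, state):
    """Act with the ego policy trained against the most probable partner type."""
    return ego_policies[int(np.argmax(posterior))](state)
```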
- Coordination with Humans via Strategy Matching [5.072077366588174]
We present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task.
By transforming team actions into low-dimensional representations using hidden Markov models, we can identify strategies without prior knowledge.
Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners.
arXiv Detail & Related papers (2022-10-27T01:00:50Z)
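A rough sketch of the recognition step described in the entry above, assuming a GaussianHMM from the hmmlearn package as a stand-in for the paper's model; the feature encoding of team actions and the per-strategy expert policies are placeholders.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

def fit_strategy_model(demo_sequences, n_strategies=3):
    """Fit an HMM on concatenated features of human-human team actions."""
    X = np.concatenate(demo_sequences)             # shape (sum_T, feature_dim)
    lengths = [len(seq) for seq in demo_sequences]
    hmm = GaussianHMM(n_components=n_strategies, covariance_type="diag", n_iter=100)
    hmm.fit(X, lengths)
    return hmm

def infer_strategy(hmm, observed_actions):
    """Label recent partner behaviour with its most frequent hidden state."""
    states = hmm.predict(np.asarray(observed_actions))
    return int(np.bincount(states).argmax())

def expert_action(experts, hmm, observed_actions, current_state):
    """Route to the robot policy trained for the inferred strategy (Mixture-of-Experts)."""
    return experts[infer_strategy(hmm, observed_actions)](current_state)
```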
- Learning Action Duration and Synergy in Task Planning for Human-Robot Collaboration [6.373435464104705]
The duration of an action depends on agents' capabilities and the correlation between actions performed simultaneously by the human and the robot.
This paper proposes an approach to learning actions' costs and coupling between actions executed concurrently by humans and robots.
arXiv Detail & Related papers (2022-10-21T01:08:11Z)
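The entry above describes learning action costs and the coupling between concurrently executed actions; the snippet below is only an illustrative running-mean estimator of that idea, not the paper's model.

```python
from collections import defaultdict

class DurationModel:
    """Running-mean estimate of how long an action takes, conditioned on what
    the other agent is executing at the same time."""

    def __init__(self, default=10.0):
        self.stats = defaultdict(lambda: [0.0, 0])  # (human, robot) -> [sum, count]
        self.default = default

    def observe(self, human_action, robot_action, duration):
        s = self.stats[(human_action, robot_action)]
        s[0] += duration
        s[1] += 1

    def expected_duration(self, human_action, robot_action):
        total, count = self.stats[(human_action, robot_action)]
        return total / count if count else self.default
```

A task planner could then use expected_duration() as the cost of scheduling two actions concurrently, which captures their synergy or conflict.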
- Increased Complexity of a Human-Robot Collaborative Task May Increase the Need for a Socially Competent Robot [0.0]
This study investigates how task complexity affects human perception and acceptance of their robot partner.
We propose a human-based robot control model for obstacle avoidance that can account for the leader-follower dynamics.
arXiv Detail & Related papers (2022-07-11T11:43:27Z)
- Dynamic Human-Robot Role Allocation based on Human Ergonomics Risk Prediction and Robot Actions Adaptation [35.91053423341299]
We propose a novel method that optimizes assembly strategies and distributes the effort among the workers in human-robot cooperative tasks.
The proposed approach succeeds in controlling the task allocation process to ensure safe and ergonomic conditions for the human worker.
arXiv Detail & Related papers (2021-11-05T17:29:41Z)
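A hypothetical sketch of the ergonomics-aware allocation idea from the entry above: each subtask goes to whichever agent currently has the lower combined cost of execution time and, for the human, predicted ergonomic risk. The cost functions and weighting are illustrative assumptions, not the paper's optimizer.

```python
def allocate_tasks(tasks, robot_time, human_time, ergonomic_risk, risk_weight=2.0):
    """Greedy allocation balancing completion time and accumulated human load."""
    plan, human_load, robot_load = [], 0.0, 0.0
    for task in tasks:
        human_cost = human_load + human_time(task) + risk_weight * ergonomic_risk(task)
        robot_cost = robot_load + robot_time(task)
        if robot_cost <= human_cost:
            plan.append((task, "robot"))
            robot_load += robot_time(task)
        else:
            plan.append((task, "human"))
            human_load += human_time(task)
    return plan
```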
- Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration [51.268988527778276]
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations.
Our method co-optimizes a human policy and a robot policy in an interactive learning process.
arXiv Detail & Related papers (2021-08-13T03:14:43Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
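As a toy illustration of the mind-modeling idea in the entry above (the paper's hierarchical model is not reproduced here), a robot could keep a belief over the human's current subgoal and emit an explanation when its own plan appears to diverge from that belief; the structure and threshold below are purely hypothetical.

```python
def maybe_explain(human_subgoal_belief, robot_subgoal, threshold=0.5):
    """Return an explanation string if the human likely expects a different subgoal."""
    inferred, confidence = max(human_subgoal_belief.items(), key=lambda kv: kv[1])
    if inferred != robot_subgoal and confidence > threshold:
        return (f"I am working on '{robot_subgoal}' first because it unblocks the next "
                f"step; '{inferred}' can be handled afterwards.")
    return None
```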
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.