Learning Action Duration and Synergy in Task Planning for Human-Robot
Collaboration
- URL: http://arxiv.org/abs/2210.11660v1
- Date: Fri, 21 Oct 2022 01:08:11 GMT
- Title: Learning Action Duration and Synergy in Task Planning for Human-Robot
Collaboration
- Authors: Samuele Sandrini and Marco Faroni and Nicola Pedrocchi
- Abstract summary: The duration of an action depends on agents' capabilities and the correlation between actions performed simultaneously by the human and the robot.
This paper proposes an approach to learning actions' costs and coupling between actions executed concurrently by humans and robots.
- Score: 6.373435464104705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A good estimation of the actions' cost is key in task planning for
human-robot collaboration. The duration of an action depends on agents'
capabilities and the correlation between actions performed simultaneously by
the human and the robot. This paper proposes an approach to learning actions'
costs and coupling between actions executed concurrently by humans and robots.
We leverage the information from past executions to learn the average duration
of each action and a synergy coefficient representing the effect of an action
performed by the human on the duration of the action performed by the robot
(and vice versa). We implement the proposed method in a simulated scenario
where both agents can access the same area simultaneously. Safety measures
require the robot to slow down when the human is close, denoting a bad synergy
of tasks operating in the same area. We show that our approach can learn such
bad couplings so that a task planner can leverage this information to find
better plans.
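The abstract's core idea is an estimator learned from logged executions: a nominal (average) duration per action, plus a synergy coefficient per action pair that scales the robot's duration when a given human action runs concurrently. The sketch below is a minimal, illustrative implementation of that idea; all class and method names are assumptions, not from the paper, and the paper's actual estimator may differ.

```python
from collections import defaultdict

class SynergyModel:
    """Running estimates of action durations and pairwise synergy.

    Illustrative sketch: the nominal duration of an action is its mean
    over solo executions; the synergy coefficient for (action, concurrent
    action of the other agent) is the mean ratio of observed to nominal
    duration when the two actions overlap. A coefficient > 1 denotes bad
    synergy (e.g., the robot slowing down near the human).
    """

    def __init__(self):
        self._dur_sum = defaultdict(float)  # action -> total solo duration
        self._dur_n = defaultdict(int)      # action -> number of solo runs
        self._syn_sum = defaultdict(float)  # (action, concurrent) -> ratio sum
        self._syn_n = defaultdict(int)      # (action, concurrent) -> sample count

    def record(self, action, duration, concurrent=None):
        """Log one execution; `concurrent` is the other agent's action, if any."""
        if concurrent is None:
            self._dur_sum[action] += duration
            self._dur_n[action] += 1
        else:
            nominal = self.duration(action)
            if nominal is not None:  # need a solo baseline before ratios make sense
                key = (action, concurrent)
                self._syn_sum[key] += duration / nominal
                self._syn_n[key] += 1

    def duration(self, action):
        """Average solo duration, or None if the action was never observed alone."""
        n = self._dur_n[action]
        return self._dur_sum[action] / n if n else None

    def synergy(self, action, concurrent):
        """Learned synergy coefficient; defaults to 1.0 (no effect) if unseen."""
        n = self._syn_n[(action, concurrent)]
        return self._syn_sum[(action, concurrent)] / n if n else 1.0

    def expected_cost(self, action, concurrent=None):
        """Duration estimate a task planner could use as the action's cost."""
        base = self.duration(action)
        if base is None:
            return None
        return base if concurrent is None else base * self.synergy(action, concurrent)
```

For example, if a robot `pick` takes 4 s alone but 6 s while the human reaches into the same area, the model learns a coefficient of 1.5 for that pair, and a planner querying `expected_cost` would see the penalized duration and can prefer plans that avoid the bad coupling.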
Related papers
- An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions [8.309981857034902]
The aim is to build a robot policy that accounts for uncontrollable human behaviors.
We propose a novel planning framework and build a solver based on AND-OR search.
Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2024-09-27T08:27:36Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Optimal task and motion planning and execution for human-robot multi-agent systems in dynamic environments [54.39292848359306]
We propose a combined task and motion planning approach to optimize sequencing, assignment, and execution of tasks.
The framework relies on decoupling tasks and actions, where an action is one possible geometric realization of a symbolic task.
We demonstrate the approach effectiveness in a collaborative manufacturing scenario, in which a robotic arm and a human worker shall assemble a mosaic.
arXiv Detail & Related papers (2023-03-27T01:50:45Z)
- Preemptive Motion Planning for Human-to-Robot Indirect Placement Handovers [12.827398121150386]
Human-to-robot handovers can take either of two approaches: (1) direct hand-to-hand or (2) indirect hand-to-placement-to-pick-up. The indirect approach leaves the robot idle while it waits for the human to place the object.
To minimize such idle time, the robot must preemptively predict where the human intends to place the object.
We introduce a novel prediction-planning pipeline that allows the robot to preemptively move towards the human agent's intended placement location.
arXiv Detail & Related papers (2022-03-01T00:21:39Z)
- Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration [51.268988527778276]
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations.
Our method co-optimizes a human policy and a robot policy in an interactive learning process.
arXiv Detail & Related papers (2021-08-13T03:14:43Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Human-Robot Team Coordination with Dynamic and Latent Human Task Proficiencies: Scheduling with Learning Curves [0.0]
We introduce a novel resource coordination approach that enables robots to explore the relative strengths and learning abilities of their human teammates.
We generate and evaluate a robust schedule while discovering the latent proficiency of individual workers.
Results indicate that scheduling strategies favoring exploration tend to be beneficial for human-robot collaboration.
arXiv Detail & Related papers (2020-07-03T19:44:22Z)
- Supportive Actions for Manipulation in Human-Robot Coworker Teams [15.978389978586414]
We term actions that support interaction by reducing future interference with others as supportive robot actions.
We compare two robot modes in a shared table pick-and-place task: (1) Task-oriented: the robot only takes actions to further its own task objective and (2) Supportive: the robot sometimes prefers supportive actions to task-oriented ones.
Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task.
arXiv Detail & Related papers (2020-05-02T09:37:10Z)
- Thinking While Moving: Deep Reinforcement Learning with Concurrent Control [122.49572467292293]
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system.
Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed.
arXiv Detail & Related papers (2020-04-13T17:49:29Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.