Preemptive Motion Planning for Human-to-Robot Indirect Placement
Handovers
- URL: http://arxiv.org/abs/2203.00156v3
- Date: Tue, 20 Feb 2024 02:33:12 GMT
- Title: Preemptive Motion Planning for Human-to-Robot Indirect Placement
Handovers
- Authors: Andrew Choi, Mohammad Khalid Jawed, and Jungseock Joo
- Abstract summary: Human-to-robot handovers can take either of two approaches: (1) direct hand-to-hand or (2) indirect hand-to-placement-to-pick-up.
To minimize the idle time incurred by the indirect approach, the robot must preemptively predict where the human intends to place the object.
We introduce a novel prediction-planning pipeline that allows the robot to preemptively move towards the human agent's intended placement location.
- Score: 12.827398121150386
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As technology advances, the need for safe, efficient, and collaborative
human-robot teams has become increasingly important. One of the most
fundamental collaborative tasks in any setting is the object handover.
Human-to-robot handovers can take either of two approaches: (1) direct
hand-to-hand or (2) indirect hand-to-placement-to-pick-up. The latter approach
ensures minimal contact between the human and robot but can also result in
increased idle time because the robot must wait for the object to first be
placed on a surface. To minimize this idle time, the robot must preemptively
predict where the human intends to place the object. Furthermore, for
the robot to preemptively act in any sort of productive manner, predictions and
motion planning must occur in real-time. We introduce a novel
prediction-planning pipeline that allows the robot to preemptively move towards
the human agent's intended placement location using gaze and gestures as model
inputs. In this paper, we investigate the performance and drawbacks of our
early intent predictor-planner as well as the practical benefits of using such
a pipeline through a human-robot case study.
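To make the predict-then-plan idea concrete, below is a minimal, hypothetical sketch of a preemptive loop: a belief over a few candidate placement spots is updated from gaze and hand-position observations, and the robot's end effector steps toward the currently most likely spot on every cycle. All names, the candidate-set assumption, the planar gaze feature, and the straight-line "planner" are illustrative assumptions, not the pipeline from the paper.

```python
import numpy as np

# Three hypothetical candidate placement spots on a shared table (x, y in meters).
CANDIDATES = np.array([[0.6, 0.3], [0.6, -0.3], [0.4, 0.0]])

def predict_placement(gaze_dir, hand_pos, belief, temp=5.0):
    """Update a belief over candidate spots from a unit gaze direction and the hand position."""
    to_spots = CANDIDATES - hand_pos
    to_spots /= np.linalg.norm(to_spots, axis=1, keepdims=True)
    likelihood = np.exp(temp * (to_spots @ gaze_dir))   # reward gaze/spot alignment
    belief = belief * likelihood
    return belief / belief.sum()

def preemptive_step(ee_pos, belief, step=0.05):
    """Move the end effector one small step toward the currently most likely spot."""
    target = CANDIDATES[np.argmax(belief)]
    direction = target - ee_pos
    dist = np.linalg.norm(direction)
    return target.copy() if dist < step else ee_pos + step * direction / dist

belief = np.ones(len(CANDIDATES)) / len(CANDIDATES)
ee_pos = np.array([0.0, 0.0])
rng = np.random.default_rng(0)
hand = np.array([0.2, 0.1])
for _ in range(20):                                       # stand-in for a real-time sensor loop
    gaze = CANDIDATES[0] - hand + rng.normal(scale=0.05, size=2)  # human looks toward spot 0
    gaze /= np.linalg.norm(gaze)
    belief = predict_placement(gaze, hand, belief)
    ee_pos = preemptive_step(ee_pos, belief)
print("belief:", np.round(belief, 3), "end effector:", np.round(ee_pos, 3))
```

The point of the sketch is only the structure: prediction and motion both update every cycle, so the robot begins moving before the object is set down rather than waiting for placement to finish.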
Related papers
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on
the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing work on human-to-robot similarity, in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, in sim-to-sim transfer, and in sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model can generate several plausible future motions when given an observed motion sequence (one common way to realize such sampling is sketched after this list).
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaboration.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Minimizing Robot Navigation-Graph For Position-Based Predictability By Humans [20.13307800821161]
In situations where humans and robots are moving in the same space whilst performing their own tasks, predictable paths are vital.
The cognitive effort for the human to predict the robot's path becomes untenable as the number of robots increases.
We propose to minimize the navigation-graph of the robot for position-based predictability.
arXiv Detail & Related papers (2020-10-28T22:09:10Z)
- Supportive Actions for Manipulation in Human-Robot Coworker Teams [15.978389978586414]
We refer to actions that support interaction by reducing future interference with others as supportive robot actions.
We compare two robot modes in a shared table pick-and-place task: (1) Task-oriented: the robot only takes actions to further its own task objective and (2) Supportive: the robot sometimes prefers supportive actions to task-oriented ones.
Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task.
arXiv Detail & Related papers (2020-05-02T09:37:10Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
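As referenced in the probabilistic motion-prediction entry above, generating several plausible futures from one observed sequence can be illustrated with a small sketch. The version below uses Monte Carlo dropout as a rough stand-in for a Bayesian neural network; the architecture, dimensions, and names are assumptions for illustration, not the model from that paper.

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Toy sequence model: encode observed poses, decode a fixed-horizon future."""
    def __init__(self, joint_dim=51, hidden=256, horizon=25, p_drop=0.2):
        super().__init__()
        self.encoder = nn.GRU(joint_dim, hidden, batch_first=True)
        self.dropout = nn.Dropout(p_drop)          # kept stochastic at test time
        self.decoder = nn.Linear(hidden, horizon * joint_dim)
        self.horizon, self.joint_dim = horizon, joint_dim

    def forward(self, observed):                   # observed: (B, T_obs, joint_dim)
        _, h = self.encoder(observed)
        h = self.dropout(h[-1])
        return self.decoder(h).view(-1, self.horizon, self.joint_dim)

def sample_futures(model, observed, n_samples=10):
    """Draw several futures by repeating stochastic forward passes (MC dropout)."""
    model.train()                                  # keep dropout active while sampling
    with torch.no_grad():
        return torch.stack([model(observed) for _ in range(n_samples)])

model = MotionPredictor()
observed = torch.randn(1, 50, 51)                  # 50 observed frames of a 51-D pose
futures = sample_futures(model, observed)          # shape: (10, 1, 25, 51)
print(futures.shape)
```

Each forward pass applies a different dropout mask, so the stacked outputs form a set of distinct future trajectories whose spread serves as a crude uncertainty estimate.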