Supportive Actions for Manipulation in Human-Robot Coworker Teams
- URL: http://arxiv.org/abs/2005.00769v1
- Date: Sat, 2 May 2020 09:37:10 GMT
- Title: Supportive Actions for Manipulation in Human-Robot Coworker Teams
- Authors: Shray Bansal, Rhys Newbury, Wesley Chan, Akansel Cosgun, Aimee Allen,
Dana Kulić, Tom Drummond and Charles Isbell
- Abstract summary: We term actions that support interaction by reducing future interference with others as supportive robot actions.
We compare two robot modes in a shared table pick-and-place task: (1) Task-oriented: the robot only takes actions to further its own task objective and (2) Supportive: the robot sometimes prefers supportive actions to task-oriented ones.
Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task.
- Score: 15.978389978586414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing presence of robots alongside humans, such as in human-robot
teams in manufacturing, gives rise to research questions about the kind of
behaviors people prefer in their robot counterparts. We term actions that
support interaction by reducing future interference with others as supportive
robot actions and investigate their utility in a co-located manipulation
scenario. We compare two robot modes in a shared table pick-and-place task: (1)
Task-oriented: the robot only takes actions to further its own task objective
and (2) Supportive: the robot sometimes prefers supportive actions to
task-oriented ones when they reduce future goal-conflicts. Our experiments in
simulation, using a simplified human model, reveal that supportive actions
reduce the interference between agents, especially in more difficult tasks, but
also cause the robot to take longer to complete the task. We implemented these
modes on a physical robot in a user study where a human and a robot perform
object placement on a shared table. Our results show that a supportive robot
was perceived as a more favorable coworker by the human and also reduced
interference with the human in the more difficult of two scenarios. However, it
also took longer to complete the task, highlighting an interesting trade-off
between task-efficiency and human-preference that needs to be considered before
designing robot behavior for close-proximity manipulation scenarios.
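The two modes compared in the abstract amount to two scoring rules for action selection. The sketch below is a minimal, hypothetical illustration of that distinction; the action names, scoring terms, and weight are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of the two robot modes described above.
# "task_gain" and "conflict_reduction" are assumed quantities, not
# the paper's actual cost terms.

def choose_action(actions, mode, support_weight=0.5):
    """Pick the best action under a given mode.

    actions: list of dicts with keys:
      - "task_gain": progress toward the robot's own objective
      - "conflict_reduction": how much the action reduces future
        goal-conflicts with the human (0 for purely task-oriented moves)
    """
    def score(a):
        if mode == "task-oriented":
            # Only the robot's own task objective matters.
            return a["task_gain"]
        # Supportive mode: trade some task progress for fewer
        # future goal-conflicts with the human coworker.
        return a["task_gain"] + support_weight * a["conflict_reduction"]
    return max(actions, key=score)

actions = [
    {"name": "place_own_object", "task_gain": 1.0, "conflict_reduction": 0.0},
    {"name": "move_obstructing_object", "task_gain": 0.2, "conflict_reduction": 2.0},
]

print(choose_action(actions, "task-oriented")["name"])  # place_own_object
print(choose_action(actions, "supportive")["name"])     # move_obstructing_object
```

Under this toy scoring, the supportive robot sometimes forgoes immediate task progress to clear a future conflict, which mirrors the efficiency-versus-preference trade-off the abstract reports.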
Related papers
- Interactive Multi-Robot Flocking with Gesture Responsiveness and Musical Accompaniment [0.7659052547635159]
This work presents a compelling multi-robot task in which the main aim is to enthrall and engage a human.
In this task, the goal is for a human to be drawn to move alongside and participate in a dynamic, expressive robot flock.
Towards this aim, the research team created algorithms for robot movements and engaging interaction modes such as gestures and sound.
arXiv Detail & Related papers (2024-03-30T18:16:28Z) - HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z) - InteRACT: Transformer Models for Human Intent Prediction Conditioned on Robot Actions [7.574421886354134]
The InteRACT architecture pre-trains a conditional intent prediction model on large human-human datasets and fine-tunes it on a small human-robot dataset.
We evaluate on a set of real-world collaborative human-robot manipulation tasks and show that our conditional model improves over various marginal baselines.
arXiv Detail & Related papers (2023-11-21T19:15:17Z) - Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z) - ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot motion similarity as well as efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z) - HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z) - CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration [0.0]
We propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps.
In real robot experiments, our method achieves about 88% success rate in producing stable grasps.
Our approach enables a safe, natural, and socially aware human-robot object co-grasping experience.
arXiv Detail & Related papers (2022-10-06T19:23:25Z) - Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.