BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration
- URL: http://arxiv.org/abs/2307.05933v1
- Date: Wed, 12 Jul 2023 05:58:59 GMT
- Title: BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration
- Authors: Junjia Liu, Hengyi Sim, Chenzui Li, and Fei Chen
- Abstract summary: We divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination.
We propose a relative parameterization method to learn these types of coordination from human demonstration.
We believe that this easy-to-use bimanual learning from demonstration (LfD) method has the potential to be used as a data augmentation plugin for training large robot manipulation models.
- Score: 2.301921384458527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human bimanual manipulation can perform more complex tasks than a simple combination of two single arms, which is credited to the spatio-temporal coordination between the arms. However, describing bimanual coordination is still an open topic in robotics, which makes it difficult to give an explainable coordination paradigm, let alone apply one to robots. In this work, we divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination. We then propose a relative parameterization method to learn these types of coordination from human demonstration. The method represents coordination as Gaussian mixture models learned from bimanual demonstrations, using probability to describe how the importance of coordination changes over the course of the motion. The learned coordination representation can be generalized to new task parameters while ensuring spatio-temporal coordination. We demonstrate the method on synthetic motions and on human demonstration data, and deploy it on a humanoid robot to perform generalized bimanual coordination motions. We believe this easy-to-use bimanual learning from demonstration (LfD) method has the potential to serve as a data augmentation plugin for training large robot manipulation models. The corresponding code is open-sourced at https://github.com/Skylark0924/Rofunc.
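To make the abstract's recipe concrete: coordination is encoded as a Gaussian mixture model over paired bimanual trajectories, and a new motion is generated by conditioning that model. Below is a minimal, self-contained sketch of that general idea using standard Gaussian mixture regression (GMR); it is an illustration under our own assumptions, not the authors' implementation, which lives in the Rofunc repository linked above.

```python
# Minimal sketch of coordination learning with a joint GMM over both arms'
# trajectories, plus Gaussian mixture regression (GMR) to regenerate the
# follower arm from the leader arm. Illustrative only -- NOT the authors'
# implementation (see the Rofunc repository for that).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_coordination_gmm(t, leader, follower, n_components=5):
    """Fit a joint GMM over [time, leader position, follower position]."""
    data = np.hstack([t[:, None], leader, follower])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(data)

def gmr(gmm, x_in, in_idx, out_idx):
    """Condition the joint GMM on the input dims to predict the output dims."""
    mus, sigmas, pis = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for this input
    h = np.array([pis[k] * multivariate_normal.pdf(
                      x_in, mus[k][in_idx], sigmas[k][np.ix_(in_idx, in_idx)])
                  for k in range(len(pis))])
    h /= h.sum()
    # Weighted sum of per-component conditional means
    out = np.zeros(len(out_idx))
    for k in range(len(pis)):
        s_oi = sigmas[k][np.ix_(out_idx, in_idx)]
        s_ii = sigmas[k][np.ix_(in_idx, in_idx)]
        out += h[k] * (mus[k][out_idx]
                       + s_oi @ np.linalg.solve(s_ii, x_in - mus[k][in_idx]))
    return out

# Leader-follower demo: learn from one synthetic demonstration, then
# regenerate the follower for a shifted leader (a new "task parameter").
T = 200
t = np.linspace(0.0, 1.0, T)
leader = np.stack([t, np.sin(2 * np.pi * t), np.zeros(T)], axis=1)
follower = leader + np.array([0.0, -0.3, 0.1])           # coordinated offset
follower += 0.005 * np.random.default_rng(0).normal(size=follower.shape)
gmm = fit_coordination_gmm(t, leader, follower)

new_leader = leader + np.array([0.05, 0.0, 0.0])
pred_follower = np.array([
    gmr(gmm, np.r_[t[i], new_leader[i]], in_idx=np.arange(4),
        out_idx=np.arange(4, 7))
    for i in range(T)])
print(pred_follower.shape)  # (200, 3): follower track for the new leader
```

Conditioning on time plus the leader arm, as above, mirrors the leader-follower case; conditioning on time alone would roll out both arms jointly, closer to the synergistic case.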
Related papers
- Learning Bimanual Manipulation via Action Chunking and Inter-Arm Coordination with Transformers [4.119006369973485]
We focus on coordination and efficiency between both arms, particularly synchronized actions.
We propose a novel imitation learning architecture that predicts cooperative actions.
Our model demonstrated a high success rate compared with baselines and suggests a suitable architecture for policy learning in bimanual manipulation.
arXiv Detail & Related papers (2025-03-18T05:20:34Z)
- DIRIGENt: End-To-End Robotic Imitation of Human Demonstrations Based on a Diffusion Model [16.26334759935617]
We introduce DIRIGENt, a novel end-to-end diffusion approach to generate joint values from observing human demonstrations.
We create a dataset in which humans imitate a robot and then use this collected data to train a diffusion model that enables a robot to imitate humans.
arXiv Detail & Related papers (2025-01-28T09:05:03Z)
- Visual IRL for Human-Like Robotic Manipulation [5.167226775583172]
We present a novel method for collaborative robots (cobots) to learn manipulation tasks and perform them in a human-like manner.
Our method falls under the learn-from-observation (LfO) paradigm, where robots learn to perform tasks by observing human actions.
We evaluate the performance of this approach on two different realistic manipulation tasks.
arXiv Detail & Related papers (2024-12-16T01:23:13Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
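At its simplest, the shared end-effector control described in the Human-Agent Joint Learning entry above amounts to blending the operator's command with the assistive agent's suggestion. The sketch below is a hypothetical illustration of such blending; the function, names, and fixed weight are assumptions, not the paper's interface.

```python
# Hypothetical sketch of shared control: blend the human's teleoperation
# command with a learned assistive agent's suggestion. The fixed blending
# weight `alpha` is an illustrative assumption.
import numpy as np

def shared_control(human_cmd: np.ndarray, agent_cmd: np.ndarray,
                   alpha: float = 0.6) -> np.ndarray:
    """Blend commands; alpha=1.0 gives the human full control."""
    return alpha * human_cmd + (1.0 - alpha) * agent_cmd

human_cmd = np.array([0.02, 0.00, -0.01])    # operator's delta-pose command
agent_cmd = np.array([0.015, 0.005, -0.012]) # assistive policy's suggestion
print(shared_control(human_cmd, agent_cmd))  # executed blended command
```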
- Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans [58.27029676638521]
We show how passive human videos can serve as a rich source of data for learning generalist robots.
We learn a human plan predictor that, given a current image of a scene and a goal image, predicts the future hand and object configurations.
We show that our learned system can perform over 16 manipulation skills that generalize to 40 objects.
arXiv Detail & Related papers (2023-12-01T18:54:12Z)
- Learning Multimodal Latent Dynamics for Human-Robot Interaction [19.803547418450236]
This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI).
We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent space priors for a Variational Autoencoder to model a joint distribution over the interacting agents.
We find that users perceive our method as more human-like, timely, and accurate, and rank it higher in preference than other baselines.
arXiv Detail & Related papers (2023-11-27T23:56:59Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that joint-pair distances for human interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
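A joint-pair distance objective like the one InterControl uses to guide generation can be written as a simple penalty over the synthesized motion. The sketch below is a hypothetical stand-in for such a penalty, not the paper's actual loss.

```python
# Hypothetical joint-pair distance penalty: deviation of the per-frame
# distance between two joints from a desired value, averaged over a motion.
import numpy as np

def joint_pair_distance_loss(joints_a: np.ndarray, joints_b: np.ndarray,
                             desired: float) -> float:
    """L1 penalty on the distance between two joint tracks of shape (T, 3)."""
    dist = np.linalg.norm(joints_a - joints_b, axis=-1)  # per-frame distance
    return float(np.abs(dist - desired).mean())

# e.g., keep two characters' right hands about 0.1 m apart over 60 frames
T = 60
hands_a = np.random.default_rng(0).normal(scale=0.05, size=(T, 3))
hands_b = hands_a + np.array([0.1, 0.0, 0.0])
print(joint_pair_distance_loss(hands_a, hands_b, desired=0.1))  # 0.0
```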
- ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot similarity in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
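The reward construction in the Learning Reward Functions entry above, distance to a goal in a learned embedding space, fits in a few lines. In the sketch below the encoder is a stand-in random projection; in the paper the embedding is trained with a time-contrastive objective on human videos, so everything named here is an assumption.

```python
# Sketch of an embedding-space reward: negative distance between the current
# observation and the goal, both mapped through an encoder phi. The random
# projection stands in for the paper's time-contrastively trained encoder.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 128))  # placeholder for learned encoder weights

def phi(obs: np.ndarray) -> np.ndarray:
    """Placeholder embedding of a 128-dim observation."""
    return np.tanh(W @ obs)

def reward(obs: np.ndarray, goal: np.ndarray) -> float:
    """Reward increases as the observation nears the goal in embedding space."""
    return -float(np.linalg.norm(phi(obs) - phi(goal)))

obs, goal = rng.normal(size=128), rng.normal(size=128)
print(reward(obs, goal))
```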
- Robot Cooking with Stir-fry: Bimanual Non-prehensile Manipulation of Semi-fluid Objects [13.847796949856457]
This letter describes an approach to achieving the well-known Chinese cooking art of stir-fry on a bimanual robot system.
We define a canonical stir-fry movement, then propose a decoupled framework for learning deformable object manipulation from human demonstration.
By adding visual feedback, our framework can adjust the movements automatically to achieve the desired stir-fry effect.
arXiv Detail & Related papers (2022-05-12T08:58:30Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on transferring to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
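To make the velocity-profile modulation in the entry above concrete, the sketch below produces bell-shaped end-effector speed profiles whose peak speed drops for more delicate objects. It deliberately substitutes a closed-form minimum-jerk profile for the paper's GAN generator, so the function and its parameters are illustrative assumptions only.

```python
# Illustrative velocity-profile modulation using a closed-form minimum-jerk
# profile (NOT the paper's GAN). Larger `carefulness` stretches the motion
# in time, which lowers the peak speed for the same travel distance.
import numpy as np

def min_jerk_speed(n_steps: int, distance: float,
                   carefulness: float = 1.0) -> np.ndarray:
    """Bell-shaped speed profile integrating to `distance` over the motion."""
    duration = carefulness  # seconds; careful transport takes longer
    t = np.linspace(0.0, 1.0, n_steps)  # normalized time
    # time derivative of the minimum-jerk position profile
    return (distance / duration) * (30 * t**2 - 60 * t**3 + 30 * t**4)

careful = min_jerk_speed(100, distance=0.4, carefulness=2.0)  # delicate object
brisk = min_jerk_speed(100, distance=0.4, carefulness=1.0)    # robust object
print(careful.max(), brisk.max())  # careful profile peaks at half the speed
```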
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.