BiRP: Learning Robot Generalized Bimanual Coordination using Relative
Parameterization Method on Human Demonstration
- URL: http://arxiv.org/abs/2307.05933v1
- Date: Wed, 12 Jul 2023 05:58:59 GMT
- Authors: Junjia Liu, Hengyi Sim, Chenzui Li, and Fei Chen
- Abstract summary: We divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination.
We propose a relative parameterization method to learn these types of coordination from human demonstration.
We believe that this easy-to-use bimanual learning from demonstration (LfD) method has the potential to serve as a data augmentation plugin for training large robot manipulation models.
- Score: 2.301921384458527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human bimanual manipulation can perform more complex tasks than a simple
combination of two single arms, which is credited to the spatio-temporal
coordination between the arms. However, the description of bimanual
coordination is still an open topic in robotics, which makes it difficult to
give an explainable coordination paradigm, let alone apply one to robots. In
this work, we divide the main bimanual tasks in human daily activities into two
types: leader-follower and synergistic coordination. Then we propose a relative
parameterization method to learn these types of coordination from human
demonstration. The method represents coordination as Gaussian mixture models
learned from bimanual demonstrations, using probability to describe how the
importance of coordination changes throughout the motion. The learned coordinated representation
can be generalized to new task parameters while ensuring spatio-temporal
coordination. We demonstrate the method using synthetic motions and human
demonstration data and deploy it to a humanoid robot to perform a generalized
bimanual coordination motion. We believe that this easy-to-use bimanual
learning from demonstration (LfD) method has the potential to be used as a data
augmentation plugin for robot large manipulation model training. The
corresponding code is open-sourced at https://github.com/Skylark0924/Rofunc.
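The core idea of relative parameterization can be illustrated with a minimal sketch: express the follower arm's motion relative to the leader, fit a time-augmented Gaussian mixture model over the relative trajectory, and condition on time (Gaussian mixture regression) to recover the expected coordination at any phase of the motion. This is an illustrative toy, not the authors' implementation; the synthetic trajectories and component count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic bimanual demo: the leader arm traces a line while the
# follower keeps a time-varying offset (the "coordination" to learn).
T = 200
t = np.linspace(0.0, 1.0, T)
leader = np.stack([t, np.zeros(T)], axis=1)                # 2-D positions
follower = leader + np.stack([0.1 * np.ones(T),
                              0.2 * np.sin(np.pi * t)], axis=1)
relative = follower - leader                               # relative parameterization

# Fit a GMM over time-augmented relative motion: each sample is (t, dx, dy).
data = np.concatenate([t[:, None], relative], axis=1)
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(data)

def condition_on_time(gmm, t_query):
    """Gaussian mixture regression: E[relative pose | t] under the GMM."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the queried time.
    h = np.array([w * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0])
                  / np.sqrt(2 * np.pi * c[0, 0])
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()
    # Per-component conditional mean of (dx, dy) given t.
    cond = [m[1:] + c[1:, 0] / c[0, 0] * (t_query - m[0])
            for m, c in zip(means, covs)]
    return np.sum(h[:, None] * np.array(cond), axis=0)

rel_mid = condition_on_time(gmm, 0.5)
print(rel_mid.shape)   # expected relative offset (dx, dy) at mid-motion
```

Conditioning on new task parameters (e.g. a different leader trajectory) then amounts to re-applying the learned relative offset, which is what allows the coordination to generalize while the absolute motions change.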
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It also allows the human operator to adjust the control ratio to achieve a trade-off between manual and automated control.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Towards Generalizable Zero-Shot Manipulation via Translating Human
Interaction Plans [58.27029676638521]
We show how passive human videos can serve as a rich source of data for learning such generalist robots.
We learn a human plan predictor that, given a current image of a scene and a goal image, predicts the future hand and object configurations.
We show that our learned system can perform over 16 manipulation skills that generalize to 40 objects.
arXiv Detail & Related papers (2023-12-01T18:54:12Z) - Learning Multimodal Latent Dynamics for Human-Robot Interaction [19.803547418450236]
This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI).
We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent space priors for a Variational Autoencoder to model a joint distribution over the interacting agents.
We find that users perceive our method as more human-like, timely, and accurate, and prefer it over other baselines.
arXiv Detail & Related papers (2023-11-27T23:56:59Z) - ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot similarity in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
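A reward built from distances in a learned embedding space can be sketched as follows. This is a minimal illustration under stated assumptions: the encoder here is a hypothetical stand-in (a fixed random projection), whereas the cited work learns it from human videos with a time-contrastive objective.

```python
import numpy as np

# Stand-in for a pretrained time-contrastive encoder `phi`.
# The frozen weights W are a hypothetical placeholder, not a learned model.
rng = np.random.default_rng(0)
W = rng.normal(size=(32, 64))

def phi(obs):
    """Embed a raw observation vector; a real system uses the learned network."""
    return np.tanh(W @ obs)

def reward(obs, goal_obs):
    """Reward grows as the observation's embedding approaches the goal's."""
    return -np.linalg.norm(phi(obs) - phi(goal_obs))

goal = rng.normal(size=64)
near = goal + 0.01 * rng.normal(size=64)   # observation close to the goal
far = rng.normal(size=64)                  # unrelated observation
print(reward(near, goal), reward(far, goal))
```

Because the reward depends only on embedding distance to a goal image, it is task-agnostic: changing the goal observation redefines the task without retraining the policy's reward model.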
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - Robot Cooking with Stir-fry: Bimanual Non-prehensile Manipulation of
Semi-fluid Objects [13.847796949856457]
This letter describes an approach to performing the well-known Chinese cooking art of stir-fry on a bimanual robot system.
We define a canonical stir-fry movement, then propose a decoupled framework for learning deformable object manipulation from human demonstration.
By adding visual feedback, our framework can adjust the movements automatically to achieve the desired stir-fry effect.
arXiv Detail & Related papers (2022-05-12T08:58:30Z) - Synthesis and Execution of Communicative Robotic Movements with
Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer on two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z) - Learning Bipedal Robot Locomotion from Human Movement [0.791553652441325]
We present a reinforcement learning based method for teaching a real world bipedal robot to perform movements directly from motion capture data.
Our method seamlessly transitions from training in a simulation environment to executing on a physical robot.
We demonstrate our method on an internally developed humanoid robot with movements ranging from a dynamic walk cycle to complex balancing and waving.
arXiv Detail & Related papers (2021-05-26T00:49:37Z) - Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.