Robot Cooking with Stir-fry: Bimanual Non-prehensile Manipulation of
Semi-fluid Objects
- URL: http://arxiv.org/abs/2205.05960v1
- Date: Thu, 12 May 2022 08:58:30 GMT
- Title: Robot Cooking with Stir-fry: Bimanual Non-prehensile Manipulation of
Semi-fluid Objects
- Authors: Junjia Liu, Yiting Chen, Zhipeng Dong, Shixiong Wang, Sylvain Calinon,
Miao Li, and Fei Chen
- Abstract summary: This letter describes an approach to achieve the well-known Chinese cooking art of stir-fry on a bimanual robot system.
We define a canonical stir-fry movement, then propose a decoupled framework for learning deformable object manipulation from human demonstration.
By adding visual feedback, our framework can adjust the movements automatically to achieve the desired stir-fry effect.
- Score: 13.847796949856457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This letter describes an approach to achieve the well-known
Chinese cooking art of stir-fry on a bimanual robot system. Stir-fry requires
a sequence of highly dynamic coordinated movements that is usually difficult
even for a chef to learn, let alone to transfer to robots. In this letter, we
define a canonical stir-fry
movement, and then propose a decoupled framework for learning this deformable
object manipulation from human demonstration. First, the robot's dual arms are
decoupled into different roles (a leader and a follower) and learned separately
with classical and neural-network-based methods, transforming the bimanual task
into a coordination problem. Second, to obtain general bimanual coordination,
we propose a graph- and Transformer-based model, Structured-Transformer, to
capture the spatio-temporal relationship between
dual-arm movements. Finally, by adding visual feedback of content deformation,
our framework can adjust the movements automatically to achieve the desired
stir-fry effect. We verify the framework in simulation and deploy it on a real
bimanual Panda robot system. The experimental results validate that our
framework can realize the bimanual robot stir-fry motion and has the potential
to extend to other deformable objects requiring bimanual coordination.
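To make the coordination component concrete, the following is a minimal PyTorch sketch of a leader-follower coordination model: a plain Transformer encoder consumes a window of leader-arm poses and regresses the corresponding follower-arm poses. It stands in for the paper's graph-structured Structured-Transformer; all module names, dimensions, and the use of a vanilla encoder are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoordinationTransformer(nn.Module):
    """Regress follower-arm poses from a window of leader-arm poses."""

    def __init__(self, pose_dim=7, d_model=128, n_heads=4, n_layers=3, horizon=50):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)                  # pose -> token
        self.pos = nn.Parameter(torch.zeros(1, horizon, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, pose_dim)                   # token -> pose

    def forward(self, leader_traj):
        # leader_traj: (batch, horizon, pose_dim), e.g. 7-D end-effector poses
        h = self.embed(leader_traj) + self.pos[:, : leader_traj.size(1)]
        return self.head(self.encoder(h))              # (batch, horizon, pose_dim)

model = CoordinationTransformer()
leader = torch.randn(8, 50, 7)                 # demonstrated leader motions
follower_demo = torch.randn(8, 50, 7)          # demonstrated follower motions
loss = nn.functional.mse_loss(model(leader), follower_demo)
loss.backward()                                # imitation loss on demonstrations
```

Under this framing, the visual-feedback loop described in the abstract would wrap such a model, adjusting the predicted motion until the observed content deformation matches the desired stir-fry effect.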
Related papers
- Learning Bimanual Manipulation via Action Chunking and Inter-Arm Coordination with Transformers [4.119006369973485]
We focus on coordination and efficiency between both arms, particularly synchronized actions.
We propose a novel imitation learning architecture that predicts cooperative actions.
Our model demonstrated a high success rate in comparative evaluation, suggesting a suitable architecture for bimanual manipulation policy learning.
arXiv Detail & Related papers (2025-03-18T05:20:34Z)
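As a concrete reading of the action-chunking idea summarized above, here is a hedged PyTorch sketch: the policy emits a chunk of K future actions for both arms in a single pass, which encourages temporally consistent, synchronized bimanual behavior. The network, shapes, and names below are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ChunkedBimanualPolicy(nn.Module):
    def __init__(self, obs_dim=64, act_dim=14, chunk=20):  # 14 = 7 DoF x 2 arms
        super().__init__()
        self.chunk, self.act_dim = chunk, act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, chunk * act_dim),   # predict the whole chunk at once
        )

    def forward(self, obs):
        # obs: (batch, obs_dim) -> (batch, chunk, act_dim) future joint targets
        return self.net(obs).view(-1, self.chunk, self.act_dim)

policy = ChunkedBimanualPolicy()
actions = policy(torch.randn(1, 64))   # execute the chunk open-loop, then re-plan
```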
- Play to the Score: Stage-Guided Dynamic Multi-Sensory Fusion for Robotic Manipulation [48.37976143987515]
Humans possess a remarkable talent for flexibly alternating between different senses when interacting with the environment.
We propose MS-Bot, a stage-guided dynamic multi-sensory fusion method with coarse-to-fine stage understanding.
We train a robot system equipped with visual, auditory, and tactile sensors to accomplish challenging robotic manipulation tasks.
arXiv Detail & Related papers (2024-08-02T16:20:56Z)
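A rough sketch of what stage-guided dynamic fusion could look like: per-modality features are mixed with attention weights conditioned on a coarse task-stage embedding, so the controller can emphasize vision, audio, or touch depending on the current stage. This is an assumption-level reading of the summary, not the released MS-Bot implementation.

```python
import torch
import torch.nn as nn

class StageGuidedFusion(nn.Module):
    def __init__(self, feat_dim=128, n_stages=4):
        super().__init__()
        self.stage_embed = nn.Embedding(n_stages, feat_dim)
        self.score = nn.Linear(2 * feat_dim, 1)   # (stage, modality) -> relevance

    def forward(self, modality_feats, stage_id):
        # modality_feats: (batch, n_modalities, feat_dim); stage_id: (batch,)
        stage = self.stage_embed(stage_id).unsqueeze(1)          # (B, 1, D)
        stage = stage.expand(-1, modality_feats.size(1), -1)     # (B, M, D)
        weights = self.score(torch.cat([stage, modality_feats], -1)).softmax(dim=1)
        return (weights * modality_feats).sum(dim=1)             # fused (B, D)

fusion = StageGuidedFusion()
feats = torch.randn(2, 3, 128)               # vision, audio, touch features
fused = fusion(feats, torch.tensor([0, 2]))  # coarse stage index per sample
```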
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
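To illustrate goal-conditioned point-track prediction, a toy sketch: given initial 2-D point locations and a goal embedding, predict where each point moves over the next few steps. The actual Track2Act operates on image observations learned from web-scale video; the tiny MLP and every dimension below are placeholders.

```python
import torch
import torch.nn as nn

class TrackPredictor(nn.Module):
    def __init__(self, goal_dim=32, horizon=8):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(2 + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * horizon),       # per-point (x, y) displacements
        )

    def forward(self, points, goal):
        # points: (batch, n_points, 2) in [0, 1]; goal: (batch, goal_dim)
        g = goal.unsqueeze(1).expand(-1, points.size(1), -1)
        offsets = self.net(torch.cat([points, g], -1))
        offsets = offsets.view(*points.shape[:2], self.horizon, 2)
        return points.unsqueeze(2) + offsets.cumsum(dim=2)  # absolute tracks

pred = TrackPredictor()
tracks = pred(torch.rand(1, 16, 2), torch.randn(1, 32))    # (1, 16, 8, 2)
```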
- Neural Style Transfer with Twin-Delayed DDPG for Shared Control of Robotic Manipulators [15.947412070402878]
We propose a framework for transferring a set of styles to the motion of a robotic manipulator.
An autoencoder architecture extracts and defines the Content and the Style of the target robot motions.
The proposed Neural Policy Style Transfer TD3 (NPST3) alters the robot motion by introducing the trained style.
arXiv Detail & Related papers (2024-02-01T16:14:32Z)
- BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration [2.301921384458527]
We divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination.
We propose a relative parameterization method to learn these types of coordination from human demonstration.
We believe that this easy-to-use bimanual learning-from-demonstration (LfD) method has the potential to be used as a data plugin for training large robot manipulation models.
arXiv Detail & Related papers (2023-07-12T05:58:59Z)
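A toy NumPy illustration of the relative-parameterization idea: the follower's motion is stored relative to the leader's, so replaying it against a new leader trajectory preserves the learned coordination. Positions only, for brevity; BiRP itself operates on full demonstrations, and the helper names here are invented.

```python
import numpy as np

def encode_relative(leader_xyz, follower_xyz):
    # Store follower positions in the leader-centred frame.
    return follower_xyz - leader_xyz

def decode_relative(new_leader_xyz, relative_xyz):
    # Reproduce the same coordination pattern for a new leader motion.
    return new_leader_xyz + relative_xyz

t = np.linspace(0, 1, 100)[:, None]
leader = np.hstack([t, np.sin(t), np.zeros_like(t)])   # demo leader path
follower = leader + np.array([0.0, 0.1, 0.2])          # demo follower path
rel = encode_relative(leader, follower)

new_leader = leader + np.array([0.5, 0.0, 0.0])        # shifted task
new_follower = decode_relative(new_leader, rel)        # coordination preserved
assert np.allclose(new_follower - new_leader, rel)
```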
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
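A hedged sketch of masked sensorimotor pre-training in PyTorch: sensorimotor features are flattened into one token sequence, a random subset is masked out, and a Transformer is trained to reconstruct the missing tokens. Token layout, masking ratio, and sizes are assumptions, not the RPT recipe.

```python
import torch
import torch.nn as nn

d, seq_len = 128, 30                      # token dim, tokens per trajectory
layer = nn.TransformerEncoderLayer(d, 4, batch_first=True)
encoder = nn.TransformerEncoder(layer, 2)
mask_token = nn.Parameter(torch.zeros(d))

tokens = torch.randn(8, seq_len, d)       # pre-extracted sensorimotor tokens
mask = torch.rand(8, seq_len) < 0.5       # mask half of the sequence
inp = torch.where(mask.unsqueeze(-1), mask_token.expand_as(tokens), tokens)

recon = encoder(inp)
loss = ((recon - tokens)[mask] ** 2).mean()  # reconstruct only masked tokens
loss.backward()
```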
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
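In the spirit of this summary, a compact GAN sketch: a generator maps noise to a 1-D end-effector velocity profile and a discriminator tells generated profiles from (stand-in) human demonstrations. Network sizes and the profile length are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

T = 64                                          # samples per velocity profile
G = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, T))
D = nn.Sequential(nn.Linear(T, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, T)                       # stand-in for human profiles
fake = G(torch.randn(32, 16))

d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(fake), torch.ones(32, 1))        # fool the discriminator
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```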
- A Transferable Legged Mobile Manipulation Framework Based on Disturbance Predictive Control [15.044159090957292]
Legged mobile manipulation, where a quadruped robot is equipped with a robotic arm, can greatly enhance the performance of the robot.
We propose a unified framework, disturbance predictive control, in which a reinforcement learning scheme with a latent dynamic adapter is embedded into our proposed low-level controller.
arXiv Detail & Related papers (2022-03-02T14:54:10Z)
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z)
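A rough sketch of learning a contact-point distribution with a conditional VAE: given a feature vector for an object's rigid part, sample candidate 3-D contact points for one arm. Architecture and dimensions are assumed for illustration and are not the V-MAO model.

```python
import torch
import torch.nn as nn

class ContactCVAE(nn.Module):
    def __init__(self, part_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(part_dim + 3, 2 * z_dim)  # (part, contact) -> mu, logvar
        self.dec = nn.Linear(part_dim + z_dim, 3)      # (part, z) -> contact point

    def forward(self, part_feat, contact):
        mu, logvar = self.enc(torch.cat([part_feat, contact], -1)).chunk(2, -1)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)   # reparameterize
        recon = self.dec(torch.cat([part_feat, z], -1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon, kl

model = ContactCVAE()
part, contact = torch.randn(16, 64), torch.randn(16, 3)
recon, kl = model(part, contact)
loss = nn.functional.mse_loss(recon, contact) + 1e-3 * kl
loss.backward()
```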
- In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning [8.365690203298966]
We report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning.
A manual description of appropriate robot motions corresponding to all object states is difficult to prepare in advance.
We constructed a model that instructed the robot to perform bowknots and overhand knots based on two deep neural networks trained using data gathered from its sensorimotor experiences.
arXiv Detail & Related papers (2021-03-17T02:11:58Z)