One-shot Imitation Learning via Interaction Warping
- URL: http://arxiv.org/abs/2306.12392v2
- Date: Sat, 4 Nov 2023 21:32:40 GMT
- Title: One-shot Imitation Learning via Interaction Warping
- Authors: Ondrej Biza, Skye Thompson, Kishore Reddy Pagidi, Abhinav Kumar, Elise
van der Pol, Robin Walters, Thomas Kipf, Jan-Willem van de Meent, Lawson L.S.
Wong, Robert Platt
- Abstract summary: We propose a new method, Interaction Warping, for learning SE(3) robotic manipulation policies from a single demonstration.
We infer the 3D mesh of each object in the environment using shape warping, a technique for aligning point clouds across object instances.
We show successful one-shot imitation learning on three simulated and real-world object re-arrangement tasks.
- Score: 32.5466340846254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imitation learning of robot policies from few demonstrations is crucial in
open-ended applications. We propose a new method, Interaction Warping, for
learning SE(3) robotic manipulation policies from a single demonstration. We
infer the 3D mesh of each object in the environment using shape warping, a
technique for aligning point clouds across object instances. Then, we represent
manipulation actions as keypoints on objects, which can be warped with the
shape of the object. We show successful one-shot imitation learning on three
simulated and real-world object re-arrangement tasks. We also demonstrate the
ability of our method to predict object meshes and robot grasps in the wild.
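To make the method concrete, here is a minimal NumPy sketch of the shape-warping-and-keypoint-transfer idea: a canonical point cloud is deformed by a linear basis (e.g., PCA over aligned training shapes) to fit an observed cloud, and demonstration keypoints are carried along with their nearest canonical points. The fitting loop and the nearest-neighbor keypoint attachment are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def warp(canonical, basis, z):
    """Deform canonical points (N, 3) by a linear basis (K, N, 3) with coefficients z (K,)."""
    return canonical + np.tensordot(z, basis, axes=1)

def fit_shape(canonical, basis, observed, iters=20):
    """Fit deformation coefficients to an observed point cloud by alternating
    nearest-neighbor matching with a linear least-squares solve (ICP-style)."""
    K = basis.shape[0]
    A = basis.reshape(K, -1).T                      # (3N, K) design matrix
    z = np.zeros(K)
    for _ in range(iters):
        warped = warp(canonical, basis, z)
        dists = np.linalg.norm(warped[:, None] - observed[None], axis=-1)
        targets = observed[dists.argmin(axis=1)]    # nearest observed point per canonical point
        z, *_ = np.linalg.lstsq(A, (targets - canonical).ravel(), rcond=None)
    return z

def transfer_keypoints(canonical, basis, z, keypoints):
    """Move each demonstration keypoint with its nearest canonical point's warp;
    a fuller version would interpolate over several neighbors."""
    warped = warp(canonical, basis, z)
    idx = np.linalg.norm(keypoints[:, None] - canonical[None], axis=-1).argmin(axis=1)
    return keypoints + (warped[idx] - canonical[idx])
```

In the full method, such warped keypoints parameterize SE(3) grasp and placement actions on the new object instance.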
Related papers
- Vision-based Manipulation from Single Human Video with Open-World Object Graphs [58.23098483464538]
We present an object-centric approach to empower robots to learn vision-based manipulation skills from human videos.
We introduce ORION, an algorithm that tackles the problem by extracting an object-centric manipulation plan from a single RGB-D video.
arXiv Detail & Related papers (2024-05-30T17:56:54Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z)
- DITTO: Demonstration Imitation by Trajectory Transformation [31.930923345163087]
In this work, we address the problem of one-shot imitation from a single human demonstration, given by an RGB-D video recording.
We propose a two-stage process. In the first stage, we extract the demonstration trajectory offline; this entails segmenting the manipulated objects and determining their motion relative to secondary objects such as containers.
In the online trajectory generation stage, we first re-detect all objects, then warp the demonstration trajectory to the current scene and execute it on the robot.
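As a rough sketch of the trajectory-warping step (assuming object poses are available as 4x4 homogeneous transforms; the detection and segmentation machinery is out of scope here):

```python
import numpy as np

def warp_demo_trajectory(demo_ee_poses, obj_pose_demo, obj_pose_now):
    """Map a demonstrated end-effector trajectory into the current scene by
    composing each demo pose with the object's pose change since the demo.
    All poses are 4x4 homogeneous transforms in a shared world frame."""
    correction = obj_pose_now @ np.linalg.inv(obj_pose_demo)  # demo frame -> current frame
    return [correction @ pose for pose in demo_ee_poses]
```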
arXiv Detail & Related papers (2024-03-22T13:46:51Z)
- DefGoalNet: Contextual Goal Learning from Demonstrations For Deformable Object Manipulation [11.484820908345563]
We develop a novel neural network DefGoalNet to learn deformable object goal shapes.
We demonstrate our method's effectiveness on various robotic tasks, both in simulation and on a physical robot.
arXiv Detail & Related papers (2023-09-25T18:54:32Z)
- Cross-Domain Transfer via Semantic Skill Imitation [49.83150463391275]
We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g., human videos, to accelerate reinforcement learning (RL).
Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove."
arXiv Detail & Related papers (2022-12-14T18:46:14Z)
- Learning Generalizable Dexterous Manipulation from Human Grasp Affordance [11.060931225148936]
Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics.
Recent progress in imitation learning has greatly improved sample efficiency compared to reinforcement learning.
We propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category.
arXiv Detail & Related papers (2022-04-05T16:26:22Z)
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z)
- Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration [8.57914821832517]
We introduce a simple new method for visual imitation learning, which allows a novel robot manipulation task to be learned from a single human demonstration.
Our method models imitation learning as a state estimation problem, with the state defined as the end-effector's pose.
At test time, the end-effector moves to the estimated state through a linear path, at which point the original demonstration's end-effector velocities are simply replayed.
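In position-only form, this test-time behavior reduces to a few lines; the sketch below assumes the bottleneck pose estimate is given and ignores orientation, which the actual method also handles:

```python
import numpy as np

def execute(current_pos, bottleneck_pos, demo_velocities, dt=0.05, n_coarse=50):
    """Coarse phase: straight-line waypoints to the estimated bottleneck position.
    Fine phase: integrate the demonstration's end-effector velocities from there.
    Positions are 3-vectors; orientation interpolation is omitted for brevity."""
    coarse = np.linspace(current_pos, bottleneck_pos, n_coarse)
    fine = bottleneck_pos + np.cumsum(np.asarray(demo_velocities) * dt, axis=0)
    return np.vstack([coarse, fine])  # waypoint list for the controller to track
```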
arXiv Detail & Related papers (2021-05-13T16:36:55Z)
- Learning Object Manipulation Skills via Approximate State Estimation from Real Videos [47.958512470724926]
Humans are adept at learning new tasks by watching a few instructional videos.
Robots, on the other hand, learn new actions either through laborious trial and error or from expert demonstrations that are challenging to obtain.
In this paper, we explore a method that facilitates learning object manipulation skills directly from videos.
arXiv Detail & Related papers (2020-11-13T08:53:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.