Learning Fabric Manipulation in the Real World with Human Videos
- URL: http://arxiv.org/abs/2211.02832v1
- Date: Sat, 5 Nov 2022 07:09:15 GMT
- Title: Learning Fabric Manipulation in the Real World with Human Videos
- Authors: Robert Lee, Jad Abou-Chakra, Fangyi Zhang, Peter Corke
- Abstract summary: Fabric manipulation is a long-standing challenge in robotics due to the enormous state space and complex dynamics.
Most prior methods rely heavily on simulation, which is still limited by the large sim-to-real gap of deformable objects.
A promising alternative is to learn fabric manipulation directly from watching humans perform the task.
- Score: 10.608723220309678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fabric manipulation is a long-standing challenge in robotics due to the
enormous state space and complex dynamics. Learning approaches stand out as
promising for this domain as they allow us to learn behaviours directly from
data. However, most prior methods either rely heavily on simulation, which is
still limited by the large sim-to-real gap of deformable objects, or require
large datasets. A promising alternative is to learn fabric manipulation
directly from
watching humans perform the task. In this work, we explore how demonstrations
for fabric manipulation tasks can be collected directly by human hands,
providing an extremely natural and fast data collection pipeline. Then, using
only a handful of such demonstrations, we show how a sample-efficient
pick-and-place policy can be learned and deployed on a real robot, without any
robot data collection at all. We demonstrate our approach on a fabric folding
task, showing that our policy can reliably reach folded states from crumpled
initial configurations.
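The abstract stops at the pipeline level, but the approach it describes (a sample-efficient pick-and-place policy learned from a handful of human-hand demonstrations) is often implemented as a network that maps an overhead fabric image to pick and place pixel locations. The sketch below follows that common recipe and is an illustrative assumption, not the authors' code; the architecture, names, and shapes are all hypothetical.

```python
# Illustrative sketch (not the authors' code): a fully-convolutional
# network mapping an overhead fabric image to per-pixel "pick" and
# "place" heatmaps, supervised by (image, pick_xy, place_xy) tuples
# extracted from human-hand demonstrations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PickPlaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Two 1x1 heads: channel 0 scores pick pixels, channel 1 place pixels.
        self.heads = nn.Conv2d(64, 2, 1)

    def forward(self, img):                   # img: (B, 3, H, W)
        return self.heads(self.encoder(img))  # logits: (B, 2, H, W)

def demo_loss(logits, pick_xy, place_xy):
    """Cross-entropy against the demonstrated pick/place pixels."""
    B, _, H, W = logits.shape
    pick_t = pick_xy[:, 1] * W + pick_xy[:, 0]    # (x, y) -> flat index
    place_t = place_xy[:, 1] * W + place_xy[:, 0]
    return (F.cross_entropy(logits[:, 0].reshape(B, -1), pick_t)
            + F.cross_entropy(logits[:, 1].reshape(B, -1), place_t))
```

At deployment, the robot would grasp at the argmax of the pick heatmap and release at the argmax of the place heatmap; since each demonstration supplies one labelled pixel pair, a handful of frames can already supervise both heads, which is consistent with the sample-efficiency claim above.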
Related papers
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [47.16659229389889]
We propose Manipulate-Anything, a scalable automated generation method for real-world robotic manipulation.
Manipulate-Anything can operate in real-world environments without any privileged state information or hand-designed skills, and can manipulate any static object.
arXiv Detail & Related papers (2024-06-27T06:12:01Z)
- Scaling Robot Learning with Semantically Imagined Experience [21.361979238427722]
Recent advances in robot learning have shown promise in enabling robots to perform manipulation tasks.
One of the key contributing factors to this progress is the scale of robot data used to train the models.
We propose an alternative route and leverage text-to-image foundation models widely used in computer vision and natural language processing.
arXiv Detail & Related papers (2023-02-22T18:47:51Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
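The summary above pins down the mechanism: an embedding trained with a time-contrastive objective, with reward defined as distance to the goal in that embedding space. A minimal sketch of that idea, with all names hypothetical:

```python
# Illustrative sketch (names hypothetical): a time-contrastive triplet
# objective for the embedding, and a reward defined as the negative
# distance to the goal observation in that embedding space.
import torch
import torch.nn.functional as F

def time_contrastive_loss(anchor, positive, negative, margin=1.0):
    """Frames close in time (anchor/positive) should embed nearer to
    each other than frames far apart in time (anchor/negative)."""
    d_pos = (anchor - positive).norm(dim=-1)
    d_neg = (anchor - negative).norm(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

def embedding_reward(encoder, obs, goal_obs):
    """Reward for a manipulation policy: -||phi(obs) - phi(goal)||."""
    with torch.no_grad():
        return -(encoder(obs) - encoder(goal_obs)).norm(dim=-1)
```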
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
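The DexTransfer summary mentions a policy that consumes object point clouds and predicts continuous actions. A minimal PointNet-style sketch of such a policy, with the architecture and dimensions assumed rather than taken from the paper:

```python
# Illustrative sketch (architecture and dimensions assumed, not taken
# from DexTransfer): a PointNet-style policy that maps an object point
# cloud plus the robot's proprioceptive state to a continuous action.
import torch
import torch.nn as nn

class PointCloudPolicy(nn.Module):
    def __init__(self, state_dim=24, action_dim=22):  # placeholder dims
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(128 + state_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, points, robot_state):
        # points: (B, N, 3). Max-pooling over N makes the feature
        # permutation-invariant, the core PointNet idea.
        feat = self.point_mlp(points).max(dim=1).values  # (B, 128)
        return self.head(torch.cat([feat, robot_state], dim=-1))
```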
- Human-to-Robot Imitation in the Wild [50.49660984318492]
We propose an efficient one-shot robot learning algorithm, centered around learning from a third-person perspective.
We show one-shot generalization and success in real-world settings, including 20 different manipulation tasks in the wild.
arXiv Detail & Related papers (2022-07-19T17:59:59Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- A Framework for Efficient Robotic Manipulation [79.10407063260473]
We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels.
arXiv Detail & Related papers (2020-12-14T22:18:39Z)
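Learning sparse-reward policies from only 10 demonstrations usually hinges on keeping those demonstrations in the replay buffer and oversampling them. The sketch below shows that generic trick under stated assumptions; it is not the framework's actual code.

```python
# Illustrative sketch (a generic trick, not this framework's code):
# keep the few demonstrations permanently in the replay buffer and
# oversample them so the sparse-reward agent always sees successes.
import random
from collections import deque

class DemoSeededBuffer:
    def __init__(self, demo_transitions, capacity=100_000):
        self.demos = list(demo_transitions)   # never evicted
        self.online = deque(maxlen=capacity)  # rolling robot experience

    def add(self, transition):
        self.online.append(transition)

    def sample(self, batch_size, demo_fraction=0.25):
        # Assumes some online transitions have already been collected.
        n_demo = int(batch_size * demo_fraction)
        batch = random.choices(self.demos, k=n_demo)
        batch += random.choices(self.online, k=batch_size - n_demo)
        return batch
```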
- Learning Object Manipulation Skills via Approximate State Estimation from Real Videos [47.958512470724926]
Humans are adept at learning new tasks by watching a few instructional videos.
On the other hand, robots that learn new actions either require extensive trial and error or rely on expert demonstrations that are challenging to obtain.
In this paper, we explore a method that facilitates learning object manipulation skills directly from videos.
arXiv Detail & Related papers (2020-11-13T08:53:47Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)