AR2-D2: Training a Robot Without a Robot
- URL: http://arxiv.org/abs/2306.13818v1
- Date: Fri, 23 Jun 2023 23:54:26 GMT
- Title: AR2-D2: Training a Robot Without a Robot
- Authors: Jiafei Duan, Yi Ru Wang, Mohit Shridhar, Dieter Fox, Ranjay Krishna
- Abstract summary: We introduce AR2-D2, a system for collecting demonstrations which does not require people with specialized training.
AR2-D2 is a framework in the form of an iOS app that people can use to record a video of themselves manipulating any object.
We show that data collected via our system enables the training of behavior cloning agents in manipulating real objects.
- Score: 53.10633639596096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diligently gathered human demonstrations serve as the unsung heroes
empowering the progression of robot learning. Today, demonstrations are
collected by training people to use specialized controllers, which
(tele-)operate robots to manipulate a small number of objects. By contrast, we
introduce AR2-D2: a system for collecting demonstrations which (1) does not
require people with specialized training, (2) does not require any real robots
during data collection, and therefore, (3) enables manipulation of diverse
objects with a real robot. AR2-D2 is a framework in the form of an iOS app that
people can use to record a video of themselves manipulating any object while
simultaneously capturing essential data modalities for training a real robot.
We show that data collected via our system enables the training of behavior
cloning agents in manipulating real objects. Our experiments further show that
training with our AR data is as effective as training with real-world robot
demonstrations. Moreover, our user study indicates that users find AR2-D2
intuitive to use and that it requires no training, in contrast to four other
frequently employed methods for collecting robot demonstrations.
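To make the downstream learning step concrete, here is a minimal behavior-cloning sketch, assuming the AR-captured demonstrations have already been converted into (observation, action) pairs; the feature dimensions, network, and hyperparameters are illustrative placeholders, not the paper's actual implementation.

```python
# Hypothetical behavior-cloning sketch: dataset, architecture, and
# hyperparameters below are illustrative assumptions, not AR2-D2's actual code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for (observation, action) pairs extracted from demonstrations,
# e.g. per-frame visual/pose features paired with end-effector actions.
observations = torch.randn(1024, 64)   # 64-dim observation features (assumed)
actions = torch.randn(1024, 7)         # 6-DoF pose delta + gripper (assumed)

policy = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 7),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(observations, actions), batch_size=64, shuffle=True)

for epoch in range(10):
    for obs, act in loader:
        pred = policy(obs)                          # predict action from observation
        loss = nn.functional.mse_loss(pred, act)    # imitate the demonstrated action
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At deployment, such a policy would map live observations to robot commands; the paper's agents are trained analogously on the modalities captured by the iOS app.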
Related papers
- ARCap: Collecting High-quality Human Demonstrations for Robot Learning with Augmented Reality Feedback [21.9704438641606]
We propose ARCap, a portable data collection system that provides visual feedback through augmented reality (AR) and haptic warnings to guide users in collecting high-quality demonstrations.
With data collected from ARCap, robots can perform challenging tasks, such as manipulation in cluttered environments and long-horizon cross-embodiment manipulation.
arXiv Detail & Related papers (2024-10-11T02:30:46Z)
- Augmented Reality Demonstrations for Scalable Robot Imitation Learning [25.026589453708347]
This paper presents an innovative solution: an Augmented Reality (AR)-assisted framework for demonstration collection.
We empower non-roboticist users to produce demonstrations for robot IL using devices like the HoloLens 2.
We validate our approach with experiments on three classical robotics tasks: reach, push, and pick-and-place.
arXiv Detail & Related papers (2024-03-20T18:30:12Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned Policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train the policy to generate appropriate actions given the current scene observation and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Scaling Robot Learning with Semantically Imagined Experience [21.361979238427722]
Recent advances in robot learning have shown promise in enabling robots to perform manipulation tasks.
One of the key contributing factors to this progress is the scale of robot data used to train the models.
We propose an alternative route and leverage text-to-image foundation models widely used in computer vision and natural language processing.
arXiv Detail & Related papers (2023-02-22T18:47:51Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a rough sketch of such an objective appears after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation [26.738893736520364]
We introduce a novel single-camera teleoperation system to collect 3D demonstrations efficiently with only an iPad and a computer.
In the physics simulator, we construct a customized robot hand for each user: a manipulator that matches the kinematic structure and shape of the operator's hand.
With imitation learning on our data, we show large improvements over baselines on multiple complex manipulation tasks.
arXiv Detail & Related papers (2022-04-26T17:59:51Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
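For the reward-learning entry above ("Learning Reward Functions for Robotic Manipulation by Observing Humans"), a time-contrastive embedding objective can be sketched roughly as follows; the encoder, frame-sampling scheme, and margin are illustrative assumptions rather than the authors' exact formulation.

```python
# Rough sketch of a time-contrastive embedding and a distance-based reward.
# Encoder, frame sampling, and margin are assumptions for illustration only.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 32))

def time_contrastive_loss(frames, margin=0.2):
    """frames: (batch, time, 512) per-frame features from video clips."""
    b, t, _ = frames.shape
    idx = torch.arange(b)
    anchor_t = torch.randint(0, t - 1, (b,))
    pos_t = anchor_t + 1                  # temporally adjacent frame: pull together
    neg_t = (anchor_t + t // 2) % t       # temporally distant frame: push apart
    z_a = encoder(frames[idx, anchor_t])
    z_p = encoder(frames[idx, pos_t])
    z_n = encoder(frames[idx, neg_t])
    return nn.functional.triplet_margin_loss(z_a, z_p, z_n, margin=margin)

def goal_distance_reward(obs_feat, goal_feat):
    # Reward = negative distance to the goal frame in the learned embedding.
    return -torch.norm(encoder(obs_feat) - encoder(goal_feat), dim=-1)

# Toy usage with random features standing in for video frames.
loss = time_contrastive_loss(torch.randn(8, 16, 512))
loss.backward()
```

The key design choice this illustrates is that frames close in time are pulled together in the embedding while distant frames are pushed apart, so distance to a goal frame becomes a usable reward signal without action labels.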
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.