Reinforcement Learning with Videos: Combining Offline Observations with
Interaction
- URL: http://arxiv.org/abs/2011.06507v2
- Date: Thu, 4 Nov 2021 20:07:57 GMT
- Title: Reinforcement Learning with Videos: Combining Offline Observations with
Interaction
- Authors: Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine,
Chelsea Finn
- Abstract summary: Reinforcement learning is a powerful framework for robots to acquire skills from experience.
Videos of humans are a readily available source of broad and interesting experiences.
We propose a framework for reinforcement learning with videos.
- Score: 151.73346150068866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning is a powerful framework for robots to acquire skills
from experience, but often requires a substantial amount of online data
collection. As a result, it is difficult to collect sufficiently diverse
experiences that are needed for robots to generalize broadly. Videos of humans,
on the other hand, are a readily available source of broad and interesting
experiences. In this paper, we consider the question: can we perform
reinforcement learning directly on experience collected by humans? This problem
is particularly difficult, as such videos are not annotated with actions and
exhibit substantial visual domain shift relative to the robot's embodiment. To
address these challenges, we propose a framework for reinforcement learning
with videos (RLV). RLV learns a policy and value function using experience
collected by humans in combination with data collected by robots. In our
experiments, we find that RLV is able to leverage such videos to learn
challenging vision-based skills with less than half as many samples as RL
methods that learn from scratch.
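The abstract describes combining action-free human video experience with robot interaction data to train a policy and value function. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes the missing actions in video transitions are filled in by a learned inverse model and that rewards come from a simple task-completion heuristic, it omits the handling of visual domain shift, and all names and dimensions are illustrative.
```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, GAMMA = 32, 4, 0.99

# Inverse model proposes actions for transitions that come from action-free videos.
inverse_model = nn.Sequential(nn.Linear(2 * OBS_DIM, 64), nn.ReLU(),
                              nn.Linear(64, ACT_DIM))
# Q-function trained on robot and video transitions alike.
q_net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(),
                      nn.Linear(64, 1))
optim_q = torch.optim.Adam(q_net.parameters(), lr=3e-4)

def label_video_batch(obs, next_obs, done):
    """Fill in the fields that observation-only video data is missing (assumed scheme)."""
    with torch.no_grad():
        actions = inverse_model(torch.cat([obs, next_obs], dim=-1))  # inferred, never observed
    rewards = done.float()  # placeholder sparse "task finished" reward
    return obs, actions, rewards, next_obs, done

def q_update(robot_batch, video_batch, policy):
    """One TD update on the union of robot data and action-labeled video data."""
    obs, act, rew, nobs, done = (torch.cat(pair) for pair in zip(robot_batch, video_batch))
    with torch.no_grad():
        target = rew + GAMMA * (1.0 - done.float()) * \
                 q_net(torch.cat([nobs, policy(nobs)], dim=-1)).squeeze(-1)
    q = q_net(torch.cat([obs, act], dim=-1)).squeeze(-1)
    loss = nn.functional.mse_loss(q, target)
    optim_q.zero_grad(); loss.backward(); optim_q.step()
    return loss.item()

# Usage idea: video transitions pass through label_video_batch before being mixed
# with robot transitions in q_update; robot transitions keep their real actions.
```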
Related papers
- VITAL: Visual Teleoperation to Enhance Robot Learning through Human-in-the-Loop Corrections [10.49712834719005]
We propose a low-cost visual teleoperation system for bimanual manipulation tasks, called VITAL.
Our approach leverages affordable hardware and visual processing techniques to collect demonstrations.
We enhance the generalizability and robustness of the learned policies by utilizing both real and simulated environments.
arXiv Detail & Related papers (2024-07-30T23:29:47Z)
- Towards Generalist Robot Learning from Internet Video: A Survey [56.621902345314645]
We present an overview of the emerging field of Learning from Videos (LfV).
LfV aims to address the robotics data bottleneck by augmenting traditional robot data with large-scale internet video data.
We provide a review of current methods for extracting knowledge from large-scale internet video, addressing key challenges in LfV, and boosting downstream robot and reinforcement learning via the use of video data.
arXiv Detail & Related papers (2024-04-30T15:57:41Z)
- ViSaRL: Visual Reinforcement Learning Guided by Human Saliency [6.969098096933547]
We introduce Visual Saliency-Guided Reinforcement Learning (ViSaRL).
Using ViSaRL to learn visual representations significantly improves the success rate, sample efficiency, and generalization of an RL agent.
We show that visual representations learned using ViSaRL are robust to various sources of visual perturbations including perceptual noise and scene variations.
arXiv Detail & Related papers (2024-03-16T14:52:26Z)
- Learning by Watching: A Review of Video-based Learning Approaches for Robot Manipulation [0.0]
Recent works have explored learning manipulation skills by passively watching abundant videos sourced online.
This survey reviews foundations such as video feature representation learning techniques, object affordance understanding, 3D hand/body modeling, and large-scale robot resources.
We discuss how learning only from observing large-scale human videos can enhance generalization and sample efficiency for robotic manipulation.
arXiv Detail & Related papers (2024-02-11T08:41:42Z)
- Hindsight States: Blending Sim and Real Task Elements for Efficient Reinforcement Learning [61.3506230781327]
In robotics, one approach to generating training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z)
- Towards a Sample Efficient Reinforcement Learning Pipeline for Vision Based Robotics [0.0]
We study how to limit the time needed to train a robotic arm to reach a ball from scratch by assembling as efficient a pipeline as possible.
The pipeline is divided into two parts: the first one is to capture the relevant information from the RGB video with a Computer Vision algorithm.
The second one studies how to train a Deep Reinforcement Learning algorithm faster so that the robotic arm reaches the target in front of it.
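As a rough illustration of the two-stage pipeline described above, the sketch below assumes the vision stage is a simple color-segmentation step (here with OpenCV) that reduces each RGB frame to a low-dimensional target estimate for the RL stage; the thresholds and observation layout are placeholders, not details from the paper.
```python
import numpy as np
import cv2

def ball_position(frame_bgr, lower=(0, 120, 120), upper=(10, 255, 255)):
    """Stage 1: estimate the ball's pixel centroid with plain color segmentation."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # ball not visible in this frame
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])  # (x, y) centroid

def build_observation(frame_bgr, joint_angles):
    """Stage 2 input: compact state = detected target + proprioception."""
    pos = ball_position(frame_bgr)
    pos = pos if pos is not None else np.array([-1.0, -1.0])  # sentinel when undetected
    return np.concatenate([pos / frame_bgr.shape[1], joint_angles])
```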
arXiv Detail & Related papers (2021-05-20T13:13:01Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
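A minimal sketch of the goal-reaching objective described above, assuming a goal-conditioned setup in which future states from the same offline trajectory are relabeled as goals; the function and reward scheme here are illustrative and do not reproduce the paper's method.
```python
import random
import numpy as np

def sample_goal_relabeled_batch(trajectories, batch_size=64):
    """Turn plain offline trajectories into (state, action, goal, reward) tuples."""
    batch = []
    for _ in range(batch_size):
        traj = random.choice(trajectories)      # list of (state, action) pairs
        t = random.randrange(len(traj) - 1)
        g = random.randrange(t + 1, len(traj))  # a future state becomes the goal
        state, action = traj[t]
        goal = traj[g][0]
        reward = 1.0 if g == t + 1 else 0.0     # reward only when the goal is reached next
        batch.append((state, action, goal, reward))
    return batch

# Example: two short trajectories of 2-D states and scalar actions.
trajs = [[(np.zeros(2), 0.0), (np.ones(2), 1.0), (2 * np.ones(2), 0.0)],
         [(np.ones(2), 1.0), (3 * np.ones(2), 0.0), (np.zeros(2), 1.0)]]
batch = sample_goal_relabeled_batch(trajs, batch_size=4)
```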
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
- Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task.
DVD can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
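The sketch below illustrates the same-task discriminator idea described above under stated assumptions: two clips are reduced to embeddings and a binary classifier predicts whether they show the same task, with the resulting probability usable as a reward signal. The encoder, feature sizes, and training pairs are placeholders, not the DVD implementation.
```python
import torch
import torch.nn as nn

class TaskDiscriminator(nn.Module):
    def __init__(self, clip_feat_dim=128):
        super().__init__()
        # Placeholder clip encoder: mean-pool per-frame features, then project.
        self.encoder = nn.Sequential(nn.Linear(512, clip_feat_dim), nn.ReLU())
        self.classifier = nn.Sequential(nn.Linear(2 * clip_feat_dim, 64), nn.ReLU(),
                                        nn.Linear(64, 1))

    def forward(self, clip_a, clip_b):
        # clip_*: (batch, frames, 512) pre-extracted per-frame features
        za = self.encoder(clip_a.mean(dim=1))
        zb = self.encoder(clip_b.mean(dim=1))
        return self.classifier(torch.cat([za, zb], dim=-1)).squeeze(-1)  # logit: same task?

disc = TaskDiscriminator()
loss_fn = nn.BCEWithLogitsLoss()
# Training pairs: label 1 if the two clips show the same task, 0 otherwise.
clip_a, clip_b = torch.randn(8, 16, 512), torch.randn(8, 16, 512)
labels = torch.randint(0, 2, (8,)).float()
loss = loss_fn(disc(clip_a, clip_b), labels)
```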
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.