Inducing Structure in Reward Learning by Learning Features
- URL: http://arxiv.org/abs/2201.07082v1
- Date: Tue, 18 Jan 2022 16:02:29 GMT
- Title: Inducing Structure in Reward Learning by Learning Features
- Authors: Andreea Bobu, Marius Wiggert, Claire Tomlin, Anca D. Dragan
- Abstract summary: We introduce a novel type of human input for teaching features and an algorithm that utilizes it to learn complex features from the raw state space.
We demonstrate our method in settings where all features have to be learned from scratch, as well as where some of the features are known.
- Score: 31.413656752926208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reward learning enables robots to learn adaptable behaviors from human input.
Traditional methods model the reward as a linear function of hand-crafted
features, but that requires specifying all the relevant features a priori,
which is impossible for real-world tasks. To get around this issue, recent
deep Inverse Reinforcement Learning (IRL) methods learn rewards directly from
the raw state, but this is challenging because the robot has to simultaneously
learn both which features are important and how to combine them. Instead,
we propose a divide and conquer approach: focus human input specifically on
learning the features separately, and only then learn how to combine them into
a reward. We introduce a novel type of human input for teaching features and an
algorithm that utilizes it to learn complex features from the raw state space.
The robot can then learn how to combine them into a reward using
demonstrations, corrections, or other reward learning frameworks. We
demonstrate our method in settings where all features have to be learned from
scratch, as well as where some of the features are known. By first focusing
human input specifically on the feature(s), our method decreases sample
complexity and improves generalization of the learned reward over a deep IRL
baseline. We show this in experiments with a physical 7-DoF robot manipulator,
as well as in a user study conducted in a simulated environment.
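To make the divide-and-conquer recipe above concrete, here is a minimal sketch, not the authors' implementation: a small PyTorch network stands in for one learned feature, a hinge-style ordering loss stands in for training on human feature traces (the novel input type; the paper's exact loss may differ), and the reward is a linear combination whose weights would be fit from demonstrations or corrections. All names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

STATE_DIM = 7  # assumed state size, e.g. joint angles of a 7-DoF arm

# Stage 1: a small network representing one feature, trained on human
# "feature traces" -- state sequences that start where the taught feature
# is highly expressed and end where it is not.
feature_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

def trace_loss(trace):
    """Hinge loss encouraging the feature to decrease monotonically
    along a trace: feature(s_i) > feature(s_j) whenever i < j."""
    phi = feature_net(trace).squeeze(-1)          # (T,) feature values
    diff = phi.unsqueeze(1) - phi.unsqueeze(0)    # diff[i, j] = phi_i - phi_j
    i, j = torch.triu_indices(len(phi), len(phi), offset=1)  # all pairs i < j
    return torch.relu(1.0 - diff[i, j]).mean()    # penalize phi_i < phi_j + 1

# Stage 2: the reward is a linear combination of learned and known features;
# the weights can be fit with demonstrations, corrections, or any other
# standard reward-learning framework.
def reward(state, weights, feature_fns):
    feats = torch.stack([f(state) for f in feature_fns])  # (K,) feature values
    return weights @ feats
```

The point of the split is visible in the sketch: Stage 1 touches only the feature network, so human input is spent where it is most informative, and Stage 2 reduces to classical reward learning over a compact feature vector.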
Related papers
- Adaptive Language-Guided Abstraction from Contrastive Explanations [53.48583372522492]
It is necessary to determine which features of the environment are relevant before deciding how those features should be used to compute a reward.
End-to-end methods for joint feature and reward learning often yield brittle reward functions that are sensitive to spurious state features.
This paper describes a method named ALGAE, which alternates between using language models to iteratively identify human-meaningful features and standard reward-learning techniques to fit a reward over them.
arXiv Detail & Related papers (2024-09-12T16:51:58Z) - Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful framework for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - Human-to-Robot Imitation in the Wild [50.49660984318492]
- Human-to-Robot Imitation in the Wild [50.49660984318492]
We propose an efficient one-shot robot learning algorithm, centered on learning from a third-person perspective.
We show one-shot generalization and success in real-world settings, including 20 different manipulation tasks in the wild.
arXiv Detail & Related papers (2022-07-19T17:59:59Z) - Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm can autonomously discover, learn, and adapt interrelated affordances without pre-programmed actions.
Once learned, these affordances can be used to plan sequences of actions for tasks of varying difficulty.
arXiv Detail & Related papers (2020-09-23T07:18:21Z) - Feature Expansive Reward Learning: Rethinking Human Input [31.413656752926208]
We introduce a new type of human input in which the person guides the robot from states where the feature being taught is highly expressed to states where it is not.
We propose an algorithm for learning the feature from the raw state space and integrating it into the reward function.
arXiv Detail & Related papers (2020-06-23T17:59:34Z) - Emergent Real-World Robotic Skills via Unsupervised Off-Policy
Reinforcement Learning [81.12201426668894]
We develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks.
We show that our proposed algorithm provides substantial improvement in learning efficiency, making reward-free real-world training feasible.
We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training.
arXiv Detail & Related papers (2020-04-27T17:38:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.