Breadcrumbs to the Goal: Goal-Conditioned Exploration from
Human-in-the-Loop Feedback
- URL: http://arxiv.org/abs/2307.11049v1
- Date: Thu, 20 Jul 2023 17:30:37 GMT
- Title: Breadcrumbs to the Goal: Goal-Conditioned Exploration from
Human-in-the-Loop Feedback
- Authors: Marcel Torne, Max Balsells, Zihan Wang, Samedh Desai, Tao Chen, Pulkit
Agrawal, Abhishek Gupta
- Abstract summary: We present a technique called Human Guided Exploration (HuGE), which uses low-quality feedback from non-expert users.
HuGE guides exploration for reinforcement learning not only in simulation but also in the real world, all without meticulous reward specification.
- Score: 22.89046164459011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploration and reward specification are fundamental and intertwined
challenges for reinforcement learning. Solving sequential decision-making tasks
that demand expansive exploration requires either careful design of reward
functions or the use of novelty-seeking exploration bonuses. Human supervisors
can provide effective guidance in the loop to direct the exploration process,
but prior methods to leverage this guidance require constant synchronous
high-quality human feedback, which is expensive and impractical to obtain. In
this work, we present a technique called Human Guided Exploration (HuGE), which
uses low-quality feedback from non-expert users that may be sporadic,
asynchronous, and noisy. HuGE guides exploration for reinforcement learning not
only in simulation but also in the real world, all without meticulous reward
specification. The key concept involves bifurcating human feedback and policy
learning: human feedback steers exploration, while self-supervised learning
from the exploration data yields unbiased policies. This procedure can leverage
noisy, asynchronous human feedback to learn policies with no hand-crafted
reward design or exploration bonuses. HuGE is able to learn a variety of
challenging multi-stage robotic navigation and manipulation tasks in simulation
using crowdsourced feedback from non-expert users. Moreover, this paradigm can
be scaled to learning directly on real-world robots, using occasional,
asynchronous feedback from human supervisors.
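The decoupling described in the abstract can be made concrete with a minimal sketch (illustrative only: the linear scoring model, the comparison update, and the trajectory format below are assumptions, not the authors' implementation):

```python
# Sketch of the HuGE idea: noisy human comparisons only steer where
# exploration restarts, while the policy itself is trained by
# self-supervised hindsight relabeling of the exploration data.
import random
import numpy as np

class GoalSelector:
    """Scores reached states from sporadic, possibly noisy pairwise comparisons."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)  # linear score for simplicity (assumption)
        self.lr = lr

    def update(self, preferred, other):
        # Bradley-Terry-style update: raise the score of the state the
        # annotator judged closer to the goal relative to the other state.
        p = 1.0 / (1.0 + np.exp(self.w @ (other - preferred)))
        self.w += self.lr * (1.0 - p) * (preferred - other)

    def pick_frontier(self, reached_states):
        # Exploration restarts from the highest-scoring reached state: the
        # "breadcrumb" that feedback so far judges closest to the goal.
        scores = [float(self.w @ s) for s in reached_states]
        return reached_states[int(np.argmax(scores))]

def hindsight_relabel(trajectory):
    # Self-supervised policy data: any state reached later in a trajectory
    # is a valid goal for the earlier (state, action) pairs, so the policy
    # never consumes the noisy human scores directly.
    examples = []
    for t, (state, action) in enumerate(trajectory[:-1]):
        future_state, _ = random.choice(trajectory[t + 1:])
        examples.append((state, future_state, action))  # (state, goal, action)
    return examples
```

A goal-conditioned policy is then fit to the relabeled (state, goal) -> action examples by supervised learning; because feedback only reorders where exploration restarts, sporadic or noisy comparisons can slow exploration but do not bias the resulting policy.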
Related papers
- Accelerating Exploration with Unlabeled Prior Data [66.43995032226466]
We study how prior data without reward labels may be used to guide and accelerate exploration for an agent solving a new sparse reward task.
We propose a simple approach that learns a reward model from online experience, labels the unlabeled prior data with optimistic rewards, and then uses it concurrently alongside the online data for downstream policy and critic optimization.
arXiv Detail & Related papers (2023-11-09T00:05:17Z)
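A minimal sketch of the optimistic-relabeling step that summary describes, under my own assumptions (an ensemble reward model and a disagreement-based optimism bonus; the paper's concrete bonus may differ):

```python
import numpy as np

def optimistic_labels(reward_ensemble, prior_states, bonus_weight=1.0):
    """Assign optimistic reward labels to reward-free prior data.

    reward_ensemble: list of callables mapping a state to a predicted
    reward, fit only on the agent's own online experience.
    """
    preds = np.stack([np.array([float(r(s)) for s in prior_states])
                      for r in reward_ensemble])          # (ensemble, N)
    # Optimism via disagreement: prior states the reward model is unsure
    # about receive inflated labels, drawing the agent toward them.
    return preds.mean(axis=0) + bonus_weight * preds.std(axis=0)
```

The relabeled prior transitions are then mixed with online data in standard off-policy policy and critic updates, as the summary above states.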
- Autonomous Robotic Reinforcement Learning with Asynchronous Human Feedback [27.223725464754853]
GEAR enables robots to be placed in real-world environments and left to train autonomously without interruption.
The system streams robot experience to a web interface, requiring only occasional asynchronous feedback from remote, crowdsourced, non-expert humans.
arXiv Detail & Related papers (2023-10-31T16:43:56Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
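A hedged sketch of the goal-distance reward that summary describes; `embed` stands in for a network trained with a time-contrastive objective on the human videos, and the Euclidean metric is an assumption:

```python
import numpy as np

def embedding_goal_reward(embed, observation, goal_image):
    # Reward grows as the current observation's embedding approaches the
    # goal image's embedding, giving a task-agnostic dense signal.
    z_obs = np.asarray(embed(observation))
    z_goal = np.asarray(embed(goal_image))
    return -float(np.linalg.norm(z_obs - z_goal))
```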
- Reward Uncertainty for Exploration in Preference-based Reinforcement Learning [88.34958680436552]
We present an exploration method specifically for preference-based reinforcement learning algorithms.
Our main idea is to design an intrinsic reward that measures novelty based on uncertainty in the learned reward.
Our experiments show that exploration bonus from uncertainty in learned reward improves both feedback- and sample-efficiency of preference-based RL algorithms.
arXiv Detail & Related papers (2022-05-24T23:22:10Z)
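A minimal sketch of the exploration bonus that summary describes (not the paper's implementation; the ensemble and the weight `beta` are illustrative assumptions):

```python
import numpy as np

def reward_with_uncertainty_bonus(reward_ensemble, state, action, beta=0.05):
    preds = np.array([float(r(state, action)) for r in reward_ensemble])
    # Disagreement across learned reward models marks state-actions that
    # human preferences have said little about; visiting them earns a bonus.
    return preds.mean() + beta * preds.std()
```

In a full system the bonus weight would typically be decayed over training so exploration fades once the learned reward becomes reliable; the paper's exact schedule may differ.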
- Follow your Nose: Using General Value Functions for Directed Exploration in Reinforcement Learning [5.40729975786985]
This paper explores the idea of combining exploration with auxiliary task learning using General Value Functions (GVFs) and a directed exploration strategy.
We provide a simple way to learn options (sequences of actions) instead of having to handcraft them, and demonstrate the performance advantage in three navigation tasks.
arXiv Detail & Related papers (2022-03-02T05:14:11Z)
- Deep Exploration for Recommendation Systems [14.937000494745861]
We develop deep exploration methods for recommendation systems.
In particular, we formulate recommendation as a sequential decision problem.
Our experiments are carried out with high-fidelity industrial-grade simulators.
arXiv Detail & Related papers (2021-09-26T06:54:26Z)
- PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
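A small sketch of the experience-relabeling idea named in the title, under my own assumptions about the replay-buffer layout (dictionaries with `s`, `a`, `r` keys):

```python
def relabel_replay_buffer(replay_buffer, reward_model):
    # When the learned reward changes after new human preferences arrive,
    # recompute stored rewards so off-policy critic updates stay consistent
    # with the current estimate of what the human wants.
    for transition in replay_buffer:
        transition["r"] = float(reward_model(transition["s"], transition["a"]))
```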
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state space, guided by a modest number of human-provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- Never Give Up: Learning Directed Exploration Strategies [63.19616370038824]
We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies.
We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies.
A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control.
arXiv Detail & Related papers (2020-02-14T13:57:22Z)
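A rough sketch of the episodic, k-nearest-neighbour novelty bonus that last summary describes (simplified; the paper's exact kernel and constants differ, and the embedding is assumed to come from the inverse dynamics model):

```python
import numpy as np

def episodic_novelty_bonus(episodic_memory, embedding, k=10, eps=1e-3):
    # Compare the current state's controllable-feature embedding against
    # states already visited this episode; being far from the k nearest
    # neighbours yields a larger intrinsic reward.
    if not episodic_memory:
        return 1.0
    sq_dists = np.sort([float(np.sum((m - embedding) ** 2))
                        for m in episodic_memory])[:k]
    similarity = float(np.sum(eps / (sq_dists + eps)))
    return 1.0 / np.sqrt(similarity + 1e-8)
```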
This list is automatically generated from the titles and abstracts of the papers on this site.