Learning more skills through optimistic exploration
- URL: http://arxiv.org/abs/2107.14226v1
- Date: Thu, 29 Jul 2021 17:58:04 GMT
- Title: Learning more skills through optimistic exploration
- Authors: DJ Strouse, Kate Baumli, David Warde-Farley, Vlad Mnih, Steven Hansen
- Abstract summary: Unsupervised skill learning objectives allow agents to learn rich repertoires of behavior in the absence of extrinsic rewards.
An inherent exploration problem lingers: when a novel state is actually encountered, the discriminator will not have seen enough training data to produce accurate and confident skill classifications.
We derive an information gain auxiliary objective that involves training an ensemble of discriminators and rewarding the policy for their disagreement.
Our objective directly estimates the uncertainty that comes from the discriminator not having seen enough training examples, thus providing an intrinsic reward more tailored to the true objective.
- Score: 5.973112138143177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised skill learning objectives (Gregor et al., 2016, Eysenbach et
al., 2018) allow agents to learn rich repertoires of behavior in the absence of
extrinsic rewards. They work by simultaneously training a policy to produce
distinguishable latent-conditioned trajectories, and a discriminator to
evaluate distinguishability by trying to infer latents from trajectories. The
hope is for the agent to explore and master the environment by encouraging each
skill (latent) to reliably reach different states. However, an inherent
exploration problem lingers: when a novel state is actually encountered, the
discriminator will necessarily not have seen enough training data to produce
accurate and confident skill classifications, leading to low intrinsic reward
for the agent and effective penalization of the sort of exploration needed to
actually maximize the objective. To combat this inherent pessimism towards
exploration, we derive an information gain auxiliary objective that involves
training an ensemble of discriminators and rewarding the policy for their
disagreement. Our objective directly estimates the epistemic uncertainty that
comes from the discriminator not having seen enough training examples, thus
providing an intrinsic reward more tailored to the true objective compared to
pseudocount-based methods (Burda et al., 2019). We call this exploration bonus
discriminator disagreement intrinsic reward, or DISDAIN. We demonstrate
empirically that DISDAIN improves skill learning both in a tabular grid world
(Four Rooms) and the 57 games of the Atari Suite (from pixels). Thus, we
encourage researchers to treat pessimism with DISDAIN.
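For readers who want the mechanics, here is a minimal NumPy sketch of the disagreement bonus described in the abstract: each member of a discriminator ensemble outputs a posterior over skills for the current state, and the intrinsic reward is the entropy of the averaged posterior minus the average of the individual entropies, an estimate of the ensemble's epistemic disagreement. The function names and toy numbers are illustrative, not taken from the paper's code; in the paper this quantity serves as an auxiliary exploration bonus alongside the usual skill-discriminability reward.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def disdain_bonus(skill_posteriors):
    """Ensemble-disagreement bonus for a single state.

    skill_posteriors: (num_discriminators, num_skills) array, each row one
        discriminator's posterior q_k(z | s) over skills for the same state s.
    Returns H(mean_k q_k) - mean_k H(q_k): near zero when the ensemble agrees,
    large when discriminators are individually confident but disagree.
    """
    mean_posterior = skill_posteriors.mean(axis=0)
    return entropy(mean_posterior) - entropy(skill_posteriors).mean()

# Toy check: agreement yields ~0 bonus, confident disagreement a large one.
agree = np.array([[0.9, 0.05, 0.05]] * 3)
disagree = np.array([[0.9, 0.05, 0.05],
                     [0.05, 0.9, 0.05],
                     [0.05, 0.05, 0.9]])
print(disdain_bonus(agree))     # ~0.0
print(disdain_bonus(disagree))  # ~0.7
```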
Related papers
- Successor-Predecessor Intrinsic Exploration [18.440869985362998]
We focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards.
We propose Successor-Predecessor Intrinsic Exploration (SPIE), an exploration algorithm based on a novel intrinsic reward combining prospective and retrospective information.
We show that SPIE yields more efficient and ethologically plausible exploratory behaviour in environments with sparse rewards and bottleneck states than competing methods.
arXiv Detail & Related papers (2023-05-24T16:02:51Z)
- DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards [2.09711130126031]
Exploration is a fundamental aspect of reinforcement learning (RL), and its effectiveness is a deciding factor in the performance of RL algorithms.
Recent studies have shown the effectiveness of encouraging exploration with intrinsic rewards estimated from novelties in observations.
We propose DEIR, a novel method in which the intrinsic reward is theoretically derived from a conditional mutual information term.
arXiv Detail & Related papers (2023-04-21T06:39:38Z)
- Self-supervised network distillation: an effective approach to exploration in sparse reward environments [0.0]
Reinforcement learning can train an agent to behave in an environment according to a predesigned reward function, but in sparse-reward environments the agent receives little learning signal.
A solution to this problem may be to equip the agent with an intrinsic motivation that provides informed exploration.
We present Self-supervised Network Distillation (SND), a class of intrinsic motivation algorithms based on the distillation error as a novelty indicator.
arXiv Detail & Related papers (2023-02-22T18:58:09Z)
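Below is a minimal NumPy sketch of the distillation-error signal that SND builds on (in the spirit of random network distillation): a predictor is trained to imitate a fixed, randomly initialised target network, and its residual error on an observation is used as the novelty indicator. The linear "networks", learning rate, and names are illustrative simplifications, not SND's actual architecture or training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, EMB_DIM, LR = 16, 8, 0.01

# Fixed random target network and a trainable predictor (linear for brevity).
W_target = rng.normal(size=(OBS_DIM, EMB_DIM))
W_pred = rng.normal(size=(OBS_DIM, EMB_DIM))

def novelty_bonus(obs):
    """Distillation error of the predictor on `obs` (higher = more novel)."""
    return float(np.mean((obs @ W_pred - obs @ W_target) ** 2))

def distill_step(obs):
    """One SGD step on 0.5 * ||pred(obs) - target(obs)||^2 w.r.t. W_pred."""
    global W_pred
    err = obs @ W_pred - obs @ W_target      # shape (EMB_DIM,)
    W_pred -= LR * np.outer(obs, err)        # exact gradient for a linear net

# Observations distilled many times become familiar: their bonus shrinks.
familiar = rng.normal(size=OBS_DIM)
novel = rng.normal(size=OBS_DIM)
for _ in range(300):
    distill_step(familiar)
print(novelty_bonus(familiar), novelty_bonus(novel))  # tiny vs. large
```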
- Long-Term Exploration in Persistent MDPs [68.8204255655161]
In this paper, we propose an exploration method called Rollback-Explore (RbExplore), which utilizes the concept of the persistent Markov decision process.
We test our algorithm in the hard-exploration Prince of Persia game, without rewards and domain knowledge.
arXiv Detail & Related papers (2021-09-21T13:47:04Z)
- Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills by exhibiting clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z)
- Disturbing Reinforcement Learning Agents with Corrupted Rewards [62.997667081978825]
We analyze the effects of different attack strategies based on reward perturbations on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards are able to mislead the learner, and that with low exploration probability values the learned policy is more robust to corrupted rewards.
arXiv Detail & Related papers (2021-02-12T15:53:48Z)
- Semi-supervised reward learning for offline reinforcement learning [71.6909757718301]
Training agents usually requires reward functions, but rewards are seldom available in practice and their engineering is challenging and laborious.
We propose semi-supervised learning algorithms that learn from limited annotations and incorporate unlabelled data.
In our experiments with a simulated robotic arm, we greatly improve upon behavioural cloning and closely approach the performance achieved with ground truth rewards.
arXiv Detail & Related papers (2020-12-12T20:06:15Z)
- GRIMGEP: Learning Progress for Robust Goal Sampling in Visual Deep Reinforcement Learning [21.661530291654692]
We propose a framework that allows agents to autonomously identify and ignore noisy distracting regions.
Our framework can be combined with any state-of-the-art novelty seeking goal exploration approaches.
arXiv Detail & Related papers (2020-08-10T19:50:06Z)
- Show me the Way: Intrinsic Motivation from Demonstrations [44.87651595571687]
We show that complex exploration behaviors, reflecting different motivations, can be learnt and efficiently used by RL agents to solve tasks for which exhaustive exploration is prohibitive.
We propose to learn an exploration bonus from demonstrations that could transfer these motivations to an artificial agent, with few assumptions about their rationale.
arXiv Detail & Related papers (2020-06-23T11:52:53Z)
- Never Give Up: Learning Directed Exploration Strategies [63.19616370038824]
We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies.
We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies.
A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control.
arXiv Detail & Related papers (2020-02-14T13:57:22Z)
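As a rough illustration of the episodic k-nearest-neighbour bonus summarised above (a simplification, not the authors' implementation: the kernel, the normalisation constants, and the learned inverse-dynamics embedding are all stubbed out here), the reward for the current embedding drops as similar embeddings accumulate in the episode's memory.

```python
import numpy as np

def episodic_bonus(memory, embedding, mean_d2, k=10, eps=1e-3):
    """Simplified NGU-style episodic novelty bonus.

    memory:    (N, D) embeddings stored earlier in the episode (NGU obtains
               these from a self-supervised inverse-dynamics model).
    embedding: (D,) embedding of the current observation.
    mean_d2:   running average of k-NN squared distances, kept by the caller.
    Returns a bonus that is high when the k nearest stored embeddings are far.
    """
    d2 = np.sort(np.sum((memory - embedding) ** 2, axis=1))[:k]
    kernel = eps / (d2 / (mean_d2 + 1e-8) + eps)   # ~1 when d2 << mean_d2
    return 1.0 / np.sqrt(kernel.sum() + 1e-8)

rng = np.random.default_rng(0)
memory = rng.normal(size=(500, 32))
# Crude estimate of the typical k-NN squared distance inside the memory.
mean_d2 = float(np.mean([np.sort(np.sum((memory - e) ** 2, axis=1))[1:11].mean()
                         for e in memory[:50]]))

revisited = memory[0] + 0.01 * rng.normal(size=32)   # next to a stored state
unvisited = 3.0 * rng.normal(size=32)                # far from everything stored
print(episodic_bonus(memory, revisited, mean_d2))    # low bonus
print(episodic_bonus(memory, unvisited, mean_d2))    # much higher bonus
```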
- Combating False Negatives in Adversarial Imitation Learning [67.99941805086154]
In adversarial imitation learning, a discriminator is trained to differentiate agent episodes from expert demonstrations representing the desired behavior.
As the trained policy learns to be more successful, the negative examples become increasingly similar to expert ones.
We propose a method to alleviate the impact of false negatives and test it on the BabyAI environment.
arXiv Detail & Related papers (2020-02-02T14:56:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.