Active Exploration for Inverse Reinforcement Learning
- URL: http://arxiv.org/abs/2207.08645v4
- Date: Tue, 22 Aug 2023 14:11:07 GMT
- Title: Active Exploration for Inverse Reinforcement Learning
- Authors: David Lindner and Andreas Krause and Giorgia Ramponi
- Abstract summary: We propose a novel IRL algorithm, Active exploration for Inverse Reinforcement Learning (AceIRL).
AceIRL actively explores an unknown environment and expert policy to quickly learn the expert's reward function and identify a good policy.
We empirically evaluate AceIRL in simulations and find that it significantly outperforms more naive exploration strategies.
- Score: 58.295273181096036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse Reinforcement Learning (IRL) is a powerful paradigm for inferring a
reward function from expert demonstrations. Many IRL algorithms require a known
transition model and sometimes even a known expert policy, or they at least
require access to a generative model. However, these assumptions are too strong
for many real-world applications, where the environment can be accessed only
through sequential interaction. We propose a novel IRL algorithm: Active
exploration for Inverse Reinforcement Learning (AceIRL), which actively
explores an unknown environment and expert policy to quickly learn the expert's
reward function and identify a good policy. AceIRL uses previous observations
to construct confidence intervals that capture plausible reward functions and
find exploration policies that focus on the most informative regions of the
environment. AceIRL is the first approach to active IRL with sample-complexity
bounds that does not require a generative model of the environment. AceIRL
matches the sample complexity of active IRL with a generative model in the
worst case. Additionally, we establish a problem-dependent bound that relates
the sample complexity of AceIRL to the suboptimality gap of a given IRL
problem. We empirically evaluate AceIRL in simulations and find that it
significantly outperforms more naive exploration strategies.
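The sketch below illustrates the exploration principle described in this abstract in a small tabular setting: keep confidence widths for the reward of each state-action pair and plan an exploration policy that is drawn toward regions where those widths are still large. The Hoeffding-style width, the planning routine, and the omission of the expert-policy estimation are simplifications for illustration, not AceIRL itself.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): explore a tabular MDP
# by planning greedily on the current reward-uncertainty widths, so that data
# collection concentrates on regions where plausible rewards still disagree.

def uncertainty_width(counts, delta=0.1):
    """Hoeffding-style confidence width per (state, action); shrinks with visits."""
    return np.sqrt(np.log(2.0 / delta) / np.maximum(counts, 1.0))

def plan_exploration_policy(P, width, gamma=0.9, iters=200):
    """Value iteration that treats the uncertainty width as the reward,
    yielding a policy drawn toward the most informative regions."""
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = width + gamma * P @ V          # P @ V has shape (S, A)
    return Q.argmax(axis=1)

# Tiny random MDP just to exercise the sketch.
rng = np.random.default_rng(0)
S, A = 5, 2
P = rng.dirichlet(np.ones(S), size=(S, A))      # transition probabilities
counts = np.zeros((S, A))

state = 0
for step in range(500):
    policy = plan_exploration_policy(P, uncertainty_width(counts))
    action = policy[state]
    counts[state, action] += 1
    state = rng.choice(S, p=P[state, action])

print("visit counts per (state, action):\n", counts)
```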
Related papers
- Non-Adversarial Inverse Reinforcement Learning via Successor Feature Matching [23.600285251963395]
In inverse reinforcement learning (IRL), an agent seeks to replicate expert demonstrations through interactions with the environment.
Traditionally, IRL is treated as an adversarial game, where an adversary searches over reward models and a learner optimizes the reward through repeated RL procedures.
We propose a novel approach to IRL by direct policy optimization, exploiting a linear factorization of the return as the inner product of successor features and a reward vector.
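A small numerical sketch of the linear factorization mentioned above: if rewards are linear in features, r(s, a) = phi(s, a) · w, then a policy's return equals the inner product of its discounted successor features with w, so matching the expert's successor features matches its return. The random feature rollouts below are a stand-in for a real environment and are not the paper's matching procedure.

```python
import numpy as np

# Illustrative sketch of the linear factorization r(s, a) = phi(s, a) . w:
# the discounted return of a policy then equals <psi_pi, w>, where psi_pi is
# the policy's discounted sum of features (its successor features). Matching
# the expert's successor features therefore matches the expert's return for
# any reward in this linear family, without explicitly recovering w.

rng = np.random.default_rng(0)
d, gamma, T = 4, 0.95, 200
w = rng.normal(size=d)                      # unknown reward vector (for checking only)

def rollout_features(policy_seed):
    """Monte Carlo estimate of discounted successor features for a toy policy."""
    r = np.random.default_rng(policy_seed)
    psi = np.zeros(d)
    for t in range(T):
        phi = r.normal(size=d)              # stand-in for phi(s_t, a_t)
        psi += (gamma ** t) * phi
    return psi

psi_expert = rollout_features(policy_seed=1)
psi_learner = rollout_features(policy_seed=2)

# Return difference is bounded by ||psi_expert - psi_learner|| * ||w||.
print("expert return:  ", psi_expert @ w)
print("learner return: ", psi_learner @ w)
print("feature gap:    ", np.linalg.norm(psi_expert - psi_learner))
```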
arXiv Detail & Related papers (2024-11-11T14:05:50Z)
- Random Latent Exploration for Deep Reinforcement Learning [71.88709402926415]
This paper introduces a new exploration technique called Random Latent Exploration (RLE)
RLE combines the strengths of bonus-based and noise-based (two popular approaches for effective exploration in deep RL) exploration strategies.
We evaluate RLE on the challenging Atari and IsaacGym benchmarks and show that it achieves higher overall scores across these tasks than other approaches.
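One plausible way to picture the combination of noise-based and bonus-based exploration is to sample a random latent reward direction per episode and shape the reward along it, as in the hedged sketch below; the fixed random embedding `W`, the bonus scale `beta`, and the per-episode latent are illustrative choices, and the paper's actual networks and training procedure differ.

```python
import numpy as np

# Hedged illustration of episode-level random latent rewards: at the start of
# each episode a latent direction z is sampled, and the agent is (temporarily)
# rewarded for driving a fixed random state embedding along z. This conveys
# the flavour of noise-driven, bonus-like exploration; the paper's actual
# architecture, latent distribution, and training procedure are more involved.

rng = np.random.default_rng(0)
obs_dim, latent_dim, beta = 8, 3, 0.1
W = rng.normal(size=(latent_dim, obs_dim)) / np.sqrt(obs_dim)   # fixed random embedding

def latent_bonus(obs, z):
    return float(z @ (W @ obs))

for episode in range(3):
    z = rng.normal(size=latent_dim)
    z /= np.linalg.norm(z)                  # sample a random exploration direction
    total = 0.0
    for t in range(50):
        obs = rng.normal(size=obs_dim)      # stand-in for an environment observation
        env_reward = 0.0                    # stand-in for the task reward
        total += env_reward + beta * latent_bonus(obs, z)
    print(f"episode {episode}: shaped return under latent z = {total:.3f}")
```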
arXiv Detail & Related papers (2024-07-18T17:55:22Z)
- Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective [55.36819597141271]
Inverse Reinforcement Learning (IRL) -- the problem of learning reward functions from demonstrations of an expert policy -- plays a critical role in developing intelligent systems.
This paper provides the first line of results on efficient IRL in vanilla offline and online settings using polynomial samples and runtime.
As an application, we show that the learned rewards can transfer to another target MDP with suitable guarantees.
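The snippet below only shows the mechanical step behind such a transfer claim: take a reward estimated by IRL on a source problem and plan with it in a target MDP via value iteration. The suboptimality guarantee for doing so is the paper's contribution and is not reproduced here.

```python
import numpy as np

# Mechanical illustration of reward transfer: plug a reward learned on one MDP
# into value iteration on a target MDP with different dynamics. This snippet
# only shows the planning step itself, not the paper's guarantees.

def value_iteration(P, r, gamma=0.9, iters=500):
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        Q = r + gamma * P @ Q.max(axis=1)
    return Q.argmax(axis=1), Q

rng = np.random.default_rng(0)
S, A = 6, 2
learned_reward = rng.uniform(size=(S, A))            # stand-in for an IRL estimate
P_target = rng.dirichlet(np.ones(S), size=(S, A))    # target MDP dynamics

policy, Q = value_iteration(P_target, learned_reward)
print("greedy policy in the target MDP:", policy)
```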
arXiv Detail & Related papers (2023-11-29T00:09:01Z)
- Exploratory State Representation Learning [63.942632088208505]
We propose a new approach called XSRL (eXploratory State Representation Learning) to solve the problems of exploration and SRL in parallel.
On one hand, it jointly learns compact state representations and a state transition estimator which is used to remove unexploitable information from the representations.
On the other hand, it continuously trains an inverse model, and adds to the prediction error of this model a $k$-step learning progress bonus to form the objective of a discovery policy.
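A hedged sketch of the learning-progress idea: track the inverse model's prediction error over the last k updates and reward the discovery policy both for high error and for recent improvement. The function names, the window handling, and the clipping of negative progress are illustrative choices, not the XSRL objective.

```python
from collections import deque
import numpy as np

# Sketch of a k-step learning-progress bonus: keep a short history of the
# inverse model's prediction error and score states both by how high the error
# is and by how much the error has dropped over the last k updates (i.e. where
# the model is actively learning). The actual XSRL objective is more elaborate.

k = 5
error_history = deque(maxlen=k + 1)

def discovery_objective(current_error):
    error_history.append(current_error)
    if len(error_history) <= k:
        progress = 0.0
    else:
        progress = error_history[0] - error_history[-1]   # k-step improvement
    return current_error + max(progress, 0.0)

rng = np.random.default_rng(0)
# Stand-in for a sequence of inverse-model prediction errors that shrink as
# the model improves on the visited region.
errors = np.abs(rng.normal(loc=1.0, scale=0.2, size=20)) * np.linspace(1.0, 0.3, 20)
for e in errors:
    obj = discovery_objective(float(e))
print("last objective value:", round(obj, 4))
```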
arXiv Detail & Related papers (2021-09-28T10:11:07Z)
- Focus on Impact: Indoor Exploration with Intrinsic Motivation [45.97756658635314]
In this work, we propose to train a model with a purely intrinsic reward signal to guide exploration.
We include a neural-based density model and replace the traditional count-based regularization with an estimated pseudo-count of previously visited states.
We also show that a robot equipped with the proposed approach seamlessly adapts to point-goal navigation and real-world deployment.
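The toy below conveys the shape of a pseudo-count bonus: an intrinsic reward that starts large for novel observations and decays with (estimated) revisits. A discretised counter stands in for the neural density model used in the paper.

```python
from collections import defaultdict
import numpy as np

# Stand-in for a pseudo-count intrinsic reward: the bonus decays as a state is
# (estimated to be) revisited. The paper derives the pseudo-count from a neural
# density model over observations; here a discretised counter plays that role
# so the shape of the bonus is easy to see.

visit_counts = defaultdict(int)

def intrinsic_reward(observation, bins=10):
    key = tuple(np.floor(np.asarray(observation) * bins).astype(int))
    visit_counts[key] += 1
    return 1.0 / np.sqrt(visit_counts[key])    # large for novel states, shrinks with revisits

rng = np.random.default_rng(0)
for step in range(5):
    obs = rng.uniform(size=2)                  # stand-in for an observation
    print(f"step {step}: bonus = {intrinsic_reward(obs):.3f}")

probe = np.full(2, 0.55)
print("repeated visits to one state:",
      [round(intrinsic_reward(probe), 3) for _ in range(3)])
```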
arXiv Detail & Related papers (2021-09-14T18:00:07Z)
- OPIRL: Sample Efficient Off-Policy Inverse Reinforcement Learning via Distribution Matching [12.335788185691916]
Inverse Reinforcement Learning (IRL) is attractive in scenarios where reward engineering can be tedious.
Prior IRL algorithms use on-policy transitions, which require intensive sampling from the current policy for stable and optimal performance.
We present Off-Policy Inverse Reinforcement Learning (OPIRL), which adopts an off-policy data distribution instead of an on-policy one.
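As a hedged illustration of off-policy distribution matching in general (not OPIRL's exact objective), the snippet below trains a logistic discriminator to separate expert transitions from replay-buffer transitions and uses its log-odds as a learned reward that an off-policy learner could then maximise from the same buffer.

```python
import numpy as np

# Generic off-policy distribution-matching recipe (not OPIRL's exact objective):
# a logistic discriminator separates expert transitions from replay-buffer
# transitions, and its log-odds serve as a learned reward.

rng = np.random.default_rng(0)
d = 4
expert = rng.normal(loc=1.0, size=(256, d))     # stand-in for expert (s, a) features
replay = rng.normal(loc=0.0, size=(256, d))     # stand-in for replay-buffer features

w, b, lr = np.zeros(d), 0.0, 0.1
X = np.vstack([expert, replay])
y = np.concatenate([np.ones(len(expert)), np.zeros(len(replay))])

for _ in range(500):                            # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * float(np.mean(p - y))

def learned_reward(features):
    """Log density ratio log D/(1-D), i.e. the discriminator's logit."""
    return features @ w + b

print("reward on expert-like sample: ", float(learned_reward(expert[0])))
print("reward on replay-like sample: ", float(learned_reward(replay[0])))
```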
arXiv Detail & Related papers (2021-09-09T14:32:26Z)
- Online reinforcement learning with sparse rewards through an active inference capsule [62.997667081978825]
This paper introduces an active inference agent which minimizes the novel free energy of the expected future.
Our model is capable of solving sparse-reward problems with a very high sample efficiency.
We also introduce a novel method for approximating the prior model from the reward function, which simplifies the expression of complex objectives.
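The toy calculation below scores actions with a simplified expected free energy in the common risk-plus-ambiguity form: distance of predicted outcomes from a preference distribution plus expected outcome noise. The paper's agent (free energy of the expected future, with a prior model approximated from the reward) is richer than this illustration.

```python
import numpy as np

# Toy expected-free-energy calculation in the standard "risk + ambiguity" form:
# each action is scored by how far its predicted outcome distribution is from a
# preference distribution (risk) plus how noisy its outcomes are expected to be
# (ambiguity); the agent picks the action with the lowest score.

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def kl(p, q):
    p, q = np.clip(p, 1e-12, 1.0), np.clip(q, 1e-12, 1.0)
    return float(np.sum(p * np.log(p / q)))

likelihood = np.array([[0.9, 0.1],      # P(o | s): rows are hidden states
                       [0.2, 0.8]])
preferences = np.array([0.95, 0.05])    # P(o): the agent prefers outcome 0
state_given_action = {                  # q(s | a): predicted hidden state per action
    "stay": np.array([0.5, 0.5]),
    "go":   np.array([0.9, 0.1]),
}

scores = {}
for action, qs in state_given_action.items():
    qo = qs @ likelihood                                 # predicted outcomes q(o | a)
    risk = kl(qo, preferences)
    ambiguity = float(np.sum(qs * [entropy(row) for row in likelihood]))
    scores[action] = risk + ambiguity

best = min(scores, key=scores.get)
print("expected free energy per action:", {a: round(g, 3) for a, g in scores.items()})
print("selected action:", best)
```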
arXiv Detail & Related papers (2021-06-04T10:03:36Z)
- Exploration by Maximizing Rényi Entropy for Reward-Free RL Framework [28.430845498323745]
We consider a reward-free reinforcement learning framework that separates exploration from exploitation.
In the exploration phase, the agent learns an exploratory policy by interacting with a reward-free environment.
In the planning phase, the agent computes a good policy for any reward function based on the dataset.
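A count-based caricature of the exploration phase: reward each visited state in proportion to d(s)^(alpha-1), where d is the empirical visitation distribution and alpha is the Rényi order, which pushes visitation toward the flat distributions that maximise Rényi entropy. The paper optimises the policy's true visitation distribution; this surrogate only shows the direction of the incentive.

```python
import numpy as np
from collections import defaultdict

# Count-based caricature of reward-free exploration with Renyi entropy: give
# each state an intrinsic reward proportional to d(s)**(alpha - 1), where d is
# the current empirical visitation distribution and alpha in (0, 1) is the
# Renyi order. Rare states get large rewards, driving visitation toward the
# flat distributions that maximise Renyi entropy.

alpha = 0.5
counts = defaultdict(int)
total = 0

def renyi_bonus(state):
    global total
    counts[state] += 1
    total += 1
    d_s = counts[state] / total
    return d_s ** (alpha - 1.0)          # decreasing in d_s for alpha < 1

def renyi_entropy():
    d = np.array([c / total for c in counts.values()])
    return float(np.log(np.sum(d ** alpha)) / (1.0 - alpha))

rng = np.random.default_rng(0)
for step in range(1000):
    state = int(rng.integers(0, 10))     # stand-in for the state reached by the policy
    bonus = renyi_bonus(state)

print("Renyi entropy of the empirical state distribution:", round(renyi_entropy(), 3))
print("maximum possible (uniform over 10 states):", round(np.log(10), 3))
```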
arXiv Detail & Related papers (2020-06-11T05:05:31Z)
- Never Give Up: Learning Directed Exploration Strategies [63.19616370038824]
We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies.
We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies.
A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control.
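A minimal version of the episodic bonus described here: embed each observation (a fixed random projection stands in for the learned inverse-dynamics embedding), compare it against this episode's memory with k-nearest neighbours, and pay a larger bonus when the neighbours are far away. Constants and the kernel are simplified relative to the paper.

```python
import numpy as np

# Minimal episodic novelty bonus in the spirit described above: observations
# are embedded (a fixed random projection stands in for the learned
# inverse-dynamics embedding), compared against this episode's memory with
# k-nearest neighbours, and the bonus is large when the neighbours are far.

rng = np.random.default_rng(0)
obs_dim, emb_dim, k, eps = 16, 4, 5, 1e-3
W = rng.normal(size=(emb_dim, obs_dim)) / np.sqrt(obs_dim)   # stand-in embedding
episodic_memory = []

def episodic_bonus(obs):
    e = W @ obs
    if episodic_memory:
        dists = np.sort([np.sum((e - m) ** 2) for m in episodic_memory])[:k]
        similarity = np.sum(eps / (dists + eps))
        bonus = 1.0 / np.sqrt(similarity + 1e-8)
    else:
        bonus = 1.0
    episodic_memory.append(e)
    return float(bonus)

for step in range(5):
    obs = rng.normal(size=obs_dim)
    print(f"step {step}: bonus for a fresh observation = {episodic_bonus(obs):.3f}")

repeat = np.zeros(obs_dim)
first, second = episodic_bonus(repeat), episodic_bonus(repeat)
print(f"new observation: {first:.3f}, same observation seen again: {second:.3f}")
```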
arXiv Detail & Related papers (2020-02-14T13:57:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.