Kernel Density Bayesian Inverse Reinforcement Learning
- URL: http://arxiv.org/abs/2303.06827v2
- Date: Thu, 12 Oct 2023 23:04:25 GMT
- Title: Kernel Density Bayesian Inverse Reinforcement Learning
- Authors: Aishwarya Mandyam, Didong Li, Diana Cai, Andrew Jones, Barbara E. Engelhardt
- Abstract summary: Inverse reinforcement learning (IRL) is a powerful framework to infer an agent's reward function by observing its behavior.
We introduce kernel density Bayesian IRL, which uses conditional kernel density estimation to directly approximate the likelihood.
- Score: 6.03878742235195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse reinforcement learning~(IRL) is a powerful framework to infer an
agent's reward function by observing its behavior, but IRL algorithms that
learn point estimates of the reward function can be misleading because there
may be several functions that describe an agent's behavior equally well. A
Bayesian approach to IRL models a distribution over candidate reward functions,
alleviating the shortcomings of learning a point estimate. However, several
Bayesian IRL algorithms use a $Q$-value function in place of the likelihood
function. The resulting posterior is computationally intensive to calculate,
has few theoretical guarantees, and the $Q$-value function is often a poor
approximation for the likelihood. We introduce kernel density Bayesian IRL
(KD-BIRL), which uses conditional kernel density estimation to directly
approximate the likelihood, providing an efficient framework that, with a
modified reward function parameterization, is applicable to environments with
complex and infinite state spaces. We demonstrate KD-BIRL's benefits through a
series of experiments in Gridworld environments and a simulated sepsis
treatment task.
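To make the likelihood approximation concrete, here is a minimal sketch of a conditional kernel density estimate in Python. It assumes scalar states, actions, and reward parameters, Gaussian kernels, and a pre-generated training set of (state, action, reward-parameter) tuples collected from agents trained under candidate rewards; all names and bandwidths are illustrative, not the paper's implementation.

```python
import numpy as np

def gauss(u, h):
    """Gaussian kernel with bandwidth h."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def kde_likelihood(s, a, theta, train_s, train_a, train_theta,
                   h_sa=0.5, h_theta=0.5):
    """Estimate p(s, a | theta) by kernel-weighting training tuples
    whose reward parameter lies near theta (conditional KDE)."""
    w = gauss(train_theta - theta, h_theta)             # weights in reward space
    k = gauss(train_s - s, h_sa) * gauss(train_a - a, h_sa)
    return float(np.sum(w * k) / (np.sum(w) + 1e-12))

def log_posterior(theta, demos, train, log_prior):
    """Unnormalized log posterior over reward parameters given expert
    demonstrations (a list of (s, a) pairs)."""
    train_s, train_a, train_theta = train
    ll = sum(np.log(kde_likelihood(s, a, theta, train_s, train_a,
                                   train_theta) + 1e-12) for s, a in demos)
    return log_prior(theta) + ll
```

Because the likelihood is evaluated directly, `log_posterior` can be dropped into any off-the-shelf MCMC sampler without solving an RL problem inside the sampling loop.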
Related papers
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
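As one illustration of what making RANSAC differentiable can involve (a common relaxation, not necessarily the exact mechanism of $\nabla$-RANSAC), the hard inlier count can be replaced by a sigmoid so that gradients flow through the scoring step:

```python
import numpy as np

def soft_inlier_score(residuals, tau, beta=10.0):
    """Sigmoid relaxation of the hard inlier test (residual < tau).
    Differentiable in the residuals and the threshold tau; larger beta
    approaches the hard count more closely."""
    return float(np.sum(1.0 / (1.0 + np.exp(beta * (residuals - tau)))))
```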
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
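A crude tabular sketch of the single-loop idea follows: interleave one soft Bellman backup with one likelihood-gradient step on the reward, instead of nesting a full RL solve inside every reward update. The reward gradient here is a myopic approximation, and all names and step sizes are illustrative rather than the cited algorithm.

```python
import numpy as np

def single_loop_mlirl(P, demos, gamma=0.95, lr=0.05, iters=500):
    """P: (S, A, S) transition tensor; demos: list of expert (s, a) pairs."""
    S, A, _ = P.shape
    r = np.zeros((S, A))
    Q = np.zeros((S, A))
    for _ in range(iters):
        # one soft policy-evaluation step under the current reward
        V = np.log(np.exp(Q).sum(axis=1))        # soft (log-sum-exp) value
        Q = r + gamma * (P @ V)                  # single soft Bellman backup
        pi = np.exp(Q - np.log(np.exp(Q).sum(axis=1, keepdims=True)))
        # one maximum-likelihood gradient step on the reward
        # (myopic: grad log pi(a|s) w.r.t. r(s, .) ~= onehot(a) - pi(s, .))
        grad = np.zeros_like(r)
        for s, a in demos:
            grad[s] -= pi[s]
            grad[s, a] += 1.0
        r += lr * grad
    return r, pi
```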
arXiv Detail & Related papers (2022-10-04T17:13:45Z)
- Provably Efficient Offline Reinforcement Learning with Trajectory-Wise Reward [66.81579829897392]
We propose a novel offline reinforcement learning algorithm called Pessimistic vAlue iteRaTion with rEward Decomposition (PARTED).
PARTED decomposes the trajectory return into per-step proxy rewards via least-squares-based reward redistribution, and then performs pessimistic value iteration based on the learned proxy rewards.
To the best of our knowledge, PARTED is the first offline RL algorithm that is provably efficient in general MDPs with trajectory-wise rewards.
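The redistribution step has a simple linear-regression skeleton. The sketch below (illustrative names, plus a ridge term for numerical stability; PARTED's pessimism and uncertainty quantification are not shown) fits a weight vector so that per-step proxy rewards sum to each trajectory's return:

```python
import numpy as np

def redistribute_rewards(traj_features, traj_returns, reg=1e-3):
    """traj_features: list of (T_i, d) arrays of per-step features phi(s_t, a_t);
    traj_returns: length-N array of trajectory-level returns.
    Solves min_w sum_i (R_i - sum_t phi_{i,t} @ w)^2 + reg * ||w||^2,
    so the per-step proxy reward is r_hat_t = phi_t @ w."""
    X = np.stack([f.sum(axis=0) for f in traj_features])  # (N, d) summed features
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ np.asarray(traj_returns))
```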
arXiv Detail & Related papers (2022-06-13T19:11:22Z)
- Probability Density Estimation Based Imitation Learning [11.262633728487165]
Imitation Learning (IL) is an effective learning paradigm exploiting the interactions between agents and environments.
In this work, a novel reward function based on probability density estimation is proposed for IRL.
We present a "watch-try-learn" style framework named Probability Density Estimation based Imitation Learning (PDEIL)
arXiv Detail & Related papers (2021-12-13T15:55:38Z)
- Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization [43.51553742077343]
Inverse reinforcement learning (IRL) is relevant to a variety of tasks including value alignment and robot learning from demonstration.
This paper presents an IRL framework called Bayesian optimization-IRL (BO-IRL) which identifies multiple solutions consistent with the expert demonstrations.
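A generic Bayesian-optimization loop over a low-dimensional reward parameterization is sketched below with a plain RBF kernel and expected improvement; BO-IRL itself tailors the kernel to reward functions, which is not reproduced here, and `score_fn` (mapping reward parameters to agreement with the demonstrations) is an assumed input.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best):
    z = (mu - best) / (sigma + 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_irl(score_fn, candidates, n_init=5, n_iter=20, seed=0):
    """candidates: (M, d) grid of reward parameter vectors to search over."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), size=n_init, replace=False)
    X = candidates[idx]
    y = np.array([score_fn(x) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, score_fn(x_next))
    return X[np.argmax(y)]   # best-scoring reward parameters found
```

Since every evaluated candidate is retained, inspecting the full high-scoring set rather than only the argmax is one way to surface the multiple consistent solutions the paper emphasizes.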
arXiv Detail & Related papers (2020-11-17T10:17:45Z)
- Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration [143.43658264904863]
We show how, under a more standard notion of low inherent Bellman error, typically employed in least-squares value-iteration-style algorithms, one can provide strong PAC guarantees on learning a near-optimal value function.
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near-optimal policy for any (linear) reward function.
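The least-squares value-iteration template that the analysis builds on can be written compactly. This sketch assumes a fixed batch of transitions with linear features and omits the paper's exploration bonuses and reward-free machinery.

```python
import numpy as np

def lsvi(phi, phi_next_all, r, H, reg=1.0):
    """phi: (N, d) features of visited (s, a) pairs;
    phi_next_all: (N, A, d) features of every action at the next state;
    r: (N,) observed rewards. Returns weights w_h with Q_h(s, a) = phi(s, a) @ w_h."""
    d = phi.shape[1]
    A = phi.T @ phi + reg * np.eye(d)                    # regularized Gram matrix
    w = np.zeros((H + 1, d))                             # w[H] = 0: terminal value
    for h in range(H - 1, -1, -1):
        V_next = (phi_next_all @ w[h + 1]).max(axis=1)   # greedy next-step value
        w[h] = np.linalg.solve(A, phi.T @ (r + V_next))  # ridge-regression backup
    return w[:H]
```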
arXiv Detail & Related papers (2020-08-18T04:34:21Z)
- Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$, where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z)
- $\pi$VAE: a stochastic process prior for Bayesian deep learning with MCMC [2.4792948967354236]
We propose a novel variational autoencoder called the prior encoding variational autoencoder ($\pi$VAE).
We show that our framework can accurately learn not only expressive function classes such as Gaussian processes, but also properties of functions that enable statistical inference.
Perhaps most usefully, we demonstrate that the learnt low-dimensional latent representation provides an elegant and scalable means of performing inference within probabilistic programming languages such as Stan.
arXiv Detail & Related papers (2020-02-17T10:23:18Z)