Langevin Dynamics for Adaptive Inverse Reinforcement Learning of
Stochastic Gradient Algorithms
- URL: http://arxiv.org/abs/2006.11674v2
- Date: Mon, 18 Jan 2021 14:40:09 GMT
- Title: Langevin Dynamics for Adaptive Inverse Reinforcement Learning of
Stochastic Gradient Algorithms
- Authors: Vikram Krishnamurthy and George Yin
- Abstract summary: Inverse reinforcement learning (IRL) aims to estimate the reward function of optimizing agents by observing their response.
We present a generalized Langevin dynamics algorithm to estimate the reward function $R(\theta)$.
The proposed IRL algorithms use kernel-based passive learning schemes and generate samples from the distribution proportional to $\exp(R(\theta))$.
- Score: 21.796874356469644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse reinforcement learning (IRL) aims to estimate the reward function of
optimizing agents by observing their response (estimates or actions). This
paper considers IRL when noisy estimates of the gradient of a reward function
generated by multiple stochastic gradient agents are observed. We present a
generalized Langevin dynamics algorithm to estimate the reward function
$R(\theta)$; specifically, the resulting Langevin algorithm asymptotically
generates samples from the distribution proportional to $\exp(R(\theta))$. The
proposed IRL algorithms use kernel-based passive learning schemes. We also
construct multi-kernel passive Langevin algorithms for IRL which are suitable
for high dimensional data. The performance of the proposed IRL algorithms is
illustrated on examples in adaptive Bayesian learning, logistic regression
(high dimensional problem) and constrained Markov decision processes. We prove
weak convergence of the proposed IRL algorithms using martingale averaging
methods. We also analyze the tracking performance of the IRL algorithms in
non-stationary environments where the utility function $R(\theta)$ jump-changes
over time according to a slow Markov chain.
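To make the kernel-based passive Langevin idea in the abstract concrete, the sketch below runs a scalar toy example: an agent performs stochastic gradient steps on an unknown reward, the IRL learner observes only the agent's iterates and noisy gradients, and a kernel-weighted Langevin recursion produces samples that concentrate where $\exp(R(\theta))$ is large. The quadratic reward, Gaussian kernel, constant step size, and all constants are assumptions chosen for readability; this is a minimal sketch, not the paper's exact algorithm or tuning.
```python
import numpy as np

# Minimal sketch of a kernel-based passive Langevin IRL update (illustrative only;
# kernel, reward, and constants below are assumptions, not the paper's exact scheme).
#
# Setup: agents run stochastic gradient ascent on an unknown reward R(theta), and we
# passively observe pairs (psi_k, g_k), where psi_k is an agent iterate and g_k is a
# noisy estimate of grad R(psi_k). The Langevin iterate theta is nudged by an observed
# gradient only when psi_k is close to theta (kernel weight), plus injected Gaussian
# noise, aiming to draw samples roughly proportional to exp(beta * R(theta)).

rng = np.random.default_rng(0)

def reward_grad(theta):
    """True (unknown to the IRL learner) reward gradient; toy quadratic reward."""
    return -(theta - 2.0)

def gaussian_kernel(u, bandwidth):
    """1-D Gaussian kernel used to weight observations near the current iterate."""
    return np.exp(-0.5 * (u / bandwidth) ** 2) / (np.sqrt(2 * np.pi) * bandwidth)

# --- generate the agents' observations (psi_k, g_k) ---
num_steps = 20000
agent_step = 0.05
psi = rng.normal(0.0, 3.0)
observations = []
for k in range(num_steps):
    if k % 200 == 0:                        # fresh agent / restart: new initial point
        psi = rng.normal(0.0, 3.0)
    g = reward_grad(psi) + 0.3 * rng.normal()   # noisy gradient seen by the IRL learner
    observations.append((psi, g))
    psi = psi + agent_step * g                  # agent's own SGD step (not observed as such)

# --- passive Langevin IRL recursion ---
step = 0.01        # IRL step size (assumed constant here)
bandwidth = 0.5    # kernel bandwidth
beta = 10.0        # inverse temperature controlling the injected noise
theta = 0.0
samples = []
for psi_k, g_k in observations:
    w = gaussian_kernel(psi_k - theta, bandwidth)      # only nearby observations contribute
    noise = np.sqrt(2.0 * step / beta) * rng.normal()  # injected Langevin noise
    theta = theta + step * w * g_k + noise
    samples.append(theta)

# After burn-in the samples should cluster near theta = 2, the toy reward's maximizer.
print("mean of theta samples after burn-in:", np.mean(samples[5000:]))
```
The kernel weight is what makes the scheme passive: the learner never chooses where the gradient is evaluated, it only exploits observations that happen to fall near its current iterate. In the paper the analogous recursion is analyzed via weak convergence and martingale averaging arguments rather than this fixed-step simulation.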
Related papers
- Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo [104.9535542833054]
We present a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning (RL).
We instead directly sample the Q function from its posterior distribution, by using Langevin Monte Carlo.
Our approach achieves better or similar results compared with state-of-the-art deep RL algorithms on several challenging exploration tasks from the Atari57 suite.
arXiv Detail & Related papers (2023-05-29T17:11:28Z) - Finite-Sample Bounds for Adaptive Inverse Reinforcement Learning using
Passive Langevin Dynamics [15.878313629774269]
This paper provides a finite-sample analysis of a passive stochastic gradient Langevin dynamics (PSGLD) algorithm designed to achieve adaptive inverse reinforcement learning (IRL).
We obtain finite-sample bounds on the 2-Wasserstein distance between the estimates generated by the PSGLD algorithm and the cost function.
This work extends a line of research in analysis of passive adaptive gradient algorithms to the finite-sample regime for Langevin dynamics.
arXiv Detail & Related papers (2023-04-18T16:39:51Z) - Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time
Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z) - On Reward-Free RL with Kernel and Neural Function Approximations:
Single-Agent MDP and Markov Game [140.19656665344917]
We study the reward-free RL problem, where an agent aims to thoroughly explore the environment without any pre-specified reward function.
We tackle this problem under the context of function approximation, leveraging powerful function approximators.
We establish the first provably efficient reward-free RL algorithm with kernel and neural function approximators.
arXiv Detail & Related papers (2021-10-19T07:26:33Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with an improved iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Randomized Exploration for Reinforcement Learning with General Value
Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z) - An Efficient Algorithm for Deep Stochastic Contextual Bandits [10.298368632706817]
In contextual bandit problems, an agent selects an action based on certain observed context to maximize the reward over iterations.
Recently there have been a few studies using a deep neural network (DNN), trained by a gradient-based method, to predict the expected reward of an action.
arXiv Detail & Related papers (2021-04-12T16:34:43Z) - Reinforcement Learning with General Value Function Approximation:
Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$, where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z) - Accelerated Learning with Robustness to Adversarial Regressors [1.0499611180329802]
We propose a new discrete time algorithm which provides stability and convergence guarantees in the presence of adversarial regressors.
In particular, our algorithm reaches an $\epsilon$-suboptimal point in at most $\tilde{\mathcal{O}}(1/\sqrt{\epsilon})$ iterations when regressors are constant.
arXiv Detail & Related papers (2020-05-04T14:42:49Z)