Policy Zooming: Adaptive Discretization-based Infinite-Horizon Average-Reward Reinforcement Learning
- URL: http://arxiv.org/abs/2405.18793v2
- Date: Fri, 23 Aug 2024 12:35:25 GMT
- Title: Policy Zooming: Adaptive Discretization-based Infinite-Horizon Average-Reward Reinforcement Learning
- Authors: Avik Kar, Rahul Singh
- Abstract summary: We develop an algorithm PZRL that discretizes the state-action space adaptively and zooms in to promising regions of the "policy space".
We show that the regret of PZRL can be bounded as $\tilde{\mathcal{O}}\big(T^{1 - d_{\text{eff.}}^{-1}}\big)$.
- Score: 2.2984209387877628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study infinite-horizon average-reward reinforcement learning (RL) for Lipschitz MDPs and develop an algorithm PZRL that discretizes the state-action space adaptively and zooms in to promising regions of the "policy space" which seem to yield high average rewards. We show that the regret of PZRL can be bounded as $\tilde{\mathcal{O}}\big(T^{1 - d_{\text{eff.}}^{-1}}\big)$, where $d_{\text{eff.}} = 2d_\mathcal{S} + d^\Phi_z + 2$, $d_\mathcal{S}$ is the dimension of the state space, and $d^\Phi_z$ is the zooming dimension. $d^\Phi_z$ is a problem-dependent quantity that depends not only on the underlying MDP but also on the class of policies $\Phi$ used by the agent, which allows us to conclude that if the agent knows a priori that the optimal policy belongs to a low-complexity class (one with a small $d^\Phi_z$), then its regret will be small. The current work shows how to capture adaptivity gains for infinite-horizon average-reward RL in terms of $d^\Phi_z$. We note that the preexisting notions of zooming dimension are adept at handling only the episodic RL case, since the zooming dimension approaches the covering dimension of the state-action space as $T\to\infty$ and hence does not yield any adaptivity gains. Several experiments are conducted to evaluate the performance of PZRL. PZRL outperforms other state-of-the-art algorithms; this clearly demonstrates the gains arising from adaptivity.
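To make the main result concrete, here is a minimal sketch (with dimension values assumed purely for illustration, not taken from the paper) that evaluates the effective dimension $d_{\text{eff.}} = 2d_\mathcal{S} + d^\Phi_z + 2$ and the resulting regret exponent $1 - d_{\text{eff.}}^{-1}$:

```python
# Minimal sketch (dimension values assumed for illustration, not from the paper):
# computes the exponent of T in the PZRL regret bound O~(T^(1 - 1/d_eff)),
# where d_eff = 2*d_S + d_z_Phi + 2.

def pzrl_regret_exponent(d_state: int, d_zoom: int) -> float:
    """Exponent 1 - 1/d_eff of T in the PZRL regret bound."""
    d_eff = 2 * d_state + d_zoom + 2
    return 1.0 - 1.0 / d_eff

# A 1-D state space with a low-complexity policy class (d_z_Phi = 1) gives
# d_eff = 5 and regret O~(T^0.8); a richer class with d_z_Phi = 3 gives
# d_eff = 7 and the weaker rate O~(T^(6/7)).
print(pzrl_regret_exponent(d_state=1, d_zoom=1))  # 0.8
print(pzrl_regret_exponent(d_state=1, d_zoom=3))  # 0.8571...
```

A smaller zooming dimension $d^\Phi_z$, i.e., a lower-complexity policy class, directly lowers the exponent of $T$; this is the adaptivity gain described in the abstract.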
Related papers
- Provably Adaptive Average Reward Reinforcement Learning for Metric Spaces [2.2984209387877628]
We develop an algorithm ZoRL that discretizes the state-action space adaptively and zooms into promising regions of the state-action space.
ZoRL outperforms other state-of-the-art algorithms in experiments.
arXiv Detail & Related papers (2024-10-25T18:14:42Z) - Provably Efficient CVaR RL in Low-rank MDPs [58.58570425202862]
We study risk-sensitive Reinforcement Learning (RL).
We propose a novel Upper Confidence Bound (UCB) bonus-driven algorithm to balance interplay between exploration, exploitation, and representation learning in CVaR RL.
We prove that our algorithm achieves a sample complexity bound for obtaining an $\epsilon$-optimal CVaR, where $H$ is the length of each episode, $A$ is the capacity of the action space, and $d$ is the dimension of representations.
arXiv Detail & Related papers (2023-11-20T17:44:40Z) - Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs [72.40181882916089]
We show that our algorithm achieves an $\tilde{O}\big((d+\log(|\mathcal{S}|^2 |\mathcal{A}|))\sqrt{K}\big)$ regret with full-information feedback, where $d$ is the dimension of a known feature mapping linearly parametrizing the unknown transition kernel of the MDP, $K$ is the number of episodes, and $|\mathcal{S}|$ and $|\mathcal{A}|$ are the cardinalities of the state and action spaces.
arXiv Detail & Related papers (2023-05-15T05:37:32Z) - Provably Efficient Offline Reinforcement Learning with Trajectory-Wise
Reward [66.81579829897392]
We propose a novel offline reinforcement learning algorithm called Pessimistic vAlue iteRaTion with rEward Decomposition (PARTED).
PARTED decomposes the trajectory return into per-step proxy rewards via least-squares-based reward redistribution, and then performs pessimistic value iteration based on the learned proxy rewards.
To the best of our knowledge, PARTED is the first offline RL algorithm that is provably efficient in general MDPs with trajectory-wise reward.
arXiv Detail & Related papers (2022-06-13T19:11:22Z) - Human-in-the-loop: Provably Efficient Preference-based Reinforcement
Learning with General Function Approximation [107.54516740713969]
We study human-in-the-loop reinforcement learning (RL) with trajectory preferences.
Instead of receiving a numeric reward at each step, the agent only receives preferences over trajectory pairs from a human overseer.
We propose the first optimistic model-based algorithm for preference-based RL (PbRL) with general function approximation.
arXiv Detail & Related papers (2022-05-23T09:03:24Z) - Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov
Decision Processes [61.11090361892306]
Reward-free reinforcement learning (RL) considers the setting where the agent does not have access to a reward function during exploration.
We show that the sample-complexity separation between reward-free and reward-aware RL does not exist in the setting of linear MDPs.
We develop a computationally efficient algorithm for reward-free RL in a $d$-dimensional linear MDP.
arXiv Detail & Related papers (2022-01-26T22:09:59Z) - Towards Instance-Optimal Offline Reinforcement Learning with Pessimism [34.54294677335518]
We study the offline reinforcement learning (offline RL) problem, where the goal is to learn a reward-maximizing policy in an unknown Markov Decision Process (MDP).
In this work, we analyze the Adaptive Pessimistic Value Iteration (APVI) algorithm and derive a suboptimality upper bound that nearly matches $O\Big(\sum_{h=1}^{H}\sum_{s_h,a_h} d^{\pi^\star}_h(s_h,a_h)\sqrt{\tfrac{\mathrm{Var}_{\cdots}(\cdots)}{\cdots}}\Big)$.
arXiv Detail & Related papers (2021-10-17T01:21:52Z) - Policy Optimization in Adversarial MDPs: Improved Exploration via
Dilated Bonuses [40.12297110530343]
We develop a general solution that adds dilated bonuses to the policy update to facilitate global exploration.
We apply it to several episodic MDP settings with adversarial losses and bandit feedback.
When a simulator is unavailable, we further consider a linear MDP setting and obtain $\widetilde{\mathcal{O}}(T^{14/15})$ regret.
arXiv Detail & Related papers (2021-07-18T02:30:48Z) - Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations [79.66404989555566]
We consider the more realistic setting of agnostic RL with rich observation spaces and a fixed class of policies $\Pi$ that may not contain any near-optimal policy.
We provide an algorithm for this setting whose error is bounded in terms of the rank $d$ of the underlying MDP.
arXiv Detail & Related papers (2021-06-22T03:20:40Z) - Zooming for Efficient Model-Free Reinforcement Learning in Metric Spaces [26.297887542066505]
We consider episodic reinforcement learning with a continuous state-action space which is assumed to be equipped with a natural metric.
We propose ZoomRL, an online algorithm that leverages ideas from continuous bandits to learn an adaptive discretization of the joint space.
We show that ZoomRL achieves a worst-case regret of $\tilde{O}\big(H^{\frac{5}{2}} K^{\frac{d+1}{d+2}}\big)$, where $H$ is the planning horizon, $K$ is the number of episodes, and $d$ is the covering dimension of the space (see the sketch after this list for a worked evaluation of this exponent).
arXiv Detail & Related papers (2020-03-09T12:32:02Z)
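As a rough illustration of the worst-case rate in the ZoomRL bound above (covering dimensions assumed purely for the example, not results from the paper), the sketch below evaluates the episode exponent $\frac{d+1}{d+2}$ for a few dimensions; the exponent approaches 1 as $d$ grows, which is why rates stated only in terms of the covering dimension offer no adaptivity gains:

```python
# Illustrative sketch (assumed dimensions, not results from the paper):
# evaluates the exponent (d+1)/(d+2) of K in the ZoomRL worst-case bound
# O~(H^(5/2) K^((d+1)/(d+2))), where d is the covering dimension of the
# joint state-action space.

def zoomrl_episode_exponent(d_cover: int) -> float:
    """Exponent of K in the ZoomRL worst-case regret bound."""
    return (d_cover + 1) / (d_cover + 2)

# The exponent tends to 1 as the covering dimension grows, so the bound
# degrades toward linear-in-K regret in high dimension.
for d in (1, 2, 4, 8):
    print(d, zoomrl_episode_exponent(d))  # 0.667, 0.75, 0.833, 0.9
```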
This list is automatically generated from the titles and abstracts of the papers in this site.