Planning and Learning with Adaptive Lookahead
- URL: http://arxiv.org/abs/2201.12403v1
- Date: Fri, 28 Jan 2022 20:26:55 GMT
- Title: Planning and Learning with Adaptive Lookahead
- Authors: Aviv Rosenberg and Assaf Hallak and Shie Mannor and Gal Chechik and
Gal Dalal
- Abstract summary: Policy Iteration (PI) algorithm alternates between greedy one-step policy improvement and policy evaluation.
Recent literature shows that multi-step lookahead policy improvement leads to a better convergence rate at the expense of increased complexity per iteration.
We propose for the first time to dynamically adapt the multi-step lookahead horizon as a function of the state and of the value estimate.
- Score: 74.39132848733847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The classical Policy Iteration (PI) algorithm alternates between greedy
one-step policy improvement and policy evaluation. Recent literature shows that
multi-step lookahead policy improvement leads to a better convergence rate at
the expense of increased complexity per iteration. However, prior to running
the algorithm, one cannot tell what is the best fixed lookahead horizon.
Moreover, per a given run, using a lookahead of horizon larger than one is
often wasteful. In this work, we propose for the first time to dynamically
adapt the multi-step lookahead horizon as a function of the state and of the
value estimate. We devise two PI variants and analyze the trade-off between
iteration count and computational complexity per iteration. The first variant
takes the desired contraction factor as the objective and minimizes the
per-iteration complexity. The second variant takes as input the computational
complexity per iteration and minimizes the overall contraction factor. We then
devise a corresponding DQN-based algorithm with an adaptive tree search
horizon. We also include a novel enhancement for on-policy learning: per-depth
value function estimator. Lastly, we demonstrate the efficacy of our adaptive
lookahead method in a maze environment and in Atari.
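
The core idea of adapting the lookahead depth per state from the current value estimate can be illustrated with a short value-iteration-style sketch. This is not the paper's algorithm: the random MDP, the horizon-selection rule, and all names below are illustrative assumptions only.

```python
# Minimal sketch of adaptive-lookahead value iteration on a random tabular MDP.
# NOT the paper's algorithm: the MDP, the horizon rule, and all names are assumptions.
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state distribution
R = rng.random((n_states, n_actions))                              # immediate rewards

def bellman_backup(v):
    """One application of the Bellman optimality operator T*."""
    q = R + gamma * P @ v          # q[s, a]
    return q.max(axis=1)

def choose_horizons(v, target=0.5, h_max=5):
    """Illustrative per-state rule: use a deeper lookahead only where the Bellman
    residual is large, exploiting that an h-step backup contracts by gamma**h."""
    residual = np.abs(bellman_backup(v) - v)
    max_res = residual.max() + 1e-12
    horizons = np.ones(n_states, dtype=int)
    for s in range(n_states):
        h = 1
        while gamma ** h * residual[s] > target * max_res and h < h_max:
            h += 1
        horizons[s] = h
    return horizons

v = np.zeros(n_states)
for _ in range(200):
    horizons = choose_horizons(v)
    # Precompute (T*)^h v for every needed depth, then take each state's own depth.
    depth_values = [v]
    for _ in range(horizons.max()):
        depth_values.append(bellman_backup(depth_values[-1]))
    v_new = np.array([depth_values[horizons[s]][s] for s in range(n_states)])
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new

print("approximate optimal values:", np.round(v, 3))
```

In the paper's terms, the first PI variant would fix the desired contraction (here the `target` parameter) and minimize per-iteration lookahead work, while the second would cap the work and minimize the achieved contraction; the rule above is only a stand-in for either.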
Related papers
- Fully Adaptive Regret-Guaranteed Algorithm for Control of Linear Quadratic Systems [0.2455468619225742]
The first algorithm for the Linear Quadratic (LQ) control problem features a regret of $\mathcal{O}(\sqrt{T})$.
We propose the first fully adaptive algorithm that controls the number of policy updates (i.e., tunes the exploration-exploitation trade-off).
We show that through careful exploration-exploitation trade-off adjustment there is no need to commit to the widely-used notion of strong sequential stability.
arXiv Detail & Related papers (2024-06-11T22:04:59Z) - Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits [12.674929126684528]
We address the problem of combinatorial semi-bandits, where a player selects among $P$ actions from the power set of a set containing $d$ base items.
We show that our approach efficiently leverages the semi-bandit feedback and outperforms bandit feedback approaches.
arXiv Detail & Related papers (2024-02-23T08:07:54Z) - Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time
Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z) - Alternating Mahalanobis Distance Minimization for Stable and Accurate CP
Decomposition [4.847980206213335]
We introduce a new formulation for deriving singular values and vectors of a tensor by considering the critical points of a function different from the one used in previous work.
We show that a subsweep of this algorithm can achieve a superlinear convergence rate for exact CPD with known rank.
We then view the algorithm as optimizing a Mahalanobis distance with respect to each factor with ground metric dependent on the other factors.
arXiv Detail & Related papers (2022-04-14T19:56:36Z) - Correcting Momentum with Second-order Information [50.992629498861724]
We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point using an optimal number of stochastic gradient and Hessian-vector product computations.
We validate our results on a variety of large-scale deep learning benchmarks and architectures.
arXiv Detail & Related papers (2021-03-04T19:01:20Z) - Improved Regret Bound and Experience Replay in Regularized Policy
Iteration [22.621710838468097]
We study algorithms for learning in infinite-horizon undiscounted Markov decision processes (MDPs) with function approximation.
We first show that the regret analysis of the Politex algorithm can be sharpened from $O(T^{3/4})$ to $O(\sqrt{T})$ under nearly identical assumptions.
Our result provides the first high-probability $O(\sqrt{T})$ regret bound for a computationally efficient algorithm in this setting.
arXiv Detail & Related papers (2021-02-25T00:55:07Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis
and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems which exhibit a two-level structure.
We propose a two-timescale approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
arXiv Detail & Related papers (2020-07-10T05:20:02Z)