Geometric Value Iteration: Dynamic Error-Aware KL Regularization for
Reinforcement Learning
- URL: http://arxiv.org/abs/2107.07659v1
- Date: Fri, 16 Jul 2021 01:24:37 GMT
- Title: Geometric Value Iteration: Dynamic Error-Aware KL Regularization for
Reinforcement Learning
- Authors: Toshinori Kitamura, Lingwei Zhu, Takamitsu Matsubara
- Abstract summary: We study the dynamic coefficient scheme and present the first error bound.
We propose an effective scheme to tune coefficient according to the magnitude of error in favor of more robust learning.
Our experiments demonstrate that GVI can effectively exploit the trade-off between learning speed and robustness over uniform averaging of constant KL coefficient.
- Score: 11.82492300303637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent boom in the entropy-regularization literature reveals that
Kullback-Leibler (KL) regularization brings advantages to Reinforcement
Learning (RL) algorithms by canceling out errors under mild assumptions.
However, existing analyses focus on fixed regularization with a constant
weighting coefficient and have not considered the case where the coefficient is
allowed to change dynamically. In this paper, we study the dynamic coefficient
scheme and present the first asymptotic error bound. Based on the dynamic
coefficient error bound, we propose an effective scheme to tune the coefficient
according to the magnitude of error in favor of more robust learning. On top of
this development, we propose a novel algorithm: Geometric Value Iteration (GVI)
that features a dynamic error-aware KL coefficient design aiming to mitigate
the impact of errors on performance. Our experiments demonstrate that GVI
can effectively exploit the trade-off between learning speed and robustness
better than uniform averaging with a constant KL coefficient. Combined with
deep networks, GVI shows stable learning behavior even in the absence of a
target network, where algorithms with a constant KL coefficient oscillate
greatly or even fail to converge.
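The paper's exact coefficient schedule is not reproduced here; the following is a minimal sketch of KL-regularized value iteration on a made-up tabular MDP, where the KL coefficient is scaled by a crude Bellman-residual error proxy (the MDP, the `alpha` schedule, and all names are illustrative assumptions, not the authors' design):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP (hypothetical): 4 states, 2 actions.
n_s, n_a, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # transition probabilities
R = rng.uniform(0.0, 1.0, size=(n_s, n_a))        # rewards in [0, 1]

pi = np.full((n_s, n_a), 1.0 / n_a)               # uniform initial policy
q = np.zeros((n_s, n_a))

for k in range(200):
    # Evaluate the current policy's state values.
    v = (pi * q).sum(axis=1)
    q_new = R + gamma * P @ v
    # Crude error proxy: max Bellman residual (a stand-in for the
    # paper's error estimate, which is not reproduced here).
    err = np.abs(q_new - q).max()
    # Dynamic KL coefficient (hypothetical schedule): a larger error
    # yields a larger coefficient, keeping the new policy closer to
    # the previous one for robustness.
    alpha = 1.0 + err
    q = q_new
    # KL-regularized improvement: pi_{k+1}(a|s) ∝ pi_k(a|s) * exp(q(s,a)/alpha)
    pi = np.clip(pi, 1e-12, None)                 # floor to avoid log(0)
    logits = np.log(pi) + q / alpha
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)

print(np.round((pi * q).sum(axis=1), 3))          # learned state values
```

The inverse relationship (large error, large coefficient) mirrors the abstract's "error-aware" idea: when estimates are unreliable, the update leans on the previous policy rather than the noisy q-values.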
Related papers
- Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on a client-server architecture.
Non-smooth regularization is often incorporated into machine learning tasks.
We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - A Robust Model-Based Approach for Continuous-Time Policy Evaluation with Unknown Lévy Process Dynamics [1.0923877073891446]
This paper develops a model-based framework for continuous-time policy evaluation.
It incorporates both Brownian and Lévy noise to model dynamics influenced by rare and extreme events.
arXiv Detail & Related papers (2025-04-02T08:37:14Z) - Generalized Kullback-Leibler Divergence Loss [105.66549870868971]
We prove that the Kullback-Leibler (KL) Divergence loss is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss.
Thanks to the decoupled structure of DKL loss, we have identified two areas for improvement.
arXiv Detail & Related papers (2025-03-11T04:43:33Z) - An Accelerated Alternating Partial Bregman Algorithm for ReLU-based Matrix Decomposition [0.0]
In this paper, we aim to investigate the sparse low-rank characteristics of rectified non-negative matrices.
We propose a novel regularization term incorporating structures useful in clustering and compression tasks.
We derive corresponding closed-form solutions while ensuring the $L$-smooth property holds for any $L \ge 1$.
arXiv Detail & Related papers (2025-03-04T08:20:34Z) - Logarithmic Regret for Online KL-Regularized Reinforcement Learning [51.113248212150964]
KL-regularization plays a pivotal role in improving the efficiency of RL fine-tuning for large language models.
Despite its empirical advantage, the theoretical difference between KL-regularized RL and standard RL remains largely under-explored.
We propose an optimistic-based KL-regularized online contextual bandit algorithm, and provide a novel analysis of its regret.
arXiv Detail & Related papers (2025-02-11T11:11:05Z) - Multi-Fidelity Prediction and Uncertainty Quantification with Laplace Neural Operators for Parametric Partial Differential Equations [6.03891813540831]
Laplace Neural Operators (LNOs) have emerged as a promising approach in scientific machine learning.
We propose multi-fidelity Laplace Neural Operators (MF-LNOs), which combine a low-fidelity base model with parallel linear/nonlinear HF correctors and dynamic inter-fidelity weighting.
This allows us to exploit correlations between LF and HF datasets and achieve accurate inference of quantities of interest.
arXiv Detail & Related papers (2025-02-01T20:38:50Z) - Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Fast Value Tracking for Deep Reinforcement Learning [7.648784748888187]
Reinforcement learning (RL) tackles sequential decision-making problems by creating agents that interact with their environment.
Existing algorithms often view these problems as static, focusing on point estimates for model parameters to maximize expected rewards.
Our research leverages the Kalman paradigm to introduce a novel quantification and sampling algorithm called Langevinized Kalman Temporal-Difference (LKTD).
arXiv Detail & Related papers (2024-03-19T22:18:19Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}\big(\ln(T)/T^{1-\frac{1}{\alpha}}\big)$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Mitigating Covariate Shift in Misspecified Regression with Applications
to Reinforcement Learning [39.02112341007981]
We study the effect of distribution shift in the presence of model misspecification.
We show that empirical risk minimization, or standard least squares regression, can result in undesirable misspecification amplification.
We develop a new algorithm that avoids this undesirable behavior, resulting in no misspecification amplification while still obtaining optimal statistical rates.
arXiv Detail & Related papers (2024-01-22T18:59:12Z) - Temporal Difference Learning with Compressed Updates: Error-Feedback meets Reinforcement Learning [47.904127007515925]
We study a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction.
We prove that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic approximation guarantees as their counterparts.
Notably, these are the first finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling.
arXiv Detail & Related papers (2023-01-03T04:09:38Z) - Robust Learning via Persistency of Excitation [4.674053902991301]
We show that network training using gradient descent is equivalent to a dynamical system parameter estimation problem.
We provide an efficient technique for estimating the corresponding Lipschitz constant using extreme value theory.
Our approach also universally increases the adversarial accuracy by 0.1 to 0.3 percentage points in various state-of-the-art adversarially trained models.
arXiv Detail & Related papers (2021-06-03T18:49:05Z) - Training Generative Adversarial Networks by Solving Ordinary
Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z) - On Learning Rates and Schrödinger Operators [105.32118775014015]
We present a general theoretical analysis of the effect of the learning rate.
We find that the learning rate tends to zero for a broad class of non-neural functions.
arXiv Detail & Related papers (2020-04-15T09:52:37Z) - Leverage the Average: an Analysis of KL Regularization in RL [44.01222241795292]
We show that Kullback-Leibler (KL) regularization implicitly averages q-values.
We provide a very strong performance bound, the very first to combine two desirable aspects.
Some of our assumptions do not hold with neural networks, so we complement this theoretical analysis with an extensive empirical study.
arXiv Detail & Related papers (2020-03-31T10:55:06Z)
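The "implicit averaging" claim in the last entry can be checked in a few lines: with a fixed coefficient, unrolling the KL-regularized update from a uniform policy shows the resulting policy depends only on the sum of past q-values. All numbers below are illustrative:

```python
import numpy as np

# KL-regularized update for a single state with fixed coefficient lam:
#   pi_{k+1}(a) ∝ pi_k(a) * exp(q_k(a) / lam)
# Unrolled from a uniform pi_0, this gives pi_K ∝ exp(sum_k q_k / lam),
# i.e. the policy is driven by the (summed, hence averaged) q-values.
lam = 0.5
qs = [np.array([1.0, 0.2, -0.3]),
      np.array([0.4, 0.9, 0.1]),
      np.array([0.7, 0.5, 0.6])]   # made-up q-value sequence

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

pi = np.full(3, 1.0 / 3.0)          # uniform initial policy
for q in qs:
    pi = pi * np.exp(q / lam)       # iterative KL-regularized update
    pi /= pi.sum()

# Same policy, computed directly from the summed q-values.
pi_direct = softmax(sum(qs) / lam)
print(np.allclose(pi, pi_direct))   # → True
```

This is why errors in individual q-estimates tend to cancel out: each noisy estimate contributes only one term to a running sum.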
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.