Fractional Policy Gradients: Reinforcement Learning with Long-Term Memory
- URL: http://arxiv.org/abs/2507.00073v1
- Date: Sun, 29 Jun 2025 04:57:41 GMT
- Title: Fractional Policy Gradients: Reinforcement Learning with Long-Term Memory
- Authors: Urvi Pawar, Kunal Telangi
- Abstract summary: Fractional Policy Gradients (FPG) is a reinforcement learning framework incorporating fractional calculus for long-term temporal modeling in policy optimization. By reformulating gradients using Caputo fractional derivatives, FPG establishes power-law temporal correlations between state transitions. Empirical validation demonstrates 35-68% sample efficiency gains and 24-52% variance reduction versus state-of-the-art baselines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose Fractional Policy Gradients (FPG), a reinforcement learning framework incorporating fractional calculus for long-term temporal modeling in policy optimization. Standard policy gradient approaches face limitations from Markovian assumptions, exhibiting high variance and inefficient sampling. By reformulating gradients using Caputo fractional derivatives, FPG establishes power-law temporal correlations between state transitions. We develop an efficient recursive computation technique for fractional temporal-difference errors with constant time and memory requirements. Theoretical analysis shows FPG achieves asymptotic variance reduction of order O(t^(-alpha)) versus standard policy gradients while preserving convergence. Empirical validation demonstrates 35-68% sample efficiency gains and 24-52% variance reduction versus state-of-the-art baselines. This framework provides a mathematically grounded approach for leveraging long-range dependencies without computational overhead.
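The core computational idea in the abstract is a recursive, constant-memory aggregation of temporal-difference errors under a power-law kernel. The sketch below is only an illustration of that idea, not the paper's algorithm: it combines recent TD errors using Grünwald-Letnikov-style power-law weights over a fixed truncation window, so per-step cost and memory stay constant in the episode length. The class name, `alpha`, and the window size `K` are assumptions made for this example.

```python
# Illustrative sketch only: a power-law-weighted TD-error accumulator in the
# spirit of the fractional memory described in the abstract. The paper's exact
# Caputo-based recursion is not reproduced here; the Grunwald-Letnikov-style
# weights, the truncation window K, and the names below are assumptions.
import numpy as np

def gl_weights(alpha: float, K: int) -> np.ndarray:
    """Grunwald-Letnikov binomial weights w_k, computed recursively.
    Their magnitudes decay like a power law in k, giving long-range memory."""
    w = np.empty(K)
    w[0] = 1.0
    for k in range(1, K):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

class FractionalTDAccumulator:
    """Keeps a fixed-length buffer of recent TD errors and returns their
    power-law-weighted combination. Memory and per-step cost are O(K)."""
    def __init__(self, alpha: float = 0.5, K: int = 64):
        self.w = gl_weights(alpha, K)
        self.buffer = np.zeros(K)  # most recent TD error at index 0

    def update(self, td_error: float) -> float:
        self.buffer = np.roll(self.buffer, 1)
        self.buffer[0] = td_error
        return float(self.w @ self.buffer)

# Toy usage: feed ordinary one-step TD errors as they arrive.
acc = FractionalTDAccumulator(alpha=0.5, K=64)
for delta in [0.3, -0.1, 0.25, 0.05]:
    print(acc.update(delta))
```

In a policy-gradient loop, the weighted value returned by `update` would stand in for the ordinary one-step TD error when scaling the log-probability gradient.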
Related papers
- On the Design of KL-Regularized Policy Gradient Algorithms for LLM Reasoning [50.856589224454055]
Policy gradient algorithms have been successfully applied to enhance the reasoning capabilities of large language models (LLMs). We propose regularized policy gradient (RPG), a framework for deriving and analyzing KL-regularized policy gradient methods in the online reinforcement learning setting. RPG shows improved or competitive results in terms of training stability and performance compared to strong baselines such as GRPO, REINFORCE++, and DAPO.
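As general background on the kind of objective this line of work studies (not the cited paper's RPG loss), a KL-regularized policy-gradient surrogate can be sketched as a REINFORCE-style term plus a KL penalty toward a reference policy; `beta`, the categorical toy setup, and all names below are illustrative assumptions.

```python
# Generic KL-regularized policy-gradient surrogate, sketched with NumPy.
# Not the RPG objective from the cited paper; beta, the reference policy,
# and the categorical toy setup are assumptions for illustration.
import numpy as np

def kl_categorical(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q) for two categorical distributions."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

def kl_regularized_surrogate(log_probs, advantages, pi, pi_ref, beta=0.05):
    """REINFORCE-style term minus a KL penalty toward a reference policy."""
    pg_term = float(np.mean(advantages * log_probs))
    kl_term = kl_categorical(pi, pi_ref)
    return pg_term - beta * kl_term

# Toy usage with a 3-action policy and three sampled actions.
pi = np.array([0.5, 0.3, 0.2])        # current policy at some state
pi_ref = np.array([0.4, 0.4, 0.2])    # reference (e.g. pre-update) policy
actions = np.array([0, 1, 0])
log_probs = np.log(pi[actions])       # log pi(a_t | s_t) of sampled actions
advantages = np.array([1.0, -0.5, 0.3])
print(kl_regularized_surrogate(log_probs, advantages, pi, pi_ref))
```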
arXiv Detail & Related papers (2025-05-23T06:01:21Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on the client-server architecture. Non-smooth regularization is often incorporated into machine learning tasks. We propose a novel DNCFL algorithm to solve these problems.
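For orientation only: the "gradient tracking" ingredient named in the title is a standard decentralized-optimization update in which each node mixes its neighbors' iterates and maintains a running estimate of the network-average gradient. The toy least-squares sketch below shows that generic update; the ring topology, step size, and smooth objective are assumptions, and it omits the nonsmooth composite term and momentum handled by the cited DNCFL algorithm.

```python
# Sketch of decentralized gradient tracking on a toy least-squares problem.
# Mixing matrix, step size, and objective are illustrative assumptions.
import numpy as np

n, d, alpha, T = 4, 3, 0.1, 200
rng = np.random.default_rng(0)
A = [rng.normal(size=(5, d)) for _ in range(n)]              # local data
b = [rng.normal(size=5) for _ in range(n)]
local_grad = lambda i, x: A[i].T @ (A[i] @ x - b[i]) / 5     # local gradient

# Ring topology with doubly stochastic mixing weights.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                                         # local iterates
y = np.array([local_grad(i, x[i]) for i in range(n)])        # gradient trackers
g_old = y.copy()
for _ in range(T):
    x = W @ x - alpha * y                                    # consensus + descent
    g_new = np.array([local_grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g_old                                # track average gradient
    g_old = g_new

print("consensus iterate:", x.mean(axis=0))
```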
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
We investigate the statistical properties of Temporal Difference learning with Polyak-Ruppert averaging. We make three significant contributions that improve the current state-of-the-art results.
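Concretely, the estimator under study is plain TD(0) with linear features whose iterates are averaged in the Polyak-Ruppert sense. A minimal sketch on a toy random chain is given below; the transition model, feature map, and step size are assumptions made for illustration.

```python
# Sketch of TD(0) with linear function approximation and Polyak-Ruppert
# (iterate) averaging. The toy chain, features, and step size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, gamma, eta, T = 4, 0.9, 0.05, 5000

def features(s: int) -> np.ndarray:
    """One-hot features for a 4-state chain (illustrative choice)."""
    phi = np.zeros(d)
    phi[s] = 1.0
    return phi

theta = np.zeros(d)
theta_bar = np.zeros(d)          # Polyak-Ruppert running average
s = 0
for t in range(1, T + 1):
    s_next = rng.integers(d)     # toy uniform transitions
    r = 1.0 if s_next == d - 1 else 0.0
    delta = r + gamma * features(s_next) @ theta - features(s) @ theta
    theta += eta * delta * features(s)          # TD(0) update
    theta_bar += (theta - theta_bar) / t        # running average of iterates
    s = s_next

print("last iterate:    ", theta)
print("averaged iterate:", theta_bar)
```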
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Strongly-polynomial time and validation analysis of policy gradient methods [3.722665817361884]
This paper proposes a novel termination criterion, termed the advantage gap function, for finite state and action Markov decision processes (MDPs) and reinforcement learning (RL). By incorporating this advantage gap function into the design of step size rules, we derive a new linear rate of convergence that is independent of the stationary state distribution of the optimal policy. This is the first time that such strong convergence properties have been established for policy gradient methods.
arXiv Detail & Related papers (2024-09-28T18:56:48Z) - On Bellman equations for continuous-time policy evaluation I: discretization and approximation [3.704688279256839]
We study the problem of computing the value function from a discretely-observed trajectory of a continuous-time diffusion process.
We develop a new class of algorithms based on easily implementable numerical schemes that are compatible with discrete-time reinforcement learning.
arXiv Detail & Related papers (2024-07-08T14:05:03Z) - On the Global Convergence of Policy Gradient in Average Reward Markov Decision Processes [50.68789924454235]
We present the first finite-time global convergence analysis of policy gradient in the context of average reward Markov decision processes (MDPs).
Our analysis shows that the policy gradient iterates converge to the optimal policy at a sublinear rate of $O\left(\frac{1}{T}\right)$, which translates to $O\left(\log(T)\right)$ regret, where $T$ represents the number of iterations.
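The translation from an $O(1/T)$ optimality gap to $O(\log T)$ regret is the usual harmonic-sum argument; with a generic constant $C$ standing in for the paper's problem-dependent constants (an assumption for this sketch):

```latex
\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T}\bigl(\rho^{*}-\rho_{\pi_t}\bigr)
\;\le\; \sum_{t=1}^{T}\frac{C}{t} \;=\; O\!\left(\log T\right),
\qquad \text{assuming } \rho^{*}-\rho_{\pi_t}\le \frac{C}{t},
```

where $\rho^{*}$ is the optimal average reward and $\rho_{\pi_t}$ that of the $t$-th policy iterate.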
arXiv Detail & Related papers (2024-03-11T15:25:03Z) - Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
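Spectral normalization itself is a standard operation: rescale a weight matrix by an estimate of its largest singular value so that its spectral norm is at most one. The sketch below shows that generic operation via power iteration; it is not the cited paper's exact procedure, and the iteration count and matrix shapes are arbitrary choices.

```python
# Sketch of spectral normalization for a single weight matrix: the largest
# singular value is estimated by power iteration and the matrix is rescaled.
import numpy as np

def spectral_normalize(W: np.ndarray, n_iter: int = 20) -> np.ndarray:
    """Return W / sigma_max(W), with sigma_max estimated by power iteration."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # estimate of the largest singular value
    return W / sigma

W = np.random.default_rng(1).normal(size=(8, 6))
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # approximately 1.0
```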
arXiv Detail & Related papers (2023-10-30T18:43:21Z) - Optimization Landscape of Policy Gradient Methods for Discrete-time Static Output Feedback [22.21598324895312]
This paper analyzes the optimization landscape inherent to policy gradient methods when applied to static output feedback control.
We derive novel findings regarding convergence (and nearly dimension-free rate) to stationary points for three policy gradient methods.
We prove that the vanilla policy gradient method exhibits linear convergence towards local minima when it is near such minima.
arXiv Detail & Related papers (2023-10-29T14:25:57Z) - Theoretical Guarantees of Fictitious Discount Algorithms for Episodic Reinforcement Learning and Global Convergence of Policy Gradient Methods [6.7546872379126155]
A common approach is to introduce a fictitious discount factor and use stationary policies for approximations.
This paper takes the first step towards analyzing these algorithms.
Non-asymptotic convergence guarantees are established for both algorithms.
arXiv Detail & Related papers (2021-09-13T23:36:38Z) - Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z) - Policy Gradient using Weak Derivatives for Reinforcement Learning [24.50189361694407]
This paper considers policy search in continuous state-action reinforcement learning problems.
The variance of the gradient estimates obtained using weak derivatives is shown to be lower than that of estimates obtained using the popular score-function approach.
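For background (these are standard gradient-estimation identities, not results taken from the cited paper), the score-function estimator and the weak- or measure-valued-derivative estimator of the same quantity can be written as:

```latex
\nabla_\theta\,\mathbb{E}_{x\sim p_\theta}[f(x)]
  = \mathbb{E}_{x\sim p_\theta}\!\bigl[f(x)\,\nabla_\theta \log p_\theta(x)\bigr]
  = c_\theta\Bigl(\mathbb{E}_{x\sim p_\theta^{+}}[f(x)]
      - \mathbb{E}_{x\sim p_\theta^{-}}[f(x)]\Bigr),
```

where the second form uses a decomposition $\nabla_\theta p_\theta = c_\theta\,(p_\theta^{+}-p_\theta^{-})$ into two probability measures; the summarized claim is that estimators built from this second form have lower variance.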
arXiv Detail & Related papers (2020-04-09T23:05:18Z) - Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis [102.29671176698373]
We address the problem of policy evaluation in discounted Markov decision processes, and provide Markov-dependent guarantees on the $\ell_\infty$ error under a generative model.
We establish both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms.
arXiv Detail & Related papers (2020-03-16T17:15:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.