Demystifying Group Relative Policy Optimization: Its Policy Gradient is a U-Statistic
- URL: http://arxiv.org/abs/2603.01162v2
- Date: Tue, 03 Mar 2026 11:46:35 GMT
- Title: Demystifying Group Relative Policy Optimization: Its Policy Gradient is a U-Statistic
- Authors: Hongyi Zhou, Kai Ye, Erhan Xu, Jin Zhu, Ying Yang, Shijin Gong, Chengchun Shi
- Abstract summary: Group relative policy optimization (GRPO) is a core methodological component of DeepSeekMath and DeepSeek-R1. This paper provides a unified framework for understanding GRPO through the lens of classical U-statistics.
- Score: 12.256817975993128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Group relative policy optimization (GRPO), a core methodological component of DeepSeekMath and DeepSeek-R1, has emerged as a cornerstone for scaling the reasoning capabilities of large language models. Despite its widespread adoption and the proliferation of follow-up works, the theoretical properties of GRPO remain less studied. This paper provides a unified framework for understanding GRPO through the lens of classical U-statistics. We demonstrate that the GRPO policy gradient is inherently a U-statistic, allowing us to characterize its mean squared error (MSE) and derive the finite-sample error bound and asymptotic distribution of the suboptimality gap of its learned policy. Our findings reveal that GRPO is asymptotically equivalent to an oracle policy gradient algorithm -- one with access to a value function that quantifies the goodness of its learning policy at each training iteration -- and achieves asymptotically optimal performance within a broad class of policy gradient algorithms. Furthermore, we establish a universal scaling law that offers principled guidance for selecting the optimal group size. Empirical experiments further validate our theoretical findings, demonstrating that the optimal group size is universal and verifying the oracle property of GRPO.
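For readers unfamiliar with the framing, a brief refresher on classical U-statistics may help (the notation below is ours; the paper's exact kernel for the GRPO gradient is not reproduced here). Given i.i.d. samples X_1, ..., X_n and a symmetric kernel h of order m:

```latex
% Classical U-statistic of order m with symmetric kernel h
U_n \;=\; \binom{n}{m}^{-1}
      \sum_{1 \le i_1 < \cdots < i_m \le n} h\!\left(X_{i_1}, \dots, X_{i_m}\right),
\qquad
\mathbb{E}\!\left[U_n\right] \;=\; \mathbb{E}\!\left[h\!\left(X_1, \dots, X_m\right)\right].
```

Hoeffding's classical theory gives Var(U_n) = O(1/n) and asymptotic normality of \sqrt{n}(U_n - \mathbb{E}[h]) whenever the kernel is non-degenerate, which is the kind of machinery behind the MSE characterization, finite-sample bounds, and asymptotic distributions the abstract describes. Plausibly, the connection arises because GRPO's group-relative advantage is built from the mean (and standard deviation) of rewards over the whole group, so each group's gradient is a symmetric function of the G sampled responses.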
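For concreteness, here is a minimal NumPy sketch of the group-relative advantage and the resulting per-group gradient estimate. It follows the GRPO recipe popularized by DeepSeekMath but omits the PPO-style clipping and KL penalty used in practice, and the function names are ours rather than the paper's.

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantage: standardize each reward against the
    mean and standard deviation of its own group of G responses."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_group_gradient(logprob_grads: np.ndarray, rewards: np.ndarray) -> np.ndarray:
    """REINFORCE-style GRPO gradient estimate for one group.

    logprob_grads: (G, d) array whose row i is grad_theta log pi(o_i | q).
    Each advantage depends on the whole group's rewards, so the estimate
    is a symmetric function of the G samples -- plausibly the U-statistic
    structure the paper analyzes.
    """
    adv = grpo_advantages(rewards)                       # shape (G,)
    return (adv[:, None] * logprob_grads).mean(axis=0)   # shape (d,)

# Toy usage: G = 4 responses with binary rewards, d = 3 parameters.
rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 3))
print(grpo_group_gradient(grads, np.array([1.0, 0.0, 0.0, 1.0])))
```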
Related papers
- iGRPO: Self-Feedback-Driven LLM Reasoning [88.83313431248473]
Large Language Models (LLMs) have shown promise in solving complex mathematical problems, yet they still fall short of producing accurate and consistent solutions. We introduce Iterative Group Relative Policy Optimization (iGRPO), a two-stage extension of GRPO that adds dynamic self-conditioning through model-generated drafts. Under matched rollout budgets, iGRPO consistently outperforms GRPO across base models.
arXiv Detail & Related papers (2026-02-09T18:45:11Z)
- TL-GRPO: Turn-Level RL for Reasoning-Guided Iterative Optimization [97.18886232580131]
Large language models have demonstrated strong reasoning capabilities in complex tasks through tool integration. We propose Turn-Level GRPO (TL-GRPO), a lightweight RL algorithm that performs turn-level group sampling for fine-grained optimization.
arXiv Detail & Related papers (2026-01-23T06:21:33Z)
- Performative Policy Gradient: Optimality in Performative Reinforcement Learning [13.777823115521665]
Post-deployment machine learning algorithms often influence the environments they act in. We introduce the Performative Policy Gradient algorithm (PePG). PePG converges to performatively optimal policies, i.e., policies that remain optimal under the distribution shifts they themselves induce.
arXiv Detail & Related papers (2025-12-23T18:20:06Z)
- GRPO-RM: Fine-Tuning Representation Models via GRPO-Driven Reinforcement Learning [52.16150076582931]
We propose Group Relative Policy Optimization for Representation Model (GRPO-RM). Our method establishes a predefined output set to functionally replace token-sequence sampling in large language models (LLMs). A specialized reward function is designed to accommodate the properties of representation models.
arXiv Detail & Related papers (2025-11-19T09:19:39Z)
- Uncalibrated Reasoning: GRPO Induces Overconfidence for Stochastic Outcomes [55.2480439325792]
Reinforcement learning (RL) has proven remarkably effective at improving the accuracy of language models in verifiable and deterministic domains like mathematics. Here, we examine whether current RL methods are also effective at optimizing language models in verifiable domains with stochastic outcomes, such as scientific experiments.
arXiv Detail & Related papers (2025-08-15T20:50:53Z)
- On the Theory and Practice of GRPO: A Trajectory-Corrected Approach with Fast Convergence [2.8165669455824696]
Group Relative Policy Optimization is a critic-free reinforcement learning algorithm. We show that the GRPO update rule estimates the policy gradient at the old policy rather than the current one. We propose a new algorithm: Trajectory-Level Importance-Corrected GRPO (a sketch of this reweighting idea appears after this list).
arXiv Detail & Related papers (2025-08-04T19:01:19Z)
- Convergence and Sample Complexity of First-Order Methods for Agnostic Reinforcement Learning [66.4260157478436]
We study reinforcement learning in the agnostic policy learning setting. The goal is to find a policy whose performance is competitive with the best policy in a given class of interest.
arXiv Detail & Related papers (2025-07-06T14:40:05Z)
- RL-finetuning LLMs from on- and off-policy data with a single algorithm [53.70731390624718]
We introduce a novel reinforcement learning algorithm (AGRO) for fine-tuning large language models. AGRO leverages the concept of generation consistency, which states that the optimal policy satisfies a notion of consistency across any possible generation of the model. We derive algorithms that find optimal solutions via the sample-based policy gradient and provide theoretical guarantees on their convergence.
arXiv Detail & Related papers (2025-03-25T12:52:38Z)
- On the Global Optimality of Policy Gradient Methods in General Utility Reinforcement Learning [30.767979998925437]
Reinforcement learning with general utilities (RLGU) offers a unifying framework that captures problems beyond standard expected returns. Despite recent advances in the theoretical analysis of policy gradient (PG) methods for standard RL, such analyses for RLGU remain limited. We establish global optimality guarantees for PG methods in RLGU, where the objective is a general concave utility function of the state-action occupancy measure.
arXiv Detail & Related papers (2024-10-05T10:24:07Z)
- Matryoshka Policy Gradient for Entropy-Regularized RL: Convergence and Global Optimality [0.5261718469769449]
A novel Policy Gradient (PG) algorithm, called Matryoshka Policy Gradient (MPG), is introduced and studied.
We prove uniqueness of the optimal policy for the entropy-regularized objective, characterize it, and establish global convergence of MPG.
As a proof of concept, we evaluate MPG numerically on standard test benchmarks.
arXiv Detail & Related papers (2023-03-22T17:56:18Z)
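Regarding the trajectory-corrected GRPO entry above: one plausible reading of that correction is to reweight each sampled response by a sequence-level importance ratio so that the group gradient targets the current policy rather than the sampling (old) policy. A minimal NumPy sketch under that assumption follows; the function names and the clipping constant are ours, not the cited paper's.

```python
import numpy as np

def traj_importance_weights(logp_new: np.ndarray, logp_old: np.ndarray,
                            clip: float = 10.0) -> np.ndarray:
    """Sequence-level ratios pi_theta(o|q) / pi_old(o|q), one per response.

    logp_new, logp_old: (G,) summed token log-probabilities of each full
    response under the current and the sampling policy. The log-ratio is
    clipped for numerical stability (our choice, not the paper's).
    """
    return np.exp(np.clip(logp_new - logp_old, -clip, clip))

def corrected_group_gradient(logprob_grads, rewards, logp_new, logp_old):
    """GRPO-style group gradient with trajectory-level importance
    reweighting; a sketch of the idea, not the paper's exact update."""
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    w = traj_importance_weights(logp_new, logp_old)          # shape (G,)
    return ((w * adv)[:, None] * logprob_grads).mean(axis=0)  # shape (d,)
```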