GAC: Stabilizing Asynchronous RL Training for LLMs via Gradient Alignment Control
- URL: http://arxiv.org/abs/2603.01501v1
- Date: Mon, 02 Mar 2026 06:19:43 GMT
- Title: GAC: Stabilizing Asynchronous RL Training for LLMs via Gradient Alignment Control
- Authors: Haofeng Xu, Junwei Su, Yukun Tian, Lansong Diao, Zhengping Qian, Chuan Wu
- Abstract summary: We show that naively applying asynchrony to policy-gradient updates can induce qualitatively different training dynamics and lead to severe training instability. We propose GRADIENT ALIGNMENT CONTROL (GAC), a simple dynamics-aware stabilization method that regulates asynchronous RL progress along stale-aligned directions.
- Score: 16.529035487811267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Asynchronous execution is essential for scaling reinforcement learning (RL) to modern large model workloads, including large language models and AI agents, but it can fundamentally alter RL optimization behavior. While prior work on asynchronous RL focuses on training throughput and distributional correction, we show that naively applying asynchrony to policy-gradient updates can induce qualitatively different training dynamics and lead to severe training instability. Through systematic empirical and theoretical analysis, we identify a key signature of this instability: asynchronous training exhibits persistently high cosine similarity between consecutive policy gradients, in contrast to the near-orthogonal updates observed under synchronized training. This stale-aligned gradient effect amplifies correlated updates and increases the risk of overshooting and divergence. Motivated by this observation, we propose GRADIENT ALIGNMENT CONTROL (GAC), a simple dynamics-aware stabilization method that regulates asynchronous RL progress along stale-aligned directions via gradient projection. We establish convergence guarantees under bounded staleness and demonstrate empirically that GAC recovers stable, on-policy training dynamics and matches synchronized baselines even at high staleness.
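The abstract describes GAC as gradient projection that damps update components along stale-aligned directions. A minimal PyTorch sketch of that idea follows; the alignment threshold `tau`, the hard-projection rule, and the flattened-gradient usage are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def gac_project(grad: torch.Tensor,
                prev_grad: torch.Tensor,
                tau: float = 0.5,
                eps: float = 1e-12) -> torch.Tensor:
    """Gradient Alignment Control (sketch).

    If the current gradient is strongly aligned with the previous
    (stale) gradient, project out the aligned component so that
    consecutive updates stay closer to the near-orthogonal behavior
    observed under synchronized training. The threshold `tau` and the
    hard projection are assumptions for illustration.
    """
    cos = torch.dot(grad, prev_grad) / (grad.norm() * prev_grad.norm() + eps)
    if cos > tau:
        # Remove the component of the update along the stale direction.
        direction = prev_grad / (prev_grad.norm() + eps)
        grad = grad - torch.dot(grad, direction) * direction
    return grad

# Usage inside an asynchronous training step (flattened parameter grads):
# flat_grad = torch.cat([p.grad.view(-1) for p in model.parameters()])
# flat_grad = gac_project(flat_grad, prev_flat_grad)
# prev_flat_grad = flat_grad.detach().clone()
```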
Related papers
- ARLArena: A Unified Framework for Stable Agentic Reinforcement Learning [75.73135757250806]
Agentic reinforcement learning (ARL) has rapidly gained attention as a promising paradigm for training agents to solve complex, multi-step interactive tasks. Despite encouraging early results, ARL remains highly unstable, often leading to training collapse. In this paper, we first propose ARLArena, a stable training recipe and systematic analysis framework that examines training stability in a controlled and reproducible setting.
arXiv Detail & Related papers (2026-02-25T03:43:34Z)
- RL-VLA$^3$: Reinforcement Learning VLA Accelerating via Full Asynchronism [42.27384804295299]
Vision-Language-Action (VLA) models have emerged as a crucial pathway towards general embodied intelligence. This paper proposes and implements a fully-asynchronous policy training framework encompassing the entire pipeline from environment interaction to actor policy updates. On the LIBERO benchmark, the framework achieves throughput improvements of up to 59.25% compared to existing synchronous strategies.
arXiv Detail & Related papers (2026-02-05T15:30:23Z)
- Stabilizing Reinforcement Learning with LLMs: Formulation and Practices [61.361819972410046]
We show why and under what conditions the true sequence-level reward can be optimized via a surrogate token-level objective in policy gradient methods such as REINFORCE. This insight provides a principled explanation for the crucial role of several widely adopted techniques in stabilizing RL training.
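A minimal sketch of the token-level surrogate the summary alludes to, assuming the standard REINFORCE identity that broadcasts a sequence-level reward over per-token log-probabilities; tensor names and shapes are illustrative.

```python
import torch

def token_level_reinforce_loss(logps: torch.Tensor,
                               seq_reward: torch.Tensor) -> torch.Tensor:
    """Surrogate token-level REINFORCE objective (sketch).

    logps:      (batch, seq_len) log pi(a_t | s_t) for sampled tokens
    seq_reward: (batch,) scalar reward for each whole sequence

    REINFORCE identity: grad E[R] = E[R * sum_t grad log pi_t], so
    broadcasting the sequence reward over token log-probs yields an
    unbiased surrogate for the sequence-level objective.
    """
    return -(seq_reward.unsqueeze(1) * logps).sum(dim=1).mean()
```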
arXiv Detail & Related papers (2025-12-01T07:45:39Z)
- Periodic Asynchrony: An Effective Method for Accelerating Reinforcement Learning [8.395046547177806]
Reinforcement learning (RL) has attracted increasing attention, with growing efforts to reproduce and apply it. In mainstream RL frameworks, inference and training are typically deployed on the same devices. In this study, we return to the strategy of separating inference and training deployment. We transform the conventional synchronous architecture into a periodically asynchronous framework, which allows for demand-driven, independent, and elastic scaling of each component.
arXiv Detail & Related papers (2025-11-24T08:22:50Z)
- Iterative Refinement of Flow Policies in Probability Space for Online Reinforcement Learning [56.47948583452555]
We introduce the Stepwise Flow Policy (SWFP) framework, founded on the key insight that discretizing the flow matching inference process via a fixed-step Euler scheme aligns it with the variational Jordan-Kinderlehrer-Otto principle from optimal transport. SWFP decomposes the global flow into a sequence of small, incremental transformations between proximate distributions. This decomposition yields an efficient algorithm that fine-tunes pre-trained flows via a cascade of small flow blocks, offering significant advantages.
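A minimal sketch of the fixed-step Euler discretization the summary describes, assuming a learned velocity field `v(x, t)` integrated from t=0 to t=1; the step count and the toy field in the usage note are illustrative.

```python
import torch

def euler_flow_sample(v, x0: torch.Tensor, n_steps: int = 8) -> torch.Tensor:
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with a fixed-step
    Euler scheme. Each step applies a small transformation between
    proximate distributions, matching the stepwise decomposition the
    summary describes."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0],), i * dt)
        x = x + dt * v(x, t)
    return x

# Usage with a toy velocity field (illustrative stand-in):
# v = lambda x, t: -x
# samples = euler_flow_sample(v, torch.randn(16, 4))
```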
arXiv Detail & Related papers (2025-10-17T07:43:51Z)
- ROCM: RLHF on consistency models [8.905375742101707]
We propose a reward optimization framework for applying RLHF to consistency models. We investigate various $f$-divergences as regularization strategies, striking a balance between reward and model consistency.
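A minimal sketch of the regularized objective the summary describes, using the KL divergence as one member of the $f$-divergence family the paper investigates; the coefficient `lam` and the per-sample Monte Carlo estimator are assumptions.

```python
import torch

def divergence_regularized_loss(reward: torch.Tensor,
                                logp: torch.Tensor,
                                logp_ref: torch.Tensor,
                                lam: float = 0.1) -> torch.Tensor:
    """Maximize E[r] - lam * KL(pi || pi_ref) (sketch).

    reward:   (batch,) scalar rewards for sampled outputs
    logp:     (batch,) log-prob of each sample under the current model
    logp_ref: (batch,) log-prob under the frozen reference model

    logp - logp_ref is a per-sample Monte Carlo estimate of the KL
    term; other f-divergences would replace this estimator.
    """
    kl_est = logp - logp_ref
    return -(reward - lam * kl_est).mean()
```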
arXiv Detail & Related papers (2025-03-08T11:19:48Z)
- Continual Task Learning through Adaptive Policy Self-Composition [54.95680427960524]
CompoFormer is a structure-based continual transformer model that adaptively composes previous policies via a meta-policy network.
Our experiments reveal that CompoFormer outperforms conventional continual learning (CL) methods, particularly in longer task sequences.
arXiv Detail & Related papers (2024-11-18T08:20:21Z)
- Parallelized Reverse Curriculum Generation [62.25453821794469]
For reinforcement learning, it is challenging for an agent to master a task that requires a specific series of actions due to sparse rewards.
Reverse curriculum generation (RCG) provides a reverse expansion approach that automatically generates a curriculum for the agent to learn.
We propose a parallelized approach that simultaneously trains multiple actor-critic (AC) pairs and periodically exchanges their critics.
arXiv Detail & Related papers (2021-08-04T15:58:35Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
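A minimal sketch of the ODE view the summary describes: treating simultaneous generator/discriminator updates as integrating a vector field, where a higher-order solver such as Heun's method (RK2) reduces the integration error the paper links to instability. Here `f`, `z`, and the step size are illustrative assumptions.

```python
import torch

def heun_step(f, z: torch.Tensor, h: float = 0.01) -> torch.Tensor:
    """One Heun (RK2) step for dz/dt = f(z), where z stacks generator
    and discriminator parameters and f returns the GAN training vector
    field (e.g., -grad_G loss and +grad_D loss, concatenated). The
    second-order correction shrinks the discretization error relative
    to plain simultaneous gradient steps (Euler)."""
    k1 = f(z)
    k2 = f(z + h * k1)
    return z + 0.5 * h * (k1 + k2)
```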
arXiv Detail & Related papers (2020-10-28T15:23:49Z)