RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System
- URL: http://arxiv.org/abs/2602.02488v1
- Date: Mon, 02 Feb 2026 18:59:04 GMT
- Title: RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System
- Authors: Yinjie Wang, Tianbao Xie, Ke Shen, Mengdi Wang, Ling Yang,
- Abstract summary: We propose RLAnything, a reinforcement learning framework that forges environment, policy, and reward models through closed-loop optimization. Specifically, the policy is trained with integrated feedback from step-wise and outcome signals. Our theory-motivated automatic environment adaptation improves training for both the reward and policy models.
- Score: 52.3348044324205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose RLAnything, a reinforcement learning framework that dynamically forges environment, policy, and reward models through closed-loop optimization, amplifying learning signals and strengthening the overall RL system for any LLM or agentic scenario. Specifically, the policy is trained with integrated feedback from step-wise and outcome signals, while the reward model is jointly optimized via consistency feedback, which in turn further improves policy training. Moreover, our theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critic feedback from each, enabling learning from experience. Empirically, each added component consistently improves the overall system, and RLAnything yields substantial gains across various representative LLM and agentic tasks, boosting Qwen3-VL-8B-Thinking by 9.1% on OSWorld and Qwen2.5-7B-Instruct by 18.7% and 11.9% on AlfWorld and LiveBench, respectively. We also show that optimized reward-model signals outperform outcome signals that rely on human labels. Code: https://github.com/Gen-Verse/Open-AgentRL
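The closed loop above couples three learned components, each updated from the others' feedback. The toy sketch below is a minimal illustration of that coupling, not the paper's implementation: the classes, update rules, and constants are all assumptions made for demonstration.

```python
import random

# Toy closed loop: the policy learns from blended step-wise and outcome
# rewards, the reward model is nudged toward consistency with outcomes, and
# the environment adapts its difficulty using feedback from both.

class Env:
    def __init__(self):
        self.difficulty = 0.5
    def rollout(self, skill):
        step_scores = [random.random() for _ in range(4)]  # raw step signals
        outcome = float(skill - self.difficulty + random.gauss(0, 0.1) > 0)
        return step_scores, outcome
    def adapt(self, success_rate):
        # Keep tasks in the informative band: harder when the policy
        # succeeds too often, easier when it fails too often.
        self.difficulty += 0.05 * (success_rate - 0.5)

class RewardModel:
    def __init__(self):
        self.bias = 0.0  # stand-in for learnable parameters
    def score(self, steps):
        return [s + self.bias for s in steps]
    def consistency_update(self, scores, outcome):
        # Consistency feedback: pull mean step score toward the outcome.
        self.bias += 0.05 * (outcome - sum(scores) / len(scores))

skill, env, rm, successes = 0.3, Env(), RewardModel(), []
for _ in range(300):
    steps, outcome = env.rollout(skill)
    scores = rm.score(steps)
    # Policy update: integrated step-wise + outcome feedback.
    skill = min(1.0, skill + 0.01 * (0.5 * sum(scores) / len(scores) + 0.5 * outcome))
    rm.consistency_update(scores, outcome)  # reward model: consistency feedback
    successes.append(outcome)
    if len(successes) == 20:                # environment: critic-style feedback
        env.adapt(sum(successes) / 20)
        successes.clear()
print(f"skill={skill:.2f} difficulty={env.difficulty:.2f} rm.bias={rm.bias:+.3f}")
```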
Related papers
- RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments [111.87296453908199]
We introduce Reinforcement Learning with Adaptive Verifiable Environments (RLVE). RLVE enables each verifiable environment to dynamically adapt its problem difficulty distribution to the policy model's capabilities as training progresses. We show that environment scaling, i.e., expanding the collection of training environments, consistently improves reasoning capabilities.
arXiv Detail & Related papers (2025-11-10T17:18:35Z)
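As a rough illustration of RLVE's difficulty-adaptation idea, the sketch below keeps an environment's success rate near a target band; the target value, window size, and multiplicative update are assumptions, not details from the paper.

```python
import random

# An environment tracks the policy's rolling success rate and shifts its
# problem-difficulty parameter to stay near a target success band.

class AdaptiveVerifiableEnv:
    def __init__(self, difficulty=1.0, target_success=0.5, lr=0.1):
        self.difficulty = difficulty
        self.target = target_success
        self.lr = lr
        self.history = []

    def sample_problem(self):
        # Difficulty parameterizes the generator (here, a number to guess).
        return {"answer": random.randint(0, int(self.difficulty * 10) + 1)}

    def verify(self, problem, guess):
        success = guess == problem["answer"]
        self.history.append(success)
        if len(self.history) >= 20:  # adapt on a rolling window
            rate = sum(self.history) / len(self.history)
            self.difficulty *= 1 + self.lr * (rate - self.target)
            self.history.clear()
        return success

env = AdaptiveVerifiableEnv()
for _ in range(500):
    p = env.sample_problem()
    env.verify(p, random.randint(0, 10))  # stand-in for a policy's answer
print(f"adapted difficulty: {env.difficulty:.2f}")
```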
- RLoop: A Self-Improving Framework for Reinforcement Learning with Iterative Policy Initialization [65.23034604711489]
We introduce RLoop, a self-improving framework for training large reasoning models. RLoop transforms the standard training process into a virtuous cycle: it first uses RL to explore the solution space from a given policy, then filters the successful trajectories to create an expert dataset. Our experiments show RLoop mitigates forgetting and substantially improves generalization, boosting average accuracy by 9% and pass@32 by over 15% compared to vanilla RL.
arXiv Detail & Related papers (2025-11-06T11:27:16Z)
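A minimal sketch of RLoop's explore-filter-reinitialize cycle as described above; the scalar "skill" and all constants are illustrative stand-ins for a real policy and a supervised fine-tuning step.

```python
import random

def rl_explore(policy_skill, n=100):
    """Run rollouts; each succeeds with probability tied to current skill."""
    return [("traj", random.random() < policy_skill) for _ in range(n)]

def filter_successes(trajectories):
    # Keep only successful trajectories as the round's expert dataset.
    return [t for t, ok in trajectories if ok]

def reinit_from_experts(expert_set, base_skill):
    # Stand-in for supervised fine-tuning on the expert dataset, which
    # initializes the next round's policy.
    return min(1.0, base_skill + 0.02 * len(expert_set) / 100)

skill = 0.3
for round_ in range(5):
    experts = filter_successes(rl_explore(skill))
    skill = reinit_from_experts(experts, skill)
    print(f"round {round_}: {len(experts)} expert trajs, skill -> {skill:.2f}")
```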
- Stronger Together: On-Policy Reinforcement Learning for Collaborative LLMs [20.084201133669534]
Multi-agent systems (MAS) and reinforcement learning (RL) are widely used to enhance the agentic capabilities of large language models (LLMs). Applying on-policy RL to MAS remains underexplored and presents unique challenges. We propose AT-GRPO, which includes (i) an agent- and turn-wise grouped RL algorithm tailored to MAS and (ii) a training system that supports both single- and multi-policy regimes.
arXiv Detail & Related papers (2025-10-13T06:55:09Z)
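The grouping idea can be illustrated independently of the full algorithm: the sketch below normalizes rewards within (agent, turn) groups rather than over one global batch, in the spirit of group-relative baselines. The field names and data are invented for the example.

```python
from collections import defaultdict
from statistics import mean, pstdev

# (agent_id, turn, reward) samples from a batch of multi-agent rollouts.
samples = [
    ("planner", 0, 1.0), ("planner", 0, 0.0), ("planner", 1, 0.5),
    ("coder", 0, 0.2), ("coder", 0, 0.8), ("coder", 1, 1.0), ("coder", 1, 0.0),
]

# Group rewards per (agent, turn) so each agent/turn has its own baseline.
groups = defaultdict(list)
for agent, turn, r in samples:
    groups[(agent, turn)].append(r)

def advantage(agent, turn, r):
    g = groups[(agent, turn)]
    std = pstdev(g) or 1.0  # avoid division by zero for degenerate groups
    return (r - mean(g)) / std

for agent, turn, r in samples:
    print(agent, turn, f"A={advantage(agent, turn, r):+.2f}")
```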
- SPARK: Synergistic Policy And Reward Co-Evolving Framework [84.22494672256894]
We introduce the Synergistic Policy And Reward Co-Evolving Framework (SPARK), an efficient, on-policy, and stable method that builds on RLVR. Instead of discarding rollouts and correctness data, SPARK recycles this valuable information to simultaneously train the model itself as a generative reward model. We show that SPARK achieves significant performance gains for multiple LLM and LVLM models across reasoning, reward-model, and general benchmarks.
arXiv Detail & Related papers (2025-09-26T17:50:12Z)
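A hedged sketch of the recycling step: verifier-scored rollouts are reformatted into judge-style examples so the same model can also be trained as a generative reward model. The prompt template below is an assumption, not SPARK's actual format.

```python
# Rollouts already scored by a verifier during RLVR training.
rollouts = [
    {"question": "2+2?", "response": "4", "correct": True},
    {"question": "2+2?", "response": "5", "correct": False},
]

def to_judge_example(r):
    # Reformat a scored rollout into a judging task for the same model.
    prompt = (f"Question: {r['question']}\n"
              f"Candidate answer: {r['response']}\n"
              "Is the candidate answer correct? Reply yes or no.")
    target = "yes" if r["correct"] else "no"
    return {"prompt": prompt, "target": target}

judge_data = [to_judge_example(r) for r in rollouts]
# judge_data would be mixed into the same training run as the policy
# objective, so one model co-evolves as both policy and reward model.
for ex in judge_data:
    print(ex["prompt"], "->", ex["target"])
```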
- Agentic Reinforcement Learning with Implicit Step Rewards [92.26560379363492]
Large language models (LLMs) are increasingly developed as autonomous agents using reinforcement learning (agentic RL). We introduce implicit step rewards for agentic RL (iStar), a general credit-assignment strategy that integrates seamlessly with standard RL algorithms. We evaluate our method on three challenging agent benchmarks, including WebShop and VisualSokoban, as well as open-ended social interactions with unverifiable rewards in SOTOPIA.
arXiv Detail & Related papers (2025-09-23T16:15:42Z)
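The abstract does not give iStar's exact formula, so the sketch below uses one common recipe for implicit step rewards, the policy-versus-reference log-probability ratio, purely as an illustrative stand-in for dense credit derived without step-level labels.

```python
import math

def step_reward(logp_policy, logp_ref, beta=0.1):
    # Implicit per-step reward: scaled log-ratio between the current
    # policy and a frozen reference model (an assumed recipe, not iStar's).
    return beta * (logp_policy - logp_ref)

trajectory = [  # (log pi(a_t|s_t), log pi_ref(a_t|s_t)) per step
    (math.log(0.6), math.log(0.4)),
    (math.log(0.3), math.log(0.35)),
    (math.log(0.8), math.log(0.5)),
]
outcome = 1.0  # sparse episode-level reward

dense = [step_reward(lp, lr) for lp, lr in trajectory]
# Return-to-go per step: remaining dense rewards plus the final outcome,
# giving each step its own credit on top of the sparse signal.
returns = [sum(dense[t:]) + outcome for t in range(len(dense))]
print([f"{r:.3f}" for r in returns])
```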
- RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning [125.96848846966087]
Training large language models (LLMs) as interactive agents presents unique challenges. While reinforcement learning has enabled progress in static tasks, multi-turn agent RL training remains underexplored. We propose StarPO, a general framework for trajectory-level agent RL, and introduce RAGEN, a modular system for training and evaluating LLM agents.
arXiv Detail & Related papers (2025-04-24T17:57:08Z)
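To make "trajectory-level" concrete: the unit of sampling and credit assignment is an entire multi-turn rollout, scored once, with every turn sharing the resulting advantage. The toy environment and policy below are invented for illustration.

```python
import random

def rollout(env_step, policy, max_turns=4):
    state, turns = "start", []
    for _ in range(max_turns):
        action = policy(state)
        state, done = env_step(state, action)
        turns.append((state, action))
        if done:
            break
    reward = 1.0 if state == "goal" else 0.0  # scored once, per trajectory
    return turns, reward

def toy_env(state, action):
    return ("goal", True) if action == "solve" else ("mid", False)

def toy_policy(state):
    return random.choice(["solve", "wait"])

batch = [rollout(toy_env, toy_policy) for _ in range(8)]
baseline = sum(r for _, r in batch) / len(batch)
for turns, r in batch:
    advantage = r - baseline  # every turn in the trajectory shares this signal
    print(len(turns), "turns, advantage", f"{advantage:+.2f}")
```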
- VLM-RL: A Unified Vision Language Models and Reinforcement Learning Framework for Safe Autonomous Driving [1.3107174618549584]
Reinforcement learning (RL)-based methods for learning driving policies have gained increasing attention in the autonomous driving community. Traditional RL approaches rely on manually engineered rewards, which require extensive human effort and often lack generalizability. We propose VLM-RL, a unified framework that integrates pre-trained Vision-Language Models (VLMs) with RL to generate reward signals.
arXiv Detail & Related papers (2024-12-20T04:08:11Z)
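One plausible shape of a VLM-generated reward, sketched with a stubbed similarity function: score each observation against contrasting language descriptions of desired and undesired driving, and reward the difference. The goal strings and the `vlm_similarity` stub are assumptions, not the paper's implementation.

```python
import random

def vlm_similarity(observation, text):
    # Placeholder for a real VLM call (e.g., a CLIP-style image-text score);
    # here it just returns a value in [0, 1] that is stable per input pair.
    random.seed(hash((observation, text)) % (2**32))
    return random.random()

GOAL = "the car drives safely in its lane"
ANTI_GOAL = "the car collides or leaves the road"

def vlm_reward(observation):
    # Contrast the positive and negative descriptions to shape the reward.
    return vlm_similarity(observation, GOAL) - vlm_similarity(observation, ANTI_GOAL)

for obs in ["frame_001", "frame_002"]:
    print(obs, f"reward={vlm_reward(obs):+.3f}")
```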
- Policy Agnostic RL: Offline RL and Online RL Fine-Tuning of Any Class and Backbone [72.17534881026995]
We develop an offline and online fine-tuning approach called policy-agnostic RL (PA-RL). We show the first result that successfully fine-tunes OpenVLA, a 7B generalist robot policy, autonomously with Cal-QL, an online RL fine-tuning algorithm.
arXiv Detail & Related papers (2024-12-09T17:28:03Z)
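Reading "policy-agnostic" from the abstract, one plausible sketch is to treat the policy as a black-box action sampler, rank sampled candidates with a learned critic, and distill the best candidate back via a supervised target; whether this matches PA-RL's exact procedure is an assumption.

```python
import random

def critic(state, action):
    # Stand-in Q-function that prefers actions near 0.7.
    return -abs(action - 0.7)

def black_box_policy(state, n=8):
    # Any policy class (diffusion, transformer, ...) exposed only as a sampler.
    return [random.random() for _ in range(n)]

def policy_agnostic_update(state):
    candidates = black_box_policy(state)
    best = max(candidates, key=lambda a: critic(state, a))
    # Supervised target: nudge the policy toward `best` (distillation step),
    # so no policy-gradient access to the policy's internals is needed.
    return {"state": state, "target_action": best}

print(policy_agnostic_update("s0"))
```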