CodeScaler: Scaling Code LLM Training and Test-Time Inference via Execution-Free Reward Models
- URL: http://arxiv.org/abs/2602.17684v1
- Date: Wed, 04 Feb 2026 17:56:00 GMT
- Title: CodeScaler: Scaling Code LLM Training and Test-Time Inference via Execution-Free Reward Models
- Authors: Xiao Zhu, Xinyu Zhou, Boyu Zhu, Hanxu Hu, Mingzhe Du, Haotian Zhang, Huiming Wang, Zhijiang Guo,
- Abstract summary: Execution-free reward model designed to scale both reinforcement learning training and test-time inference for code generation. CodeScaler improves Qwen3-8B-Base by an average of +11.72 points, outperforming binary execution-based RL by +1.82 points.
- Score: 32.910307078704996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning from Verifiable Rewards (RLVR) has driven recent progress in code large language models by leveraging execution-based feedback from unit tests, but its scalability is fundamentally constrained by the availability and reliability of high-quality test cases. We propose CodeScaler, an execution-free reward model designed to scale both reinforcement learning training and test-time inference for code generation. CodeScaler is trained on carefully curated preference data derived from verified code problems and incorporates syntax-aware code extraction and validity-preserving reward shaping to ensure stable and robust optimization. Across five coding benchmarks, CodeScaler improves Qwen3-8B-Base by an average of +11.72 points, outperforming binary execution-based RL by +1.82 points, and enables scalable reinforcement learning on synthetic datasets without any test cases. At inference time, CodeScaler serves as an effective test-time scaling method, achieving performance comparable to unit test approaches while providing a 10-fold reduction in latency. Moreover, CodeScaler surpasses existing reward models on RM-Bench not only in the code domain (+3.3 points), but also in general and reasoning domains (+2.7 points on average).
Related papers
- CVeDRL: An Efficient Code Verifier via Difficulty-aware Reinforcement Learning [57.24524263804788]
Code verifiers play a critical role in post-verification for LLM-based code generation. Existing supervised fine-tuning methods suffer from data scarcity, high failure rates, and poor inference efficiency. We show that naive RL with only functionality rewards fails to generate effective unit tests for difficult branches and samples.
arXiv Detail & Related papers (2026-01-30T10:33:29Z)
- Aletheia: What Makes RLVR For Code Verifiers Tick? [51.371034079170435]
Verifiers trained via Reinforcement Learning from Verifiable Rewards (RLVR) are a prominent fixture of the Large Language Model (LLM) post-training pipeline. Code verifiers remain valuable for judging model outputs in scenarios where execution feedback is hard to obtain. We examine the components of the RLVR-based verifier training recipe widely credited for its success.
arXiv Detail & Related papers (2026-01-17T22:30:45Z)
- CodeRL+: Improving Code Generation via Reinforcement with Execution Semantics Alignment [98.87395842351627]
Large Language Models (LLMs) excel at code generation by learning from vast code corpora. A fundamental semantic gap remains between their training on textual patterns and the goal of functional correctness. We propose CodeRL+, a novel approach that integrates execution semantics alignment into the RLVR training pipeline for code generation.
arXiv Detail & Related papers (2025-10-21T09:48:06Z)
- Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning [48.66688117533318]
We propose CURE, a novel reinforcement learning framework with a dedicated reward design. CURE co-evolves coding and unit test generation capabilities based on their interaction outcomes. We find that our model can serve as an effective reward model for reinforcement learning on base models.
arXiv Detail & Related papers (2025-06-03T17:58:42Z)
- CRPE: Expanding The Reasoning Capability of Large Language Model for Code Generation [5.63821063617385]
CRPE (Code Reasoning Process Enhancer) is a framework for data synthesis and model training. We develop an enhanced COT-Coder that demonstrates marked improvements in code generation tasks. Our COT-Coder-32B-StepDPO, based on Qwen2.5-Coder-32B-Base, exhibits superior performance with a pass@1 accuracy of 35.08, outperforming GPT-4o on the benchmark.
arXiv Detail & Related papers (2025-05-15T08:13:45Z)
- Learning to Solve and Verify: A Self-Play Framework for Code and Test Generation [69.62857948698436]
Recent advances in large language models (LLMs) have improved their performance on coding benchmarks. However, improvement is plateauing due to the exhaustion of readily available high-quality data. We propose Sol-Ver, a self-play solver-verifier framework that jointly improves a single model's code and test generation capacity.
arXiv Detail & Related papers (2025-02-20T18:32:19Z)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation consisting of two main components.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO provides Fine-Grained Optimization by masking unexecuted code segments, so that only executed code contributes to the model update.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches on the corresponding benchmarks.
arXiv Detail & Related papers (2024-02-02T13:14:31Z)
- Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis [16.88062487980405]
We present a novel approach to automatically obtain data consisting of function signatures and associated Unit Tests.
We show that this approach, combined with automatically generated training data, improves the performance of a pre-trained code language model.
arXiv Detail & Related papers (2023-10-20T17:13:16Z)
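Several of the papers above (CURE, Sol-Ver, StepCoder, and the actor-critic work) rely on the execution-based reward that CodeScaler's execution-free model is designed to replace. A minimal sketch of that baseline signal, a unit-test pass-rate reward, is shown below; the bare-`assert` test format and the `pass_rate_reward` name are hypothetical, and a real pipeline would run candidates in a sandbox rather than in-process.

```python
def pass_rate_reward(candidate_code: str, tests: list[str]) -> float:
    """Execution-based reward (sketch): the fraction of unit tests,
    given here as bare assert statements, that the candidate passes."""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # define the candidate's functions
    except Exception:
        return 0.0                       # code that fails to load earns nothing
    passed = 0
    for t in tests:
        try:
            exec(t, namespace)           # an AssertionError counts as a failure
            passed += 1
        except Exception:
            pass
    return passed / len(tests) if tests else 0.0
```

A binary variant (reward 1.0 only when every test passes) recovers the "binary execution-based RL" baseline that the abstract compares against; the pass-rate form gives a denser signal but still requires high-quality test cases, which is exactly the scalability bottleneck CodeScaler targets.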
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.