Patch the Distribution Mismatch: RL Rewriting Agent for Stable Off-Policy SFT
- URL: http://arxiv.org/abs/2602.11220v1
- Date: Wed, 11 Feb 2026 11:51:37 GMT
- Title: Patch the Distribution Mismatch: RL Rewriting Agent for Stable Off-Policy SFT
- Authors: Jiacheng Wang, Ping Jian, Zhen Yang, Zirong Chen, Keren Liao, Zhongbin Guo,
- Abstract summary: We propose a data-centric approach that rewrites downstream training data prior to supervised fine-tuning (SFT). We learn a rewriting policy that better matches the backbone's QA-style generation distribution while preserving diversity. Our method achieves downstream gains comparable to standard SFT while reducing forgetting on non-downstream benchmarks by 12.34% on average.
- Score: 13.387535599778305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have made rapid progress, yet adapting them to downstream scenarios still commonly relies on supervised fine-tuning (SFT). When downstream data exhibit a substantial distribution shift from the model's prior training distribution, SFT can induce catastrophic forgetting. To narrow this gap, data rewriting has been proposed as a data-centric approach that rewrites downstream training data prior to SFT. However, existing methods typically sample rewrites from a prompt-induced conditional distribution, so the resulting targets are not necessarily aligned with the model's natural QA-style generation distribution. Moreover, reliance on fixed templates can lead to diversity collapse. To address these issues, we cast data rewriting as a policy learning problem and learn a rewriting policy that better matches the backbone's QA-style generation distribution while preserving diversity. Since distributional alignment, diversity, and task consistency are automatically evaluable but difficult to optimize end-to-end with differentiable objectives, we leverage reinforcement learning to optimize the rewrite distribution under reward feedback and propose an RL-based data-rewriting agent. The agent jointly optimizes QA-style distributional alignment and diversity under a hard task-consistency gate, thereby constructing a higher-quality rewritten dataset for downstream SFT. Extensive experiments show that our method achieves downstream gains comparable to standard SFT while reducing forgetting on non-downstream benchmarks by 12.34% on average. Our code is available at https://anonymous.4open.science/r/Patch-the-Prompt-Gap-4112.
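The abstract outlines a concrete reward structure: alignment and diversity are jointly optimized, but only behind a hard task-consistency gate. Below is a minimal sketch of that reward shape; `alignment_score`, `diversity_score`, `is_task_consistent`, and the weights are hypothetical stand-ins, not the paper's actual components.

```python
# Sketch of the gated reward described in the abstract. All component
# functions and weights are hypothetical stand-ins for illustration.

def alignment_score(rewrite: str, backbone_logprob) -> float:
    # Higher when the rewrite is likely under the backbone's own
    # QA-style generation distribution (e.g., mean token log-prob).
    return backbone_logprob(rewrite)

def diversity_score(rewrite: str, accepted: list[str]) -> float:
    # Higher when the rewrite differs from already-accepted rewrites;
    # crude token-overlap distance stands in for a real diversity metric.
    tokens = set(rewrite.split())
    if not accepted or not tokens:
        return 1.0
    overlap = max(len(tokens & set(a.split())) / len(tokens) for a in accepted)
    return 1.0 - overlap

def reward(original, rewrite, accepted, backbone_logprob,
           is_task_consistent, w_align=0.7, w_div=0.3) -> float:
    # Hard gate: a rewrite that breaks task consistency earns zero reward,
    # no matter how well-aligned or diverse it is.
    if not is_task_consistent(original, rewrite):
        return 0.0
    return (w_align * alignment_score(rewrite, backbone_logprob)
            + w_div * diversity_score(rewrite, accepted))
```

The gate-then-weighted-sum shape mirrors the description; the agent presumably optimizes this signal with a policy-gradient method over generated rewrites.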
Related papers
- SimGR: Escaping the Pitfalls of Generative Decoding in LLM-based Recommendation [68.00727783181289]
A core objective in recommender systems is to accurately model the distribution of user preferences over items to enable personalized recommendations. We observe that existing methods inevitably introduce systematic bias when estimating item-level preference distributions. We propose Simply Generative Recommendation (SimGR), a framework that directly models item-level preference distributions in a shared latent space.
arXiv Detail & Related papers (2026-02-08T07:26:52Z)
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning [49.04912820721943]
Supervised fine-tuning (SFT) is computationally expensive and sometimes suffers from overfitting or bias amplification. This work studies the online batch selection family of methods, which dynamically score and filter samples during training. We develop UDS (Utility-Diversity Sampling), a framework for efficient online batch selection in SFT; a toy illustration of the utility-diversity trade-off follows.
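The two named axes, utility and diversity, suggest a per-batch score that trades off informativeness against redundancy. The greedy selector below is a toy illustration of that trade-off, not the UDS algorithm; the loss-as-utility choice and the 0.5 weight are assumptions.

```python
import numpy as np

def select_batch(losses: np.ndarray, embeddings: np.ndarray, k: int) -> list[int]:
    """Greedily pick k samples, trading utility (per-sample loss) against
    diversity (distance to already-selected embeddings). Toy illustration."""
    selected: list[int] = []
    for _ in range(k):
        best_i, best_score = -1, -np.inf
        for i in range(len(losses)):
            if i in selected:
                continue
            div = (min(np.linalg.norm(embeddings[i] - embeddings[j])
                       for j in selected) if selected else 1.0)
            score = losses[i] + 0.5 * div  # 0.5: arbitrary trade-off weight
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

# Example: keep 2 of 5 candidate samples.
rng = np.random.default_rng(0)
print(select_batch(rng.random(5), rng.random((5, 8)), k=2))
```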
arXiv Detail & Related papers (2025-10-19T15:32:01Z)
- Mind the Gap: Data Rewriting for Stable Off-Policy Supervised Fine-Tuning [33.899779762210976]
Supervised fine-tuning (SFT) of large language models can be viewed as an off-policy learning problem. Existing methods mitigate this issue with KL penalties or clipping, which passively constrain updates rather than actively reduce the gap. We propose a simple yet effective data rewriting framework that proactively shrinks the policy gap before training, as sketched below.
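One way to read "proactively shrinks the policy gap before training" is: measure how unlikely each target is under the current policy and rewrite only the unlikely ones. The sketch below shows that gating idea; `model_logprob`, `rewrite_fn`, and the threshold are hypothetical, not the paper's interface.

```python
def patch_off_policy_targets(dataset, model_logprob, rewrite_fn, threshold=-2.0):
    """Keep targets the policy already finds likely; rewrite the rest.
    model_logprob(prompt, target) -> mean per-token log-prob (hypothetical).
    rewrite_fn(prompt, target)    -> target closer to the policy (hypothetical)."""
    patched = []
    for prompt, target in dataset:
        if model_logprob(prompt, target) >= threshold:
            patched.append((prompt, target))   # close enough to on-policy
        else:
            patched.append((prompt, rewrite_fn(prompt, target)))
    return patched
```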
arXiv Detail & Related papers (2025-09-18T17:02:30Z)
- Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment [16.352863226512984]
Test-time adaptation (TTA) enhances zero-shot robustness under distribution shifts by leveraging unlabeled test data during inference. Most methods rely on backpropagation or iterative optimization, which limits scalability and hinders real-time deployment. We propose ADAPT, an Advanced Distribution-Aware and backpropagation-free Test-time adaptation method.
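The backpropagation-free claim fits a closed-form family of methods: fit a Gaussian per (pseudo-)class in feature space and classify by Mahalanobis distance, with no gradient steps. The sketch below illustrates that family under a shared-covariance assumption; none of it is ADAPT's exact formulation.

```python
import numpy as np

def fit_class_gaussians(features, labels, num_classes, eps=1e-3):
    """Closed-form per-class means plus a shared covariance (no backprop).
    features: (N, D) array, labels: (N,) integer array."""
    d = features.shape[1]
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / len(features) + eps * np.eye(d)
    return means, np.linalg.inv(cov)

def predict(features, means, cov_inv):
    # Assign each feature to the class with the smallest Mahalanobis distance.
    diffs = features[:, None, :] - means[None, :, :]           # (N, C, D)
    dists = np.einsum('ncd,de,nce->nc', diffs, cov_inv, diffs)
    return dists.argmin(axis=1)

rng = np.random.default_rng(1)
X, y = rng.normal(size=(60, 4)), rng.integers(0, 3, size=60)
mu, cov_inv = fit_class_gaussians(X, y, num_classes=3)
print(predict(X[:5], mu, cov_inv))
```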
arXiv Detail & Related papers (2025-08-21T13:42:49Z)
- Asymmetric Co-Training for Source-Free Few-Shot Domain Adaptation [5.611768906855499]
We propose an asymmetric co-training (ACT) method specifically designed for the source-free few-shot domain adaptation (SFFSDA) scenario. We use a two-step optimization process to train the target model. Our findings suggest that adapting a source pre-trained model using only a small amount of labeled target data offers a practical and dependable solution.
arXiv Detail & Related papers (2025-02-20T02:58:45Z)
- Step-wise Distribution Alignment Guided Style Prompt Tuning for Source-free Cross-domain Few-shot Learning [53.77707279483278]
Cross-domain few-shot learning methods face challenges with large-scale pre-trained models due to inaccessible source data and training strategies. This paper introduces Step-wise Distribution Alignment Guided Style Prompt Tuning (StepSPT). StepSPT implicitly narrows domain gaps through prediction distribution optimization.
arXiv Detail & Related papers (2024-11-15T09:34:07Z)
- Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams [19.921480334048756]
Test-Time Adaptation (TTA) enables adaptation and inference in test data streams with domain shifts from the source. We propose a novel Distribution Alignment loss for TTA. We surpass existing methods in non-i.i.d. scenarios and maintain competitive performance under the ideal i.i.d. assumption.
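A distribution alignment loss of this kind typically penalizes the gap between test-batch feature statistics and stored source statistics. The moment-matching version below (first two moments) is an assumption about the general shape, not the paper's exact loss.

```python
import numpy as np

def distribution_alignment_loss(test_feats, src_mean, src_var):
    """Penalize the distance between test-batch feature moments and
    source moments; smaller means better aligned."""
    mu, var = test_feats.mean(axis=0), test_feats.var(axis=0)
    return float(((mu - src_mean) ** 2).mean() + ((var - src_var) ** 2).mean())

rng = np.random.default_rng(2)
src = rng.normal(size=(256, 16))                  # source features
test = rng.normal(loc=0.5, size=(32, 16))         # shifted test stream
print(distribution_alignment_loss(test, src.mean(axis=0), src.var(axis=0)))
```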
arXiv Detail & Related papers (2024-07-16T19:33:23Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time. We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
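Class-aware alignment differs from the global moment matching above by aligning each (pseudo-)class cluster to its own source-class statistics. A minimal sketch of that distinction, with squared distance standing in for CAFA's actual objective:

```python
import numpy as np

def class_aware_alignment_loss(feats, pseudo_labels, src_class_means):
    """Pull each test feature toward the source mean of its (pseudo-)class,
    so alignment is class-discriminative rather than global."""
    diffs = feats - src_class_means[pseudo_labels]
    return float((diffs ** 2).sum(axis=1).mean())

rng = np.random.default_rng(3)
feats = rng.normal(size=(32, 8))
labels = rng.integers(0, 4, size=32)
print(class_aware_alignment_loss(feats, labels, rng.normal(size=(4, 8))))
```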
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Test-time Batch Statistics Calibration for Covariate Shift [66.7044675981449]
We propose to adapt deep models to the novel environment during inference. We present a general formulation, $\alpha$-BN, to calibrate the batch statistics. We also present a novel loss function to form a unified test-time adaptation framework, Core.
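$\alpha$-BN is usually presented as a convex mixture of source batch-norm statistics and test-batch statistics. The standalone sketch below shows that calibration; the mixing direction and default $\alpha$ are assumptions and may differ from the paper's exact definition.

```python
import numpy as np

def alpha_bn(x, src_mean, src_var, alpha=0.9, eps=1e-5):
    """Normalize x with a convex mix of test-batch and source statistics:
    mu = alpha * mu_test + (1 - alpha) * mu_source, likewise for variance."""
    mu = alpha * x.mean(axis=0) + (1 - alpha) * src_mean
    var = alpha * x.var(axis=0) + (1 - alpha) * src_var
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(4)
x = rng.normal(loc=1.0, size=(32, 8))   # covariate-shifted test batch
out = alpha_bn(x, src_mean=np.zeros(8), src_var=np.ones(8))
print(round(out.mean(), 3), round(out.std(), 3))
```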
arXiv Detail & Related papers (2021-10-06T08:45:03Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks. We introduce a new scoring method that casts a plausibility ranking task in a full-text format. We show that our method provides a much more stable training phase across random restarts.
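"Full-text format" suggests scoring each candidate as ordinary text with the pre-trained LM and ranking by likelihood, rather than training a classification head. A toy version of that scoring rule; `token_logprobs` is a hypothetical helper, not the paper's API.

```python
def rank_by_plausibility(premise, candidates, token_logprobs):
    """Rank candidates by mean per-token log-prob of the concatenated
    full text. token_logprobs(text) -> list of log-probs (hypothetical)."""
    def score(candidate):
        lps = token_logprobs(f"{premise} {candidate}")
        return sum(lps) / len(lps)
    return sorted(candidates, key=score, reverse=True)
```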
arXiv Detail & Related papers (2020-04-29T10:54:40Z)