Beyond Imitation: Recovering Dense Rewards from Demonstrations
- URL: http://arxiv.org/abs/2510.02493v1
- Date: Thu, 02 Oct 2025 18:58:26 GMT
- Title: Beyond Imitation: Recovering Dense Rewards from Demonstrations
- Authors: Jiangnan Li, Thuy-Trang Vu, Ehsan Abbasnejad, Gholamreza Haffari
- Abstract summary: Supervised fine-tuning is treated as a simple imitation learning process that only trains a policy to imitate expert behavior on demonstration datasets. We prove that the SFT process does not just learn a policy, but also an implicit, dense, token-level reward model that explains the expert demonstrations. Dense-Path REINFORCE consistently outperforms the original SFT models on instruction-following benchmarks.
- Score: 64.05543657441218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventionally, supervised fine-tuning (SFT) is treated as a simple imitation learning process that only trains a policy to imitate expert behavior on demonstration datasets. In this work, we challenge this view by establishing a fundamental equivalence between SFT and Inverse Reinforcement Learning. We prove that the SFT objective is a special case of Inverse Q-Learning, which implies that the SFT process does not just learn a policy, but also an implicit, dense, token-level reward model that explains the expert demonstrations. We then show how to recover this dense reward signal directly from the SFT model by formulating a baseline-relative reward function. The availability of such a dense reward model offers numerous benefits, providing granular credit assignment for each token generated. We demonstrate one key application by using these recovered rewards to further improve the policy with reinforcement learning. Our method, Dense-Path REINFORCE, consistently outperforms the original SFT models on instruction-following benchmarks. This work reframes SFT not merely as policy imitation but as a powerful reward learning mechanism, opening new possibilities for leveraging expert demonstrations.
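To make the baseline-relative construction concrete, here is a minimal sketch of how a dense, token-level reward could be read off an SFT model. It assumes the baseline is the pre-SFT base model and uses placeholder checkpoint names; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints: any SFT model and its pre-SFT base.
sft = AutoModelForCausalLM.from_pretrained("org/model-sft")
base = AutoModelForCausalLM.from_pretrained("org/model-base")
tok = AutoTokenizer.from_pretrained("org/model-sft")

@torch.no_grad()
def dense_rewards(input_ids: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Per-token reward r_t = beta * (log pi_sft(a_t | s_t) - log pi_base(a_t | s_t))."""
    def token_logps(model):
        logits = model(input_ids).logits[:, :-1]          # position t predicts token t+1
        logps = F.log_softmax(logits, dim=-1)
        return logps.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return beta * (token_logps(sft) - token_logps(base))  # shape [batch, seq_len - 1]
```

These per-token rewards could then drive a REINFORCE-style update on sampled completions, weighting each token's log-probability by its recovered reward.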
Related papers
- Discovering Process-Outcome Credit in Multi-Step LLM Reasoning [3.584086358722852]
Reinforcement Learning (RL) serves as a potent paradigm for enhancing reasoning capabilities in Large Language Models (LLMs). We propose a novel framework designed to provide continuous reward signals. Our model exhibits superior out-of-distribution robustness, demonstrating promising zero-shot transfer capabilities to unseen and challenging reasoning tasks.
arXiv Detail & Related papers (2026-02-01T05:44:09Z)
- Online SFT for LLM Reasoning: Surprising Effectiveness of Self-Tuning without Rewards [24.382221008037188]
We present a self-help online supervised finetuning (OSFT) paradigm for LLM reasoning. OSFT is a highly efficient training strategy for LLM reasoning, as it is reward-free and uses just one rollout by default. We believe that OSFT offers an efficient and promising alternative to more complex, reward-based training paradigms.
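A rough illustration of the reward-free loop described above follows: sample one rollout per prompt, then fine-tune on it with the ordinary SFT loss. Sampling settings and prompt masking here are assumptions, not details from the paper.

```python
import torch

def osft_step(model, tok, prompt: str, optimizer) -> float:
    """One OSFT-style update: self-generate, then cross-entropy on the rollout."""
    inputs = tok(prompt, return_tensors="pt")
    rollout = model.generate(**inputs, do_sample=True, max_new_tokens=256)
    labels = rollout.clone()
    labels[:, : inputs["input_ids"].shape[1]] = -100      # ignore the prompt in the loss
    loss = model(input_ids=rollout, labels=labels).loss   # plain SFT objective, no reward
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```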
arXiv Detail & Related papers (2025-10-21T17:15:56Z)
- On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification [50.30835290642069]
We present a simple yet theoretically motivated improvement to Supervised Fine-Tuning (SFT) for Large Language Models (LLMs). We reveal that standard SFT gradients implicitly encode a problematic reward structure that may severely restrict the generalization capabilities of the model. We propose Dynamic Fine-Tuning (DFT), which stabilizes gradient updates for each token by dynamically rescaling the objective function with the probability of this token.
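The rescaling described above has a direct implementation; the sketch below (plain PyTorch, logits assumed to be [batch, seq, vocab]) multiplies each token's cross-entropy by its detached target probability.

```python
import torch
import torch.nn.functional as F

def dft_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """DFT-style objective: token cross-entropy rescaled by the target token's probability."""
    logp = F.log_softmax(logits, dim=-1)
    logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)   # log p(y_t | y_<t)
    return -(logp.exp().detach() * logp).mean()                # stop-gradient through the weight
```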
arXiv Detail & Related papers (2025-08-07T17:59:04Z)
- From Novelty to Imitation: Self-Distilled Rewards for Offline Reinforcement Learning [7.559920170287638]
Offline Reinforcement Learning (RL) aims to learn effective policies from a static dataset without requiring further agent-environment interactions. We propose ReLOAD, a novel reward annotation framework for offline RL. Our approach adapts Random Network Distillation (RND) to generate intrinsic rewards from expert demonstrations.
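A minimal sketch of the RND adaptation follows: a predictor network is fit to a frozen random target on expert states, so low prediction error flags expert-like states. The reward sign, scale, and network sizes here are assumptions.

```python
import torch
import torch.nn as nn

obs_dim = 17  # placeholder observation size
target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, 64))
for p in target.parameters():
    p.requires_grad_(False)  # random and frozen
predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def fit_on_expert(expert_obs: torch.Tensor) -> None:
    """Train the predictor to match the target on expert states ([N, obs_dim])."""
    loss = (predictor(expert_obs) - target(expert_obs)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

@torch.no_grad()
def annotate_reward(obs: torch.Tensor) -> torch.Tensor:
    """Low distillation error => expert-like state => high annotated reward."""
    err = (predictor(obs) - target(obs)).pow(2).mean(dim=-1)
    return -err
```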
arXiv Detail & Related papers (2025-07-17T06:16:06Z)
- Blending Supervised and Reinforcement Fine-Tuning with Prefix Sampling [43.835234728790795]
Prefix-RFT is a hybrid approach that synergizes learning from both demonstration and exploration. It not only surpasses the performance of standalone SFT and RFT but also outperforms parallel mixed-policy RFT methods.
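One plausible reading of the prefix-sampling idea is sketched below: condition the policy on a randomly truncated expert demonstration and let the RFT stage score the sampled continuation. The truncation rule and sampling settings are assumptions.

```python
import random
import torch

def prefix_rollout(model, tok, prompt: str, demo_ids: torch.Tensor) -> torch.Tensor:
    """Sample a completion seeded with a random-length prefix of the demonstration."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    cut = random.randint(0, demo_ids.shape[1])                 # demonstration prefix length
    seeded = torch.cat([prompt_ids, demo_ids[:, :cut]], dim=1)
    return model.generate(seeded, do_sample=True, max_new_tokens=256)
```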
arXiv Detail & Related papers (2025-07-02T13:04:09Z)
- Implicit Reward as the Bridge: A Unified View of SFT and DPO Connections [65.36449542323277]
We present a unified theoretical framework bridging Supervised Fine-Tuning (SFT) and preference learning in Large Language Model (LLM) post-training. We propose a simple yet effective learning rate reduction approach that yields significant performance improvements.
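The proposed fix is operationally simple; as a sketch, it amounts to running SFT with a smaller peak learning rate than usual. The values below are illustrative, not the paper's.

```python
from torch.optim import AdamW

# `model` is the LLM being fine-tuned; a reduced peak LR such as 2e-6
# (versus a typical SFT default like 2e-5) is the entire intervention.
optimizer = AdamW(model.parameters(), lr=2e-6)
```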
arXiv Detail & Related papers (2025-06-15T05:42:29Z)
- Distill Not Only Data but Also Rewards: Can Smaller Language Models Surpass Larger Ones? [58.80794196076336]
Distilling large language models (LLMs) typically involves transferring the teacher model's responses through supervised fine-tuning (SFT). We propose a novel distillation pipeline that transfers both responses and rewards. Our method generates pseudo-rewards through a self-supervised mechanism that leverages the inherent structure of both teacher and student responses.
arXiv Detail & Related papers (2025-02-26T20:50:11Z)
- Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
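As a generic illustration of alternating reward and policy updates on demonstration data (not the paper's exact algorithm), the sketch below assumes a `policy` object with `sample`/`log_prob` methods and a scalar-output `reward_model`; all interfaces are placeholders.

```python
def irl_step(prompts, expert_ids, reward_model, policy, r_opt, p_opt):
    """One alternating IRL update: fit the reward, then improve the policy."""
    # Reward step: expert completions should score higher than policy samples.
    sampled = policy.sample(prompts)
    r_loss = reward_model(sampled).mean() - reward_model(expert_ids).mean()
    r_loss.backward()
    r_opt.step()
    r_opt.zero_grad()
    # Policy step: REINFORCE against the current (frozen) reward.
    sampled = policy.sample(prompts)
    p_loss = -(reward_model(sampled).detach() * policy.log_prob(sampled)).mean()
    p_loss.backward()
    p_opt.step()
    p_opt.zero_grad()
```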
arXiv Detail & Related papers (2024-05-28T07:11:05Z)
- Dense Reward for Free in Reinforcement Learning from Human Feedback [64.92448888346125]
We leverage the fact that the reward model contains more information than just its scalar output.
We use these attention weights to redistribute the reward along the whole completion.
Empirically, we show that it stabilises training, accelerates the rate of learning, and, in practical cases, may lead to better local optima.
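A compact sketch of the redistribution step follows, assuming a HuggingFace-style reward model that exposes attention maps. Which layer and heads to read from, and reading the scalar off the final position, are assumptions.

```python
import torch

@torch.no_grad()
def redistribute(reward_model, input_ids: torch.Tensor, scalar_reward: torch.Tensor):
    """Spread a sequence-level reward over tokens via last-layer attention weights."""
    out = reward_model(input_ids, output_attentions=True)
    attn = out.attentions[-1].mean(dim=1)[:, -1, :]        # [batch, seq]: last position -> all tokens
    weights = attn / attn.sum(dim=-1, keepdim=True)        # normalize to a distribution
    return scalar_reward.unsqueeze(-1) * weights           # dense per-token reward
```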
arXiv Detail & Related papers (2024-02-01T17:10:35Z)