Beyond Output Critique: Self-Correction via Task Distillation
- URL: http://arxiv.org/abs/2602.00871v1
- Date: Sat, 31 Jan 2026 19:15:41 GMT
- Title: Beyond Output Critique: Self-Correction via Task Distillation
- Authors: Hossein A. Rahmani, Mengting Wan, Pei Zhou, Longqi Yang, Nick Craswell, Emine Yilmaz, Sujay Kumar Jauhar
- Abstract summary: We propose a framework that introduces an intermediate step of task abstraction before solution refinement. Given an input and an initial response, the model first distills the task into a structured template that captures key variables, constraints, and problem structure. This abstraction then guides solution instantiation, grounding subsequent responses in a clearer understanding of the task.
- Score: 36.44752912823049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have shown promising self-correction abilities, where iterative refinement improves the quality of generated responses. However, most existing approaches operate at the level of output critique, patching surface errors while often failing to correct deeper reasoning flaws. We propose SELF-THOUGHT, a framework that introduces an intermediate step of task abstraction before solution refinement. Given an input and an initial response, the model first distills the task into a structured template that captures key variables, constraints, and problem structure. This abstraction then guides solution instantiation, grounding subsequent responses in a clearer understanding of the task and reducing error propagation. Crucially, we show that these abstractions can be transferred across models: templates generated by larger models can serve as structured guides for smaller LLMs, which typically struggle with intrinsic self-correction. By reusing distilled task structures, smaller models achieve more reliable refinements without heavy fine-tuning or reliance on external verifiers. Experiments across diverse reasoning tasks demonstrate that SELF-THOUGHT improves accuracy, robustness, and generalization for both large and small models, offering a scalable path toward more reliable self-correcting language systems.
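Read procedurally, the abstract describes a distill-then-instantiate loop. The sketch below is a minimal rendering of that loop in Python; `call_model`, the prompt wording, and the template fields are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the two-stage loop the abstract describes.
# `call_model`, the prompts, and the template fields are assumptions.

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; plug in any chat-completion client."""
    raise NotImplementedError

def distill_task(question: str, initial_answer: str) -> str:
    """Step 1: abstract the task into a structured template."""
    return call_model(
        "Distill this task into a template listing key variables, "
        "constraints, and problem structure.\n"
        f"Task: {question}\nInitial answer: {initial_answer}"
    )

def instantiate_solution(question: str, template: str) -> str:
    """Step 2: re-solve the task, grounded in the template."""
    return call_model(
        f"Task template:\n{template}\n\nSolve step by step: {question}"
    )

def self_thought(question: str) -> str:
    initial = call_model(question)               # initial response
    template = distill_task(question, initial)   # task abstraction
    return instantiate_solution(question, template)  # refined response
```

The cross-model transfer the abstract describes would then amount to producing `template` with a large model and handing it to a smaller model's `instantiate_solution` call.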
Related papers
- CRAFT: Continuous Reasoning and Agentic Feedback Tuning for Multimodal Text-to-Image Generation [0.0]
CRAFT (Continuous Reasoning and Agentic Feedback Tuning) is a training-free, model-agnostic framework that brings a structured reasoning paradigm to multimodal image generation. It consistently improves compositional accuracy, text rendering, and preference-based evaluations. These improvements incur only a negligible inference-time overhead, allowing smaller or cheaper models to approach the quality of substantially more expensive systems.
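The summary names the mechanism only at a high level; one plausible shape for such a training-free feedback loop is sketched below. All function names and the stopping rule are assumptions, not CRAFT's published algorithm.

```python
# Hypothetical critique-and-refine loop for text-to-image generation;
# names and the convergence signal are assumptions, not CRAFT's method.

def generate_image(prompt: str):
    raise NotImplementedError  # any text-to-image model

def critique(prompt: str, image) -> str:
    raise NotImplementedError  # a VLM judging prompt/image mismatch

def revise_prompt(prompt: str, feedback: str) -> str:
    raise NotImplementedError  # an LLM rewriting the prompt

def agentic_t2i(prompt: str, max_rounds: int = 3):
    image = generate_image(prompt)
    for _ in range(max_rounds):
        feedback = critique(prompt, image)
        if feedback.strip().lower() == "ok":  # assumed stop signal
            break
        prompt = revise_prompt(prompt, feedback)
        image = generate_image(prompt)
    return image
```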
arXiv Detail & Related papers (2025-12-23T13:44:41Z)
- When Words Change the Model: Sensitivity of LLMs for Constraint Programming Modelling [1.052782170493037]
Large language models show impressive results in automatically generating models for classical benchmarks. However, many standard constraint programming (CP) problems are likely included in the training data of these models. We show that while LLMs can produce syntactically valid and semantically plausible models, their performance drops sharply under contextual and linguistic variation.
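A minimal harness for the kind of robustness measurement the summary describes might look like this; the perturbation set and the checker are assumed, not taken from the paper.

```python
# Sketch of a sensitivity harness: generate a CP model from each
# rephrased problem statement and check whether it still solves
# correctly. The rephrasings and checker are illustrative assumptions.

def generate_cp_model(description: str) -> str:
    raise NotImplementedError  # LLM emits, e.g., a MiniZinc model

def solves_correctly(model_source: str) -> bool:
    raise NotImplementedError  # run a solver, compare to a reference

def sensitivity(description: str, rephrasings: list[str]) -> float:
    """Fraction of linguistic variants whose generated model still works."""
    variants = [description] + rephrasings
    ok = sum(solves_correctly(generate_cp_model(v)) for v in variants)
    return ok / len(variants)
```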
arXiv Detail & Related papers (2025-11-18T10:40:32Z)
- From Denoising to Refining: A Corrective Framework for Vision-Language Diffusion Model [72.73512218682187]
We introduce ReDiff, a refining-enhanced diffusion framework that teaches the model to identify and correct its own errors. Our approach features a two-stage training process: first, we instill a foundational revision capability by training the model to revise synthetic errors; second, we implement a novel online self-correction loop. This mistake-driven learning endows the model with the crucial ability to revisit and refine its already generated output, effectively breaking the error cascade.
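The two training signals can be sketched roughly as follows; the error-injection scheme and the model interface are assumptions, not ReDiff's code.

```python
# Rough sketch of the two-stage training the summary describes;
# corruption scheme and model interface are assumptions.
import random

def inject_errors(text: str, rate: float = 0.15) -> str:
    """Corrupt ground-truth text with synthetic placeholder errors."""
    words = text.split()
    return " ".join("<err>" if random.random() < rate else w for w in words)

def train_revision_step(model, corrupted: str, clean: str):
    raise NotImplementedError  # supervised step: corrupted -> clean

def stage1_step(model, clean_target: str):
    """Stage 1: instill revision ability on synthetic corruptions."""
    train_revision_step(model, inject_errors(clean_target), clean_target)

def stage2_step(model, prompt: str, clean_target: str):
    """Stage 2: online loop, the model revises its *own* fresh draft."""
    draft = model.generate(prompt)  # assumed interface
    train_revision_step(model, draft, clean_target)
```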
arXiv Detail & Related papers (2025-10-22T06:58:55Z)
- Self-Consistency as a Free Lunch: Reducing Hallucinations in Vision-Language Models via Self-Reflection [71.8243083897721]
Vision-language models often hallucinate details, generating non-existent objects or inaccurate attributes that compromise output reliability. We present a novel framework that leverages the model's self-consistency between long responses and short answers to generate preference pairs for training.
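One way the described pair-mining could work is sketched below; the consistency check, and which side of an inconsistent pair counts as preferred, are assumptions rather than the paper's specification.

```python
# Hypothetical sketch of mining preference pairs from long/short
# self-consistency; the check and the chosen/rejected roles are assumed.

def vlm(prompt: str, image) -> str:
    raise NotImplementedError  # the vision-language model under training

def consistent(long_response: str, short_answer: str, claim: str) -> bool:
    raise NotImplementedError  # e.g., string or NLI match on the claim

def mine_preference_pair(image, question: str, claim: str):
    long_resp = vlm(f"Describe in detail: {question}", image)
    short_ans = vlm(f"Answer in one word: is there {claim}?", image)
    if consistent(long_resp, short_ans, claim):
        return None                   # consistent: no training signal
    return {"prompt": question,       # inconsistent: treat the short
            "chosen": short_ans,      # answer as the more reliable side
            "rejected": long_resp}    # (an assumption of this sketch)
```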
arXiv Detail & Related papers (2025-09-27T10:37:11Z)
- Less is More Tokens: Efficient Math Reasoning via Difficulty-Aware Chain-of-Thought Distillation [82.2288581878096]
We present a framework for difficulty-aware reasoning that teaches models to dynamically adjust reasoning depth based on problem complexity. We show that models can be endowed with such dynamic inference pathways without any architectural modifications.
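As a toy illustration of difficulty-aware depth control, a target reasoning budget might be chosen per problem when building distillation data; the thresholds and budgets below are invented, not the paper's values.

```python
# Toy illustration of difficulty-aware reasoning depth; thresholds and
# budgets are invented, and word count stands in for a token budget.

def reasoning_budget(difficulty: float) -> int:
    """Map difficulty in [0, 1] to a target chain-of-thought length."""
    if difficulty < 0.3:
        return 64      # easy: answer almost directly
    if difficulty < 0.7:
        return 256     # medium: short derivation
    return 1024        # hard: full chain of thought

def build_distillation_example(problem: str, teacher_cot: str,
                               difficulty: float) -> dict:
    budget = reasoning_budget(difficulty)
    # Crudely truncate the teacher's reasoning to the chosen budget.
    target = " ".join(teacher_cot.split()[:budget])
    return {"input": problem, "target": target}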
arXiv Detail & Related papers (2025-09-05T16:40:13Z)
- Language Models can perform Single-Utterance Self-Correction of Perturbed Reasoning [4.768151813962547]
Large Language Models (LLMs) have demonstrated impressive mathematical reasoning capabilities. However, their performance remains brittle to minor variations in problem description and prompting strategy. To better understand the self-correction capabilities of recent models, we conduct experiments measuring models' ability to self-correct synthetically perturbed reasoning.
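The measurement protocol suggested by the title could be sketched as follows; the perturbation and the answer check are toy assumptions, not the paper's exact setup.

```python
# Sketch of the probe: corrupt one step of a correct chain of thought,
# then check whether the model recovers the right final answer in a
# single continuation. Perturbation and check are toy assumptions.

def perturb_step(cot_steps: list[str], index: int) -> list[str]:
    corrupted = cot_steps.copy()
    # Toy arithmetic corruption; real perturbations would be systematic.
    corrupted[index] = corrupted[index].replace("=", "= 1 +")
    return corrupted

def continue_from(prefix: str) -> str:
    raise NotImplementedError  # model completes the reasoning

def recovers(question: str, cot_steps: list[str], gold: str) -> bool:
    corrupted = perturb_step(cot_steps, index=len(cot_steps) // 2)
    prefix = question + "\n" + "\n".join(corrupted)
    return gold in continue_from(prefix)  # crude answer check
```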
arXiv Detail & Related papers (2025-06-18T21:35:44Z)
- PAG: Multi-Turn Reinforced LLM Self-Correction with Policy as Generative Verifier [18.771754895027616]
Policy as Generative Verifier (PAG) is a framework that empowers Large Language Models to self-correct by alternating between policy and verifier roles. It alleviates model collapse and jointly enhances both reasoning and verification abilities.
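Stripped of the reinforcement-learning training loop the title refers to, the alternating roles might look like this at inference time; the prompts and the accept signal are assumptions.

```python
# Sketch of one model alternating policy and verifier roles; prompts
# and the accept signal are assumed, and the RL training is omitted.

def model(prompt: str) -> str:
    raise NotImplementedError  # single LLM playing both roles

def pag_style_inference(question: str, max_turns: int = 3) -> str:
    answer = model(f"Solve: {question}")                  # policy turn
    for _ in range(max_turns):
        verdict = model(                                   # verifier turn
            f"Question: {question}\nAnswer: {answer}\n"
            "Is this correct? Reply CORRECT or give a critique."
        )
        if verdict.strip().startswith("CORRECT"):
            break
        answer = model(                                    # policy turn
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {verdict}\nGive a corrected answer."
        )
    return answer
```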
arXiv Detail & Related papers (2025-06-12T06:59:35Z)
- Well Begun is Half Done: Low-resource Preference Alignment by Weak-to-Strong Decoding [26.416630784362525]
Large Language Models (LLMs) require alignment with human preferences to avoid generating offensive, false, or meaningless content. We propose a novel framework, Weak-to-Strong Decoding (WSD), to enhance the alignment ability of base models. We also collect a new dataset, GenAligner, to fine-tune a small-sized Pilot-3B as the draft model.
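The decoding scheme the title alludes to can be sketched in a few lines; the interfaces and the handoff length are assumptions, not WSD's specification.

```python
# Sketch of weak-to-strong decoding: a small aligned draft model writes
# the opening of the response and the strong base model continues it.
# Interfaces and the handoff point are assumptions.

def draft_model(prompt: str, max_tokens: int) -> str:
    raise NotImplementedError  # small aligned pilot (the summary's Pilot-3B)

def base_model(prompt: str) -> str:
    raise NotImplementedError  # large unaligned base model

def weak_to_strong_decode(prompt: str, draft_tokens: int = 64) -> str:
    opening = draft_model(prompt, max_tokens=draft_tokens)  # well begun...
    return opening + base_model(prompt + opening)           # ...half done
```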
arXiv Detail & Related papers (2025-06-09T05:21:22Z)
- Self-Improvement in Language Models: The Sharpening Mechanism [70.9248553790022]
We offer a new perspective on the capabilities of self-improvement through a lens we refer to as sharpening. Motivated by the observation that language models are often better at verifying response quality than they are at generating correct responses, we formalize self-improvement as using the model itself as a verifier during post-training. We analyze two natural families of self-improvement algorithms based on SFT and RLHF.
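A best-of-N version of sharpening, consistent with the summary's description, might be sketched as follows; the scoring interface and the value of N are assumptions.

```python
# Sketch of "sharpening": sample several responses, score each with the
# model's own verification ability, and keep the best one as SFT data.
# The scoring interface and N are assumptions.

def generate(prompt: str) -> str:
    raise NotImplementedError  # sampled response from the model

def self_score(prompt: str, response: str) -> float:
    raise NotImplementedError  # the same model rating the response

def sharpen_example(prompt: str, n: int = 8) -> dict:
    candidates = [generate(prompt) for _ in range(n)]
    best = max(candidates, key=lambda r: self_score(prompt, r))
    return {"input": prompt, "target": best}  # SFT-style training pair
```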
arXiv Detail & Related papers (2024-12-02T20:24:17Z)
- Small Language Models Need Strong Verifiers to Self-Correct Reasoning [69.94251699982388]
Self-correction has emerged as a promising solution to boost the reasoning performance of large language models (LLMs).
This work explores whether small (≤ 13B) language models (LMs) have the ability to self-correct on reasoning tasks with minimal inputs from stronger LMs.
arXiv Detail & Related papers (2024-04-26T03:41:28Z)