SpiralThinker: Latent Reasoning through an Iterative Process with Text-Latent Interleaving
- URL: http://arxiv.org/abs/2511.08983v1
- Date: Thu, 13 Nov 2025 01:23:39 GMT
- Authors: Shengmin Piao, Sanghyun Park
- Abstract summary: SpiralThinker is a unified framework that performs iterative updates over latent representations. A progressive alignment objective combined with structured annotations maintains coherence between latent and textual reasoning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in large reasoning models have been driven by reinforcement learning and test-time scaling, accompanied by growing interest in latent rather than purely textual reasoning. However, existing latent reasoning methods lack mechanisms to ensure stable evolution of latent representations and a systematic way to interleave implicit and explicit reasoning. We introduce SpiralThinker, a unified framework that performs iterative updates over latent representations, enabling extended implicit reasoning without generating additional tokens. A progressive alignment objective combined with structured annotations maintains coherence between latent and textual reasoning. Across mathematical, logical, and commonsense reasoning tasks, SpiralThinker achieves the best overall performance among latent reasoning approaches, consistently surpassing previous methods across all benchmarks. Detailed analyses reveal that both iteration and alignment are indispensable, the numbers of latent tokens and iterations exhibit dataset-specific optima, and appropriate alignment proves critical for an effective iterative process. Overall, SpiralThinker bridges iterative computation and latent reasoning, demonstrating that aligned iterative updates can reliably steer reasoning in the latent space.
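The abstract's core mechanism can be illustrated with a toy sketch: instead of decoding a token at every step, the model refines a hidden state in place for several "latent" iterations, and only the final refined state feeds back into textual decoding. This is a minimal, hypothetical illustration of the idea, not the paper's implementation; the weights, dimensions, and function names below are all illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for one implicit reasoning update over the
# hidden state (names and shapes are assumptions, not from the paper).
rng = np.random.default_rng(0)
HIDDEN = 8
W = rng.normal(scale=0.5, size=(HIDDEN, HIDDEN))
b = rng.normal(scale=0.1, size=HIDDEN)

def latent_step(h):
    """Refine the latent state once; no token is emitted."""
    return np.tanh(W @ h + b)

def spiral_iterate(h0, k):
    """Apply k latent updates; only the final state resumes decoding."""
    h = h0
    for _ in range(k):
        h = latent_step(h)
    return h

h0 = rng.normal(size=HIDDEN)
h_refined = spiral_iterate(h0, k=4)
print(h_refined.shape)  # prints (8,)
```

The point of the sketch is the control flow: extended implicit computation happens between token emissions, so reasoning depth is decoupled from output length. The paper's alignment objective, which keeps these latent updates coherent with textual reasoning, is not modeled here.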
Related papers
- LaSER: Internalizing Explicit Reasoning into Latent Space for Dense Retrieval [74.72139580745511]
LaSER is a novel self-distillation framework that internalizes explicit reasoning into the latent space of retrievers. Our method successfully combines the reasoning depth of explicit CoT pipelines with the inference efficiency of standard dense retrievers.
arXiv Detail & Related papers (2026-03-02T04:11:18Z)
- Parallel Latent Reasoning for Sequential Recommendation [23.624137982116867]
We propose PLR, a novel framework for exploring multiple diverse reasoning trajectories simultaneously. PLR constructs parallel reasoning streams through learnable trigger tokens in continuous latent space. Experiments on three real-world datasets demonstrate that PLR substantially outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2026-01-06T16:25:48Z)
- Think Consistently, Reason Efficiently: Energy-Based Calibration for Implicit Chain-of-Thought [33.267497114389734]
Large Language Models (LLMs) have demonstrated strong reasoning capabilities through Chain-of-Thought (CoT) prompting. CoT methods rely on discrete token-level reasoning processes that are prone to error propagation and limited by vocabulary. We propose EBM-CoT, an Energy-Based Chain-of-Thought framework that refines latent thought representations through an energy-based model.
arXiv Detail & Related papers (2025-11-10T14:10:58Z)
- A Survey on Parallel Reasoning [58.66122129692264]
We first present a formal definition of parallel reasoning and clarify its distinction from related concepts like Chain-of-Thought. We then organize and discuss advanced techniques based on a novel taxonomy, including non-interactive reasoning, interactive reasoning, and efficiency-focused decoding strategies. We highlight the core challenges of parallel reasoning and suggest potential directions for future research.
arXiv Detail & Related papers (2025-10-14T05:42:19Z)
- LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning [30.62691333490551]
Large Language Models (LLMs) demonstrate their reasoning ability through chain-of-thought generation. We propose LaDiR, a novel reasoning framework that unifies the expressiveness of continuous latent representation. LaDiR consistently improves accuracy, diversity, and interpretability over existing autoregressive, diffusion-based, and latent reasoning methods.
arXiv Detail & Related papers (2025-10-06T08:15:03Z)
- Implicit Reasoning in Large Language Models: A Comprehensive Survey [67.53966514728383]
Large Language Models (LLMs) have demonstrated strong generalization across a wide range of tasks. Recent studies have shifted attention from explicit chain-of-thought prompting toward implicit reasoning. This survey introduces a taxonomy centered on execution paradigms, shifting the focus from representational forms to computational strategies.
arXiv Detail & Related papers (2025-09-02T14:16:02Z)
- A Survey on Latent Reasoning [100.54120559169735]
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities. CoT reasoning that verbalizes intermediate steps limits the model's expressive bandwidth. Latent reasoning tackles this bottleneck by performing multi-step inference entirely in the model's continuous hidden state.
arXiv Detail & Related papers (2025-07-08T17:29:07Z)
- ConciseHint: Boosting Efficient Reasoning via Continuous Concise Hints during Generation [74.37307916314407]
We propose a framework dubbed ConciseHint, which continuously encourages the reasoning model to speak concisely. Experiments on state-of-the-art LRMs, including the DeepSeek-R1 and Qwen-3 series, demonstrate that our method can effectively produce concise reasoning.
arXiv Detail & Related papers (2025-06-23T16:20:44Z)
- Rationale-Augmented Ensembles in Language Models [53.45015291520658]
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component to robustly improve performance.
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
arXiv Detail & Related papers (2022-07-02T06:20:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information listed here and is not responsible for any consequences arising from its use.