Continuous Chain of Thought Enables Parallel Exploration and Reasoning
- URL: http://arxiv.org/abs/2505.23648v1
- Date: Thu, 29 May 2025 16:58:28 GMT
- Title: Continuous Chain of Thought Enables Parallel Exploration and Reasoning
- Authors: Halil Alperen Gozeten, M. Emrullah Ildiz, Xuechen Zhang, Hrayr Harutyunyan, Ankit Singh Rawat, Samet Oymak
- Abstract summary: Current language models generate chain-of-thought traces by autoregressively sampling tokens from a finite vocabulary. Our work examines the benefits of continuously-valued tokens (CoT2) through logical reasoning tasks. We show that CoT2 allows the model to track multiple traces in parallel and quantify its benefits for inference efficiency.
- Score: 38.59659461841282
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current language models generate chain-of-thought traces by autoregressively sampling tokens from a finite vocabulary. While this discrete sampling has achieved remarkable success, conducting chain-of-thought with continuously-valued tokens (CoT2) offers a richer and more expressive alternative. Our work examines the benefits of CoT2 through logical reasoning tasks that inherently require search capabilities, and provides optimization and exploration methods for CoT2. Theoretically, we show that CoT2 allows the model to track multiple traces in parallel and quantify its benefits for inference efficiency. Notably, a one-layer transformer equipped with CoT2 can provably solve the combinatorial "subset sum problem" given sufficient embedding dimension. These insights lead to a novel and effective supervision strategy where we match the softmax outputs to the empirical token distributions of a set of target traces. Complementing this, we introduce sampling strategies that unlock policy optimization and self-improvement for CoT2. Our first strategy samples and composes $K$ discrete tokens at each decoding step to control the level of parallelism, and reduces to standard CoT when $K=1$. Our second strategy relies on continuous exploration over the probability simplex. Experiments confirm that policy optimization with CoT2 indeed improves the performance of the model beyond its initial discrete or continuous supervision.
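To make the first sampling strategy concrete, here is a minimal sketch of composing $K$ sampled tokens into one continuous token per decoding step. The function and argument names, and the uniform averaging of embeddings, are illustrative assumptions rather than the authors' implementation; $K=1$ recovers standard discrete CoT.

```python
# Hedged sketch of the K-token composition strategy described in the abstract:
# sample K token ids from the softmax distribution and feed back the mean of
# their embeddings as a single continuous token. `embedding_table` and the
# uniform averaging are illustrative assumptions, not the paper's API.
import torch

def cot2_decode_step(logits: torch.Tensor,
                     embedding_table: torch.Tensor,
                     k: int = 4) -> torch.Tensor:
    """Compose K sampled tokens into one continuous input token.

    logits: (vocab_size,) next-token scores.
    embedding_table: (vocab_size, d_model) token embedding matrix.
    Returns a (d_model,) continuous token; k=1 reduces to standard CoT.
    """
    probs = torch.softmax(logits, dim=-1)
    # Sample K distinct token ids; K controls the degree of parallelism.
    token_ids = torch.multinomial(probs, num_samples=k, replacement=False)
    # Compose the sampled embeddings into one continuous token.
    return embedding_table[token_ids].mean(dim=0)
```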
Related papers
- S$^4$C: Speculative Sampling with Syntactic and Semantic Coherence for Efficient Inference of Large Language Models [38.784951111677856]
Large language models (LLMs) exhibit remarkable reasoning capabilities across diverse downstream tasks. Their autoregressive nature leads to substantial inference latency, posing challenges for real-time applications. We propose a Speculative Sampling with Syntactic and Semantic Coherence (S$^4$C) framework, which extends speculative sampling by leveraging multi-head drafting.
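As background for the draft-then-verify pattern that S$^4$C extends, below is a simplified, generic speculative-decoding sketch with greedy verification; it does not model S$^4$C's multi-head drafting or coherence checks, and `draft_next`/`target_next` are assumed callables.

```python
# Generic, simplified speculative decoding (greedy-verification variant):
# a cheap draft model proposes several tokens, the target model keeps the
# agreeing prefix and contributes one token at the first disagreement.
from typing import Callable

def speculative_decode(prefix: list[int],
                       draft_next: Callable[[list[int]], int],
                       target_next: Callable[[list[int]], int],
                       draft_len: int = 4) -> list[int]:
    """One draft-then-verify round; returns the extended token sequence."""
    # Propose draft_len tokens with the cheap draft model.
    draft = []
    for _ in range(draft_len):
        draft.append(draft_next(prefix + draft))
    # Keep draft tokens only while the target model's greedy choice agrees.
    accepted = []
    for tok in draft:
        if target_next(prefix + accepted) == tok:
            accepted.append(tok)   # target agrees: keep the draft token
        else:
            break                  # first disagreement ends acceptance
    # Always emit one token from the target at the stopping point.
    accepted.append(target_next(prefix + accepted))
    return prefix + accepted
```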
arXiv Detail & Related papers (2025-06-17T03:38:19Z)
- Token Signature: Predicting Chain-of-Thought Gains with Token Decoding Feature in Large Language Models [9.282278040339138]
The Chain-of-Thought (CoT) technique has proven effective in improving the performance of large language models (LLMs) on complex reasoning tasks. We make a preliminary observation that the monotonicity of token probability distributions may be correlated with the gains achieved through CoT reasoning. We propose two indicators based on the token probability distribution to assess CoT effectiveness across different tasks.
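As a rough illustration, one can score a decoded trace by how monotonically its top-token probabilities evolve and use that as a cheap signal for whether CoT is likely to help. The specific statistic below (Spearman rank correlation) and the function name are assumptions for exposition, not the paper's exact indicators.

```python
# Hedged sketch: a monotonicity score over a trace's top-token probabilities,
# computed as the Spearman rank correlation with the decoding step index.
from scipy.stats import spearmanr

def monotonicity_indicator(top_token_probs: list[float]) -> float:
    """Rank correlation between step index and top-token probability.

    Values near +1 or -1 indicate a strongly monotone probability trend;
    values near 0 indicate a non-monotone trace.
    """
    steps = list(range(len(top_token_probs)))
    rho, _ = spearmanr(steps, top_token_probs)
    return rho
```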
arXiv Detail & Related papers (2025-06-06T11:53:27Z)
- Stochastic Primal-Dual Double Block-Coordinate for Two-way Partial AUC Maximization [56.805574957824135]
Two-way partial AUC (TPAUC) is a critical performance metric for binary classification with imbalanced data. Existing algorithms for TPAUC optimization remain under-explored. We introduce two innovative stochastic primal-dual double block-coordinate algorithms for TPAUC optimization.
arXiv Detail & Related papers (2025-05-28T03:55:05Z)
- Fractured Chain-of-Thought Reasoning [61.647243580650446]
We introduce Fractured Sampling, a unified inference-time strategy that interpolates between full CoT and solution-only sampling. We show that Fractured Sampling consistently achieves superior accuracy-cost trade-offs, yielding steep log-linear scaling gains in Pass@k versus token budget.
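A minimal sketch of the interpolation idea, under assumed names: truncate the reasoning trace at several depths and sample one answer per prefix, trading trace length against sample count. The truncation schedule and the `generate_answer` callable are illustrative, not the paper's algorithm.

```python
# Hedged sketch: sample answers from reasoning prefixes truncated at several
# depths, interpolating between solution-only sampling (depth 0) and full CoT
# (depth 1.0) within a fixed token budget.
from typing import Callable

def fractured_sampling(prompt: str,
                       cot_trace: list[str],
                       generate_answer: Callable[[str], str],
                       depths: tuple[float, ...] = (0.25, 0.5, 1.0)) -> list[str]:
    """Sample one answer per truncated reasoning prefix."""
    answers = []
    for frac in depths:
        cutoff = int(len(cot_trace) * frac)
        prefix = prompt + "\n" + "\n".join(cot_trace[:cutoff])
        answers.append(generate_answer(prefix))
    return answers
```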
arXiv Detail & Related papers (2025-05-19T11:30:41Z)
- Reasoning by Superposition: A Theoretical Perspective on Chain of Continuous Thought [56.71873693264532]
We prove that a two-layer transformer with $D$ steps of continuous CoTs can solve the directed graph reachability problem. In our construction, each continuous thought vector is a superposition state that encodes multiple search frontiers simultaneously.
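A toy illustration of the superposition view, with illustrative data structures rather than the paper's transformer construction: one continuous thought vector encodes the whole search frontier of a directed graph, and a single step expands all branches in parallel.

```python
# Hedged sketch: encode a BFS frontier as one normalized sum of node
# embeddings, so a single continuous "thought" tracks many branches at once.
import numpy as np

def frontier_superposition(frontier: set[int],
                           node_embeddings: np.ndarray) -> np.ndarray:
    """Encode a set of frontier nodes as one continuous thought vector."""
    vec = node_embeddings[list(frontier)].sum(axis=0)
    return vec / np.linalg.norm(vec)

def expand_frontier(frontier: set[int],
                    adjacency: dict[int, set[int]]) -> set[int]:
    """One parallel search step: all successors of all frontier nodes."""
    return set().union(*(adjacency.get(n, set()) for n in frontier))
```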
arXiv Detail & Related papers (2025-05-18T18:36:53Z)
- T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT [73.10972809774039]
We present T2I-R1, a novel reasoning-enhanced text-to-image generation model powered by reinforcement learning. By applying our reasoning strategies to the baseline model, Janus-Pro, we achieve superior performance with 13% improvement on T2I-CompBench and 19% improvement on the WISE benchmark.
arXiv Detail & Related papers (2025-05-01T17:59:46Z)
- To Backtrack or Not to Backtrack: When Sequential Search Limits Model Reasoning [31.21491548356213]
Backtracking naturally scales test-time compute by enabling sequential, linearized exploration via long chain-of-thought (CoT) generation. Despite the growing adoption of sequential search, its advantages over parallel sampling remain poorly understood. We show that models with backtracking capabilities benefit significantly from RL fine-tuning, while models without backtracking see limited, mixed gains.
arXiv Detail & Related papers (2025-04-09T17:12:49Z)
- Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding [14.175444025026508]
Large language models (LLMs) have demonstrated remarkable capabilities in tasks requiring chain-of-thought (CoT) prompting.
However, generating the full CoT process results in significantly longer output sequences, leading to increased computational costs and latency during inference.
We propose a novel approach to compress the CoT process through semantic alignment, enabling more efficient decoding while preserving the benefits of CoT reasoning.
arXiv Detail & Related papers (2024-09-13T06:29:20Z)
- ChainLM: Empowering Large Language Models with Improved Chain-of-Thought Prompting [124.69672273754144]
Chain-of-Thought (CoT) prompting can enhance the reasoning capabilities of large language models (LLMs).
Existing CoT approaches usually focus on simpler reasoning tasks and thus result in low-quality and inconsistent CoT prompts.
We introduce CoTGenius, a novel framework designed for the automatic generation of superior CoT prompts.
arXiv Detail & Related papers (2024-03-21T11:34:26Z)
- Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data [20.68548644283721]
Chain-of-thought (CoT) advances the reasoning abilities of large language models (LLMs).
Most CoT studies rely on carefully designed, human-annotated rationale chains to prompt LLMs.
This paper proposes a new strategy that can bypass human engineering of CoT.
arXiv Detail & Related papers (2023-02-24T18:58:06Z)