Towards a Mechanistic Understanding of Large Reasoning Models: A Survey of Training, Inference, and Failures
- URL: http://arxiv.org/abs/2601.19928v1
- Date: Sun, 11 Jan 2026 08:48:46 GMT
- Title: Towards a Mechanistic Understanding of Large Reasoning Models: A Survey of Training, Inference, and Failures
- Authors: Yi Hu, Jiaqi Gu, Ruxin Wang, Zijun Yao, Hao Peng, Xiaobao Wu, Jianhui Chen, Muhan Zhang, Liangming Pan
- Abstract summary: Large Reasoning Models (LRMs) have pushed reasoning capabilities to new heights. This paper organizes recent findings into three core dimensions: 1) training dynamics, 2) reasoning mechanisms, and 3) unintended behaviors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) has catalyzed the emergence of Large Reasoning Models (LRMs) that have pushed reasoning capabilities to new heights. While their performance has garnered significant excitement, exploring the internal mechanisms driving these behaviors has become an equally critical research frontier. This paper provides a comprehensive survey of the mechanistic understanding of LRMs, organizing recent findings into three core dimensions: 1) training dynamics, 2) reasoning mechanisms, and 3) unintended behaviors. By synthesizing these insights, we aim to bridge the gap between black-box performance and mechanistic transparency. Finally, we discuss under-explored challenges to outline a roadmap for future mechanistic studies, including the need for applied interpretability, improved methodologies, and a unified theoretical framework.
Related papers
- To Think or Not To Think, That is The Question for Large Reasoning Models in Theory of Mind Tasks [56.11584171938381]
Theory of Mind (ToM) assesses whether models can infer hidden mental states such as beliefs, desires, and intentions. Recent progress in Large Reasoning Models (LRMs) has boosted step-by-step inference in mathematics and coding. We present a systematic study of nine advanced Large Language Models (LLMs), comparing reasoning models with non-reasoning models.
arXiv Detail & Related papers (2026-02-11T08:16:13Z) - Reasoning Promotes Robustness in Theory of Mind Tasks [0.26945563448932225]
Large language models (LLMs) have recently shown strong performance on Theory of Mind (ToM) tests. This paper examines the behavior of such reasoning models in ToM tasks using novel adaptations of machine psychological experiments and results from established benchmarks.
arXiv Detail & Related papers (2026-01-23T16:01:24Z) - Mechanistic Interpretability for Large Language Model Alignment: Progress, Challenges, and Future Directions [16.821238326410324]
Large language models (LLMs) have achieved remarkable capabilities across diverse tasks, yet their internal decision-making processes remain largely opaque. Mechanistic interpretability has emerged as a critical research direction for understanding and aligning these models. We analyze how interpretability insights have informed alignment strategies including reinforcement learning from human feedback, constitutional AI, and scalable oversight.
arXiv Detail & Related papers (2026-01-21T11:43:57Z) - Beyond the Black Box: Theory and Mechanism of Large Language Models [39.10631426330405]
The rapid emergence of Large Language Models (LLMs) has precipitated a profound paradigm shift in Artificial Intelligence. This survey proposes a unified lifecycle-based taxonomy that organizes the research landscape into six distinct stages: Data Preparation, Model Preparation, Training, Alignment, Inference, and Evaluation.
arXiv Detail & Related papers (2026-01-06T10:45:53Z) - Beyond Description: Cognitively Benchmarking Fine-Grained Action for Embodied Agents [52.14392337070763]
We introduce CFG-Bench, a new benchmark designed to systematically evaluate fine-grained action intelligence. CFG-Bench consists of 1,368 curated videos paired with 19,562 three-modality question-answer pairs targeting four cognitive abilities. Our comprehensive evaluation on CFG-Bench reveals that leading MLLMs struggle to produce detailed instructions for physical interactions.
arXiv Detail & Related papers (2025-11-24T02:02:29Z) - A Survey on Large Language Models for Mathematical Reasoning [13.627895103752783]
This survey examines the development of mathematical reasoning abilities in large language models (LLMs). We review methods for enhancing mathematical reasoning, ranging from training-free prompting to fine-tuning approaches such as supervised fine-tuning and reinforcement learning. Despite notable progress, fundamental challenges remain in terms of capacity, efficiency, and generalization.
arXiv Detail & Related papers (2025-06-10T04:44:28Z) - Topological Structure Learning Should Be A Research Priority for LLM-Based Multi-Agent Systems [69.95482609893236]
Large Language Model-based Multi-Agent Systems (MASs) have emerged as a powerful paradigm for tackling complex tasks through collaborative intelligence. We call for a paradigm shift toward topology-aware MASs that explicitly model and dynamically optimize the structure of inter-agent interactions.
arXiv Detail & Related papers (2025-05-28T15:20:09Z) - A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond [88.5807076505261]
Large Reasoning Models (LRMs) have demonstrated strong performance gains by scaling up the length of Chain-of-Thought (CoT) reasoning during inference. A growing concern lies in their tendency to produce excessively long reasoning traces. This inefficiency introduces significant challenges for training, inference, and real-world deployment.
arXiv Detail & Related papers (2025-03-27T15:36:30Z) - Trade-offs in Large Reasoning Models: An Empirical Analysis of Deliberative and Adaptive Reasoning over Foundational Capabilities [101.77467538102924]
Recent advancements in Large Reasoning Models (LRMs) have demonstrated remarkable performance in specialized reasoning tasks. We show that acquiring deliberative reasoning capabilities significantly reduces the foundational capabilities of LRMs. We demonstrate that adaptive reasoning -- employing modes like Zero-Thinking, Less-Thinking, and Summary-Thinking -- can effectively alleviate these drawbacks.
arXiv Detail & Related papers (2025-03-23T08:18:51Z) - LogiDynamics: Unraveling the Dynamics of Inductive, Abductive and Deductive Logical Inferences in LLM Reasoning [74.0242521818214]
This paper systematically investigates the comparative dynamics of inductive (System 1) versus abductive/deductive (System 2) inference in large language models (LLMs). We utilize a controlled analogical reasoning environment, varying modality (textual, visual, symbolic), difficulty, and task format (MCQ / free-text). Our analysis reveals that System 2 pipelines generally excel, particularly in visual/symbolic modalities and harder tasks, while System 1 is competitive for textual and easier problems.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - Attention Heads of Large Language Models: A Survey [10.136767972375639]
We aim to demystify the internal reasoning processes of Large Language Models (LLMs) by systematically exploring the roles and mechanisms of attention heads. We first introduce a novel four-stage framework inspired by the human thought process: Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation. We analyze the experimental methodologies used to discover these special heads, dividing them into two categories: Modeling-Free and Modeling-Required methods.
arXiv Detail & Related papers (2024-09-05T17:59:12Z) - Understanding the Language Model to Solve the Symbolic Multi-Step Reasoning Problem from the Perspective of Buffer Mechanism [68.05754701230039]
We construct a symbolic multi-step reasoning task to investigate the information propagation mechanisms in Transformer models. We propose a random matrix-based algorithm to enhance the model's reasoning ability.
arXiv Detail & Related papers (2024-05-24T07:41:26Z)