A Survey of Inductive Reasoning for Large Language Models
- URL: http://arxiv.org/abs/2510.10182v1
- Date: Sat, 11 Oct 2025 11:45:38 GMT
- Title: A Survey of Inductive Reasoning for Large Language Models
- Authors: Kedi Chen, Dezhao Ruan, Yuhao Dan, Yaoting Wang, Siyu Yan, Xuecheng Wu, Yinqi Zhang, Qin Chen, Jie Zhou, Liang He, Biqing Qi, Linyang Li, Qipeng Guo, Xiaoming Shi, Wei Zhang
- Abstract summary: The inductive mode is crucial for knowledge generalization and aligns better with human cognition. Despite the importance of inductive reasoning, there is no systematic summary of it. This paper presents the first comprehensive survey of inductive reasoning for large language models.
- Score: 55.23215679173251
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reasoning is an important task for large language models (LLMs). Among reasoning paradigms, inductive reasoning is one of the fundamental types, characterized by its particular-to-general thinking process and the non-uniqueness of its answers. The inductive mode is crucial for knowledge generalization and aligns closely with human cognition, making it a fundamental mode of learning that has attracted increasing interest. Despite its importance, there is no systematic summary of inductive reasoning. This paper therefore presents the first comprehensive survey of inductive reasoning for LLMs. First, methods for improving inductive reasoning are categorized into three main areas: post-training, test-time scaling, and data augmentation. Then, current benchmarks of inductive reasoning are summarized, and a unified sandbox-based evaluation approach with an observation coverage metric is derived. Finally, we offer analyses of the source of inductive ability and of how simple model architectures and data help with inductive tasks, providing a solid foundation for future research.
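The abstract's "observation coverage" metric can be sketched informally: in a sandbox evaluation, a rule induced by the model is scored by the fraction of held-out observations it reproduces. The function name, signature, and example below are illustrative assumptions, not the survey's actual implementation.

```python
# Hypothetical sketch of an observation-coverage metric for sandbox-based
# evaluation of inductive reasoning. An induced rule (a callable) is applied
# to each observed input, and coverage is the fraction of observations whose
# recorded output the rule reproduces. All names here are illustrative.

def observation_coverage(rule, observations):
    """Fraction of (input, output) observations the candidate rule explains."""
    if not observations:
        return 0.0
    explained = sum(1 for x, y in observations if rule(x) == y)
    return explained / len(observations)

# Example: the model induces "double the input" from particular cases,
# but one held-out observation violates that rule.
induced_rule = lambda x: 2 * x
obs = [(1, 2), (2, 4), (3, 6), (4, 7)]
print(observation_coverage(induced_rule, obs))  # 0.75
```

Because inductive answers are non-unique, a coverage-style metric rewards any hypothesis consistent with the observations rather than a single gold answer, which matches the particular-to-general framing above.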
Related papers
- Unifying Deductive and Abductive Reasoning in Knowledge Graphs with Masked Diffusion Model [64.31242163019242]
Deductive and abductive reasoning are critical paradigms for analyzing knowledge graphs. We propose a unified framework for Deductive and Abductive Reasoning in Knowledge graphs, called DARK. We show that DARK achieves state-of-the-art performance on both deductive and abductive reasoning tasks.
arXiv Detail & Related papers (2025-10-13T14:34:57Z) - Thinking in Many Modes: How Composite Reasoning Elevates Large Language Model Performance with Limited Data [1.7194419006128259]
Composite Reasoning (CR) is a novel reasoning approach empowering Large Language Models (LLMs) to explore and combine multiple reasoning styles. CR is evaluated on scientific and medical question-answering benchmarks. Our findings highlight that by cultivating internal reasoning-style diversity, LLMs acquire more robust, adaptive, and efficient problem-solving abilities.
arXiv Detail & Related papers (2025-09-26T11:38:03Z) - Language Models Do Not Follow Occam's Razor: A Benchmark for Inductive and Abductive Reasoning [6.06071622429429]
This work focuses on evaluating large language models' inductive and abductive reasoning capabilities. We introduce a programmable and synthetic dataset, InAbHyD, where each reasoning example consists of an incomplete world model and a set of observations. We propose a new metric, based on Occam's Razor, to evaluate the quality of hypotheses.
arXiv Detail & Related papers (2025-09-03T14:22:42Z) - JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models [51.99046112135311]
We introduce JustLogic, a synthetically generated deductive reasoning benchmark for rigorous evaluation of Large Language Models (LLMs). JustLogic is highly complex, capable of generating a diverse range of linguistic patterns, vocabulary, and argument structures. Our experimental results reveal that state-of-the-art (SOTA) reasoning LLMs perform on par with or better than the human average but significantly worse than the human ceiling.
arXiv Detail & Related papers (2025-01-24T15:49:10Z) - MIRAGE: Evaluating and Explaining Inductive Reasoning Process in Language Models [19.81485079689837]
We evaluate large language models' capabilities in the inductive and deductive stages. We find that the models tend to consistently conduct correct deduction without correct inductive rules. In the inductive reasoning process, the model tends to focus on observed facts that are close to the current test example in feature space.
arXiv Detail & Related papers (2024-10-12T14:12:36Z) - Comparing Inferential Strategies of Humans and Large Language Models in Deductive Reasoning [25.732397636695882]
We show that large language models (LLMs) display reasoning patterns akin to those observed in humans.
Our research demonstrates that the architecture and scale of the model significantly affect its preferred method of reasoning.
arXiv Detail & Related papers (2024-02-20T12:58:14Z) - Contrastive Learning for Inference in Dialogue [56.20733835058695]
Inference, especially that derived from inductive processes, is a crucial component of our conversations.
Recent large language models show remarkable advances in inference tasks.
However, their performance on inductive reasoning, where not all information is present in the context, lags far behind their deductive reasoning.
arXiv Detail & Related papers (2023-10-19T04:49:36Z) - Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement.
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
arXiv Detail & Related papers (2023-10-12T17:51:10Z) - Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge-distillation fine-tuning techniques to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.