Abductive Computational Systems: Creative Abduction and Future Directions
- URL: http://arxiv.org/abs/2507.08264v1
- Date: Fri, 11 Jul 2025 02:21:41 GMT
- Title: Abductive Computational Systems: Creative Abduction and Future Directions
- Authors: Abhinav Sood, Kazjon Grace, Stephen Wan, Cecile Paris
- Abstract summary: Abductive reasoning is often mentioned in scientific, design-related and artistic contexts. This paper reviews how abductive reasoning is discussed in theoretical science and design, and then analyses how various computational systems use abductive reasoning.
- Score: 4.315465190486744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Abductive reasoning, the inference of explanations for observations, is often mentioned in scientific, design-related and artistic contexts, but its understanding varies across these domains. This paper reviews how abductive reasoning is discussed in epistemology, science and design, and then analyses how various computational systems use abductive reasoning. Our analysis shows that neither theoretical accounts nor computational implementations of abductive reasoning adequately address generating creative hypotheses. Theoretical frameworks do not provide a straightforward model for generating creative abductive hypotheses, and computational systems largely implement syllogistic forms of abductive reasoning. We break down abductive computational systems into components and conclude by identifying specific directions for future research that could advance the state of creative abductive reasoning in computational systems.
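To make the contrast concrete, the following minimal sketch (illustrative only, not code from the paper or the systems it surveys) shows abduction in its syllogistic form: given rules of the form "H implies O" and an observation O, propose every antecedent H that would explain O.

```python
# Minimal sketch of syllogistic abduction (illustrative only, not the paper's method):
# given rules of the form "H -> O" and an observation O, propose every H that,
# if true, would explain O.

def abduce(rules, observation):
    """Return candidate hypotheses whose rule concludes the given observation."""
    return [hypothesis for hypothesis, conclusion in rules if conclusion == observation]

rules = [
    ("it rained overnight", "the grass is wet"),
    ("the sprinkler ran", "the grass is wet"),
    ("the sun is out", "the grass is dry"),
]

print(abduce(rules, "the grass is wet"))
# -> ['it rained overnight', 'the sprinkler ran']
```

Creative abduction, as the paper argues, requires going beyond this enumeration of antecedents of already-known rules, i.e., proposing hypotheses that are not present in the rule base at all.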
Related papers
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z)
- Advancing Abductive Reasoning in Knowledge Graphs through Complex Logical Hypothesis Generation [43.26412690886471]
This paper introduces the task of complex logical hypothesis generation, as an initial step towards abductive logical reasoning with Knowledge Graph.
We find that the supervised-trained generative model can generate logical hypotheses that are structurally closer to the reference hypothesis.
We introduce the Reinforcement Learning from Knowledge Graph (RLF-KG) method, which minimizes differences between observations and conclusions drawn from generated hypotheses according to the KG.
arXiv Detail & Related papers (2023-12-25T08:06:20Z)
- Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement.
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
arXiv Detail & Related papers (2023-10-12T17:51:10Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Non-Axiomatic Term Logic: A Computational Theory of Cognitive Symbolic Reasoning [3.344997561878685]
Non-Axiomatic Term Logic (NATL) is a theoretical computational framework of humanlike symbolic reasoning in artificial intelligence.
NATL unites a discrete syntactic system inspired by Aristotle's term logic and a continuous semantic system based on the modern idea of distributed representations.
arXiv Detail & Related papers (2022-10-12T15:31:35Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semi-dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Joint Abductive and Inductive Neural Logical Reasoning [44.36651614420507]
We formulate the problem of joint abductive and inductive neural logical reasoning (AI-NLR).
First, we incorporate description logic-based ontological axioms to provide the source of concepts.
Then, we represent concepts and queries as fuzzy sets, i.e., sets whose elements have degrees of membership, to bridge concepts and queries with entities.
arXiv Detail & Related papers (2022-05-29T07:41:50Z)
- Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z)
- On Semantic Cognition, Inductive Generalization, and Language Models [0.2538209532048867]
My research focuses on understanding semantic knowledge in neural network models trained solely to predict natural language (referred to as language models, or LMs).
I propose a framework inspired by 'inductive reasoning,' a phenomenon that sheds light on how humans utilize background knowledge to make inductive leaps and generalize from new pieces of information about concepts and their properties.
arXiv Detail & Related papers (2021-11-04T03:19:52Z)
- Explaining AI as an Exploratory Process: The Peircean Abduction Model [0.2676349883103404]
Abductive inference has been defined in many ways.
The challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked.
This analysis provides a theoretical framework for understanding what XAI researchers are already doing.
arXiv Detail & Related papers (2020-09-30T17:10:37Z)
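As an aside on terminology, the "fuzzy sets" mentioned in the Joint Abductive and Inductive Neural Logical Reasoning entry above assign each entity a degree of membership rather than a hard in/out label; the sketch below (hypothetical names and values, not the authors' implementation) illustrates the idea, using the min t-norm for intersection.

```python
# Fuzzy sets: each entity carries a membership degree in [0, 1] instead of a
# hard in/out label. Names and values are hypothetical, for illustration only.

scientist = {"curie": 0.95, "einstein": 0.97, "turing": 0.90, "mozart": 0.05}
physicist = {"curie": 0.90, "einstein": 0.99, "turing": 0.20}

def fuzzy_intersection(a, b):
    """Combine two fuzzy sets with the min t-norm (a common choice)."""
    return {k: min(a.get(k, 0.0), b.get(k, 0.0)) for k in set(a) | set(b)}

print(fuzzy_intersection(scientist, physicist))
# e.g. 'einstein' keeps 0.97 (the smaller of 0.97 and 0.99); 'mozart' drops to 0.0
```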
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.