Explainable Rule Application via Structured Prompting: A Neural-Symbolic Approach
- URL: http://arxiv.org/abs/2506.16335v1
- Date: Thu, 19 Jun 2025 14:14:01 GMT
- Title: Explainable Rule Application via Structured Prompting: A Neural-Symbolic Approach
- Authors: Albert Sadowski, Jarosław A. Chudziak
- Abstract summary: Large Language Models (LLMs) excel in complex reasoning tasks but struggle with consistent rule application, exception handling, and explainability. This paper introduces a structured prompting framework that decomposes reasoning into three verifiable steps: entity identification, property extraction, and symbolic rule application.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) excel in complex reasoning tasks but struggle with consistent rule application, exception handling, and explainability, particularly in domains like legal analysis that require both natural language understanding and precise logical inference. This paper introduces a structured prompting framework that decomposes reasoning into three verifiable steps: entity identification, property extraction, and symbolic rule application. By integrating neural and symbolic approaches, our method leverages LLMs' interpretive flexibility while ensuring logical consistency through formal verification. The framework externalizes task definitions, enabling domain experts to refine logical structures without altering the architecture. Evaluated on the LegalBench hearsay determination task, our approach significantly outperformed baselines, with OpenAI o-family models showing substantial improvements - o1 achieving an F1 score of 0.929 and o3-mini reaching 0.867 using structured decomposition with complementary predicates, compared to their few-shot baselines of 0.714 and 0.74 respectively. This hybrid neural-symbolic system offers a promising pathway for transparent and consistent rule-based reasoning, suggesting potential for explainable AI applications in structured legal reasoning tasks.
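The abstract does not spell out the prompts or predicates, but the three-step decomposition can be made concrete with a minimal sketch. Everything below is an assumption for illustration: the LLM calls are stubbed with hand-written outputs, and the predicate names (`made_out_of_court`, `offered_for_truth`) merely echo the standard hearsay definition; only the final step is symbolic and deterministic, mirroring the division of labor the abstract describes.
```python
# Minimal sketch of the three-step decomposition described in the abstract.
# The LLM calls are stubbed out; prompts and predicate names are assumptions.

def identify_entities(fact_pattern: str) -> dict:
    """Step 1 (neural): ask the LLM to list the declarant and the statement.
    Stubbed here with a hand-written result for one LegalBench-style example."""
    return {
        "declarant": "the witness's neighbor",
        "statement": "I saw the defendant leave at midnight",
    }

def extract_properties(fact_pattern: str, entities: dict) -> dict:
    """Step 2 (neural): ask the LLM for boolean properties of the statement.
    The first two predicates are complementary on purpose, so the symbolic
    step can verify that the extraction is internally consistent."""
    return {
        "made_out_of_court": True,
        "made_in_court": False,          # complement of the predicate above
        "offered_for_truth": True,
    }

def apply_hearsay_rule(props: dict) -> bool:
    """Step 3 (symbolic): a deterministic rule a domain expert can inspect.
    Hearsay = an out-of-court statement offered to prove the truth of the
    matter asserted."""
    # Formal consistency check over the complementary predicates from step 2.
    assert props["made_out_of_court"] != props["made_in_court"], \
        "inconsistent extraction: predicates must be complementary"
    return props["made_out_of_court"] and props["offered_for_truth"]

fact = ("On the stand, a witness recounts what her neighbor told her "
        "about the night of the crime.")
entities = identify_entities(fact)
props = extract_properties(fact, entities)
print("hearsay?", apply_hearsay_rule(props))   # -> hearsay? True
```
Note how the complementary predicates give the symbolic step something to verify: if the neural extraction contradicts itself, the pipeline fails loudly instead of silently emitting an unexplained label.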
Related papers
- Speaking in Words, Thinking in Logic: A Dual-Process Framework in QA Systems [0.602276990341246]
Text-JEPA is a framework for converting natural language into first-order logic (NL2FOL). We show that Text-JEPA achieves competitive performance with significantly lower computational overhead compared to larger LLM-based systems. Our findings highlight the potential of structured, interpretable reasoning frameworks for building efficient and explainable QA systems in specialized domains. A toy illustration of the FOL target representation follows this entry.
arXiv Detail & Related papers (2025-07-28T03:00:35Z)
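Text-JEPA's joint-embedding architecture is not reproduced here, but the target representation of any NL2FOL system can be shown concretely: a sentence becomes a quantified formula that can be checked against a finite model. The sentence, domain, and relations below are invented for illustration.
```python
# Toy illustration of the FOL output an NL2FOL system targets.
# "Every manager approves some request" becomes
#   forall x. Manager(x) -> exists y. Request(y) & Approves(x, y)
# and can be evaluated over a small made-up domain.

domain = {"alice", "bob", "r1", "r2"}
manager = {"alice", "bob"}
request = {"r1", "r2"}
approves = {("alice", "r1"), ("bob", "r2")}

def formula_holds() -> bool:
    return all(
        any(y in request and (x, y) in approves for y in domain)
        for x in domain if x in manager
    )

print(formula_holds())  # True: both managers approve at least one request
```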
- From Language to Logic: A Bi-Level Framework for Structured Reasoning [6.075080928704587]
Structured reasoning over natural language inputs remains a core challenge in artificial intelligence. We propose a novel framework that maps language to logic through a two-stage process: high-level task abstraction and low-level logic generation. Our approach significantly outperforms existing baselines, with accuracy gains reaching as high as 40%. A sketch of the two-stage idea follows this entry.
arXiv Detail & Related papers (2025-07-11T11:24:09Z)
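As a hedged sketch of the bi-level idea (both stub outputs and the interface between the stages are assumptions, not the paper's prompts): stage one abstracts the task and fixes a symbol vocabulary, and stage two generates low-level logic constrained by that abstraction.
```python
# Sketch of a two-stage language-to-logic pipeline (stubs in place of LLM calls).

def abstract_task(question: str) -> dict:
    """Stage 1: high-level task abstraction -- identify the problem type and
    the symbols the low-level stage is allowed to use."""
    return {"task_type": "constraint_satisfaction",
            "symbols": ["older(x, y)", "person(x)"]}

def generate_logic(question: str, abstraction: dict) -> list[str]:
    """Stage 2: low-level logic generation, constrained by the abstraction."""
    assert abstraction["task_type"] == "constraint_satisfaction"
    return ["person(ann)", "person(bea)", "older(ann, bea)"]

question = "Ann is older than Bea. Who is oldest?"
abstraction = abstract_task(question)
print(generate_logic(question, abstraction))
```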
- CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection [60.98964268961243]
We propose that guiding models to perform a systematic and comprehensive reasoning process allows them to execute much finer-grained and more accurate entailment decisions. We define a three-step reasoning process consisting of (i) claim decomposition, (ii) sub-claim attribution and entailment classification, and (iii) aggregated classification, and show that such guided reasoning indeed yields improved hallucination detection. The aggregation step is sketched after this entry.
arXiv Detail & Related papers (2025-06-05T17:02:52Z)
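The third step of the CLATTER-style process, aggregated classification, is easy to make concrete. The decomposition and per-sub-claim labels below are hard-coded stand-ins for what an LLM would produce, and the any-unsupported-sub-claim rule is one plausible aggregation policy, not necessarily the paper's.
```python
# Sketch of step (iii): a claim is decomposed into sub-claims, each receives
# an entailment label against the source, and the labels are aggregated.

sub_claims = [
    ("the report was published in 2021", "entailed"),
    ("the report covers four countries", "not_entailed"),
]

def aggregate(labels: list[str]) -> str:
    """The full claim counts as hallucinated if any sub-claim is unsupported."""
    if any(lab == "not_entailed" for lab in labels):
        return "hallucinated"
    return "faithful"

print(aggregate([lab for _, lab in sub_claims]))  # -> hallucinated
```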
- Learning to Reason via Mixture-of-Thought for Logical Reasoning [56.24256916896427]
Mixture-of-Thought (MoT) is a framework that enables LLMs to reason across three complementary modalities: natural language, code, and truth tables. MoT adopts a two-phase design: (1) self-evolving MoT training, which jointly learns from filtered, self-generated rationales across modalities; and (2) MoT inference, which fully leverages the synergy of the three modalities to produce better predictions. A voting-style sketch of the inference phase follows this entry.
arXiv Detail & Related papers (2025-05-21T17:59:54Z)
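A minimal sketch of the inference phase: solve the same problem in each modality and combine the answers. The per-modality solvers are stubs, and simple majority voting is an assumption standing in for however MoT actually combines the three rationales.
```python
# Sketch of MoT-style inference: the same problem is answered in three
# modalities, and the predictions are combined by a simple majority vote.

def nl_rationale(premises): return True      # stub: verbal chain of thought
def code_rationale(premises): return True    # stub: execute generated code
def truth_table(premises): return False      # stub: enumerate assignments

def mot_predict(premises) -> bool:
    votes = [nl_rationale(premises), code_rationale(premises),
             truth_table(premises)]
    return votes.count(True) > len(votes) // 2   # majority of modalities

print(mot_predict(["p -> q", "p"]))  # -> True (2 of 3 modalities agree)
```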
- An Explicit Syllogistic Legal Reasoning Framework for Large Language Models [5.501226256903341]
Large language models (LLMs) can answer legal questions but often struggle with explicit syllogistic reasoning. We introduce SyLeR, a novel framework designed to enable LLMs to perform explicit syllogistic legal reasoning. SyLeR employs a tree-structured hierarchical retrieval mechanism to synthesize relevant legal statutes and precedents. The syllogism structure is sketched after this entry.
arXiv Detail & Related papers (2025-04-05T03:34:51Z)
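The explicit syllogism SyLeR targets has a fixed shape: a major premise (retrieved statute or precedent), a minor premise (case facts), and a conclusion. The dataclass and example content below are illustrative, not SyLeR's actual output format.
```python
# Sketch of the three-part syllogism an explicit legal-reasoning framework
# asks the model to produce; the content here is a made-up example.

from dataclasses import dataclass

@dataclass
class Syllogism:
    major: str       # retrieved statute or precedent
    minor: str       # facts of the case at hand
    conclusion: str  # application of the major premise to the minor

s = Syllogism(
    major="Whoever takes another's property without consent commits theft.",
    minor="The defendant took the plaintiff's bicycle without consent.",
    conclusion="Therefore, the defendant committed theft.",
)
print(s.conclusion)
```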
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning. We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines, and investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Proof of Thought: Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning [1.3003982724617653]
Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning.
This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs.
Key contributions include a robust type system with sort management for enhanced logical integrity, and an explicit representation of rules that cleanly separates factual from inferential knowledge. The sort-management idea is sketched after this entry.
arXiv Detail & Related papers (2024-09-25T18:35:45Z)
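A flavor of "a robust type system with sort management" can be given with Z3's Python bindings (the z3-solver package). The sorts, predicates, and rule below are invented, and using Z3 this way is an assumption about the backend rather than Proof of Thought's actual DSL; the point is that declared sorts reject ill-typed facts, and the rule is represented explicitly, apart from the factual assertions.
```python
# Sort management plus an explicit rule, sketched with Z3's Python API.
# All sorts, predicates, and facts are invented for illustration.

from z3 import (DeclareSort, Const, Function, BoolSort, ForAll, Implies,
                Not, Solver)

Person = DeclareSort("Person")               # a declared sort: type safety
Witness = Function("Witness", Person, BoolSort())
CanTestify = Function("CanTestify", Person, BoolSort())

x = Const("x", Person)
alice = Const("alice", Person)

s = Solver()
s.add(ForAll([x], Implies(Witness(x), CanTestify(x))))  # inferential rule
s.add(Witness(alice))                                   # factual knowledge
s.add(Not(CanTestify(alice)))                           # attempt a contradiction
print(s.check())  # -> unsat: the explicit rule forces CanTestify(alice)
```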
- Logic Agent: Enhancing Validity with Logic Rule Invocation [24.815341366820753]
Chain-of-Thought prompting has emerged as a pivotal technique for augmenting the inferential capabilities of language models during reasoning tasks. This paper introduces the Logic Agent (LA), an agent-based framework aimed at enhancing the validity of reasoning processes in Large Language Models. A rule-invocation sketch follows this entry.
arXiv Detail & Related papers (2024-04-28T10:02:28Z)
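Rule invocation can be sketched as applying named, auditable inference rules to a working set of propositions instead of reasoning in free-form text. The triple encoding and single-rule inventory below are assumptions for illustration.
```python
# Sketch of logic-rule invocation: an agent derives new facts only by
# applying named rules, so every step in the trace is auditable.

facts = {("p",), ("p", "implies", "q")}

def modus_ponens(facts: set) -> set:
    """If both `p` and `p implies q` are present, derive `q`."""
    derived = set()
    for f in facts:
        if len(f) == 3 and f[1] == "implies" and (f[0],) in facts:
            derived.add((f[2],))
    return derived

facts |= modus_ponens(facts)
print(("q",) in facts)  # True: q was derived by an explicit, named rule
```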
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that handles context at both the discourse and word levels as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- A Formalisation of Abstract Argumentation in Higher-Order Logic [77.34726150561087]
We present an approach for representing abstract argumentation frameworks based on an encoding into classical higher-order logic. This provides a uniform framework for computer-assisted assessment of abstract argumentation frameworks using interactive and automated reasoning tools. A ground-level sketch of the encoded semantics follows this entry.
arXiv Detail & Related papers (2021-10-18T10:45:59Z)
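Both argumentation papers above encode frameworks into higher-order logic for proof assistants; as a ground-level illustration of what gets encoded, the sketch below checks Dung's conflict-freeness and admissibility conditions for a tiny made-up attack graph directly in Python.
```python
# A Dung abstract argumentation framework is a set of arguments plus an
# attack relation. A set S is conflict-free if no member attacks another,
# and admissible if it also defends each member against every attacker.
# Tiny invented example: a attacks b, b attacks c.

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}

def conflict_free(S: set) -> bool:
    return not any((x, y) in attacks for x in S for y in S)

def defends(S: set, arg: str) -> bool:
    """Every attacker of `arg` is itself attacked by some member of S."""
    return all(any((d, atk) in attacks for d in S)
               for (atk, tgt) in attacks if tgt == arg)

def admissible(S: set) -> bool:
    return conflict_free(S) and all(defends(S, a) for a in S)

print(admissible({"a", "c"}))  # True: a defends c against b
print(admissible({"b"}))       # False: nothing defends b from a
```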