Neurosymbolic Language Reasoning as Satisfiability Modulo Theory
- URL: http://arxiv.org/abs/2602.18095v1
- Date: Fri, 20 Feb 2026 09:35:26 GMT
- Title: Neurosymbolic Language Reasoning as Satisfiability Modulo Theory
- Authors: Hyunseok Oh, Sam Stern, Youngki Lee, Matthai Philipose,
- Abstract summary: We introduce Logitext, a neurosymbolic language that represents documents as natural language text constraints (NLTCs). We develop an algorithm that integrates LLM-based constraint evaluation with satisfiability modulo theory (SMT) solving, enabling joint textual-logical reasoning.
- Score: 17.539850467483323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural language understanding requires interleaving textual and logical reasoning, yet large language models often fail to perform such reasoning reliably. Existing neurosymbolic systems combine LLMs with solvers but remain limited to fully formalizable tasks such as math or program synthesis, leaving natural documents with only partial logical structure unaddressed. We introduce Logitext, a neurosymbolic language that represents documents as natural language text constraints (NLTCs), making partial logical structure explicit. We develop an algorithm that integrates LLM-based constraint evaluation with satisfiability modulo theory (SMT) solving, enabling joint textual-logical reasoning. Experiments on a new content moderation benchmark, together with LegalBench and Super-Natural Instructions, show that Logitext improves both accuracy and coverage. This work is the first that treats LLM-based reasoning as an SMT theory, extending neurosymbolic methods beyond fully formalizable domains.
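The abstract describes the core idea at a high level: LLM judgments over natural language constraints are treated as a theory inside SMT solving. The sketch below is a hypothetical illustration of that idea, not the authors' Logitext implementation. It assumes the z3-solver Python package; the constraint texts, the policy rule, and the llm_evaluate stub are all invented for illustration, with the stub standing in for a real LLM call.

```python
# Illustrative sketch only (not the authors' Logitext implementation):
# treat LLM verdicts on natural language text constraints (NLTCs) as
# ground facts handed to an SMT solver (Z3 via the z3-solver package).
from z3 import And, Bool, Implies, Not, Solver, sat

def llm_evaluate(constraint: str, document: str) -> bool:
    """Hypothetical stand-in for an LLM call that judges whether the
    document satisfies a natural-language constraint. Hard-coded here."""
    canned = {
        "The message asks for a refund.": True,
        "The message contains abusive language.": True,
    }
    return canned.get(constraint, False)

document = "The user demands a refund and uses abusive language."

# Each NLTC becomes a Boolean atom whose truth value comes from the LLM.
nltcs = {
    "asks_refund": "The message asks for a refund.",
    "is_abusive": "The message contains abusive language.",
}
atoms = {name: Bool(name) for name in nltcs}

solver = Solver()

# Fully symbolic layer: abusive refund requests must be escalated.
escalate = Bool("escalate")
solver.add(Implies(And(atoms["asks_refund"], atoms["is_abusive"]), escalate))

# Ground the atoms with LLM verdicts -- the "LLM theory" in SMT terms.
for name, text in nltcs.items():
    verdict = llm_evaluate(text, document)
    solver.add(atoms[name] if verdict else Not(atoms[name]))

# Is the policy satisfiable without escalating this document?
solver.push()
solver.add(Not(escalate))
print("consistent without escalation:", solver.check() == sat)  # False
solver.pop()
```

A real integration would likely interleave solver search with LLM queries rather than grounding every atom up front, but the sketch shows how textual verdicts and formal constraints can be checked jointly.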
Related papers
- Last Layer Logits to Logic: Empowering LLMs with Logic-Consistent Structured Knowledge Reasoning [55.55968342644846]
Large Language Models (LLMs) achieve excellent performance in natural language reasoning tasks through pre-training on vast unstructured text. We propose the Logits-to-Logic framework, which incorporates logits strengthening and logits filtering as core modules to correct logical defects in LLM outputs.
arXiv Detail & Related papers (2025-11-11T07:08:27Z)
- Sound and Complete Neurosymbolic Reasoning with LLM-Grounded Interpretations [7.81820080453498]
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation. We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic.
arXiv Detail & Related papers (2025-07-13T19:05:43Z)
- Learning to Reason via Mixture-of-Thought for Logical Reasoning [56.24256916896427]
Mixture-of-Thought (MoT) is a framework that enables LLMs to reason across three complementary modalities: natural language, code, and truth tables. MoT adopts a two-phase design: (1) self-evolving MoT training, which jointly learns from filtered, self-generated rationales across modalities; and (2) MoT inference, which fully leverages the synergy of the three modalities to produce better predictions.
arXiv Detail & Related papers (2025-05-21T17:59:54Z)
- Improving Chain-of-Thought Reasoning via Quasi-Symbolic Abstractions [25.337811496479265]
Chain-of-Thought (CoT) represents a common strategy for reasoning in Large Language Models. We present QuaSAR, a variation of CoT that guides LLMs to operate at a higher level of abstraction via quasi-symbolic explanations. Our experiments show that quasi-symbolic abstractions can improve CoT-based methods by up to 8% in accuracy.
arXiv Detail & Related papers (2025-02-18T07:58:48Z)
- FSLI: An Interpretable Formal Semantic System for One-Dimensional Ordering Inference [0.9048611509540079]
We develop a system for solving one-dimensional ordering problems in logical deduction. It transforms natural language premises and candidate statements into first-order logic (a toy SMT encoding of this idea is sketched after this list). It achieves 100% accuracy on BIG-bench's logical deduction task and 88% on a syntactically simplified subset of AR-LSAT.
arXiv Detail & Related papers (2025-02-12T13:58:42Z)
- Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning [28.111458981621105]
Complex logical reasoning tasks require a long sequence of reasoning steps, on which a large language model (LLM) with chain-of-thought prompting still falls short. We propose Compositional First-Order Logic Translation to capture the logical semantics hidden in natural language during translation. We evaluate the proposed method, dubbed CLOVER, on seven logical reasoning benchmarks and show that it outperforms previous neurosymbolic approaches.
arXiv Detail & Related papers (2024-10-10T15:42:39Z)
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But, can they really "reason" over the natural language?
This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, and qualitative have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty in bridging perception, language understanding and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)
- Logical Natural Language Generation from Open-Domain Tables [107.04385677577862]
We propose a new task in which a model generates natural language statements that can be logically entailed by the facts.
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences.
The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order.
arXiv Detail & Related papers (2020-04-22T06:03:10Z)
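As referenced in the FSLI entry above, the translation of one-dimensional ordering premises into first-order logic can be illustrated with the same SMT machinery. The toy puzzle, entity names, and the choice of Z3 below are my own illustration rather than the FSLI system: premises become integer-position constraints, and a candidate statement is entailed exactly when the premises conjoined with its negation are unsatisfiable.

```python
# Minimal sketch (toy puzzle of my own, not FSLI's pipeline): encode a
# one-dimensional ordering puzzle as integer constraints in Z3 and check
# whether a candidate statement is entailed by the premises.
from z3 import And, Distinct, Int, Not, Solver, unsat

# Positions on a line, 1 = leftmost.
alice, bob, carol = Int("alice"), Int("bob"), Int("carol")

premises = And(
    Distinct(alice, bob, carol),
    alice >= 1, alice <= 3, bob >= 1, bob <= 3, carol >= 1, carol <= 3,
    alice < bob,   # "Alice is to the left of Bob."
    carol > bob,   # "Carol is to the right of Bob."
)

candidate = alice < carol  # "Alice is to the left of Carol."

# Entailment check: premises AND NOT(candidate) must be unsatisfiable.
solver = Solver()
solver.add(premises, Not(candidate))
print("entailed:", solver.check() == unsat)  # True
```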
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.