NeSTR: A Neuro-Symbolic Abductive Framework for Temporal Reasoning in Large Language Models
- URL: http://arxiv.org/abs/2512.07218v1
- Date: Mon, 08 Dec 2025 06:58:23 GMT
- Title: NeSTR: A Neuro-Symbolic Abductive Framework for Temporal Reasoning in Large Language Models
- Authors: Feng Liang, Weixin Zeng, Runhao Zhao, Xiang Zhao
- Abstract summary: Neuro-Symbolic Temporal Reasoning (NeSTR) is a novel framework that integrates structured symbolic representations with hybrid reflective reasoning. NeSTR preserves explicit temporal relations through symbolic encoding, enforces logical consistency via verification, and corrects flawed inferences using abductive reflection.
- Score: 12.935644609836507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of natural language processing tasks. However, temporal reasoning, particularly under complex temporal constraints, remains a major challenge. To address this challenge, existing approaches have explored symbolic methods, which encode temporal structure explicitly, and reflective mechanisms, which revise reasoning errors through multi-step inference. Nonetheless, symbolic approaches often underutilize the reasoning capabilities of LLMs, while reflective methods typically lack structured temporal representations, which can result in inconsistent or hallucinated reasoning. As a result, even when the correct temporal context is available, LLMs may still misinterpret or misapply time-related information, leading to incomplete or inaccurate answers. To address these limitations, in this work, we propose Neuro-Symbolic Temporal Reasoning (NeSTR), a novel framework that integrates structured symbolic representations with hybrid reflective reasoning to enhance the temporal sensitivity of LLM inference. NeSTR preserves explicit temporal relations through symbolic encoding, enforces logical consistency via verification, and corrects flawed inferences using abductive reflection. Extensive experiments on diverse temporal question answering benchmarks demonstrate that NeSTR achieves superior zero-shot performance and consistently improves temporal reasoning without any fine-tuning, showcasing the advantage of neuro-symbolic integration in enhancing temporal understanding in large language models.
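The abstract names three stages (symbolic encoding, consistency verification, abductive reflection) but gives no implementation details. Below is a minimal sketch of how such a loop could be wired together; the TemporalFact structure, the verify check, and the llm callable are illustrative assumptions, not NeSTR's actual interfaces.

```python
# Hypothetical sketch of a NeSTR-style loop: symbolic encoding ->
# verification -> abductive reflection. Names and data structures are
# invented for illustration only.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class TemporalFact:
    subject: str
    relation: str
    obj: str
    start: int  # e.g. a year
    end: int

def verify(candidate: TemporalFact, facts: List[TemporalFact]) -> bool:
    """Consistency check: the candidate's interval must not overlap a
    known fact with the same subject/relation but a different object."""
    for f in facts:
        same_slot = f.subject == candidate.subject and f.relation == candidate.relation
        overlaps = not (candidate.end < f.start or candidate.start > f.end)
        if same_slot and overlaps and f.obj != candidate.obj:
            return False
    return True

def answer_with_reflection(question: str,
                           facts: List[TemporalFact],
                           llm: Callable[[str], TemporalFact],
                           max_rounds: int = 3) -> Optional[TemporalFact]:
    """Query the LLM over the symbolic facts; on a failed verification,
    feed the contradiction back (the abductive step) and retry."""
    prompt = f"Facts: {facts}\nQuestion: {question}"
    for _ in range(max_rounds):
        candidate = llm(prompt)
        if verify(candidate, facts):
            return candidate
        prompt += f"\nYour answer {candidate} contradicts the timeline; revise and answer again."
    return None
```

The property this sketch tries to capture is that the model's free-form reasoning is always checked against an explicit symbolic timeline before being accepted.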
Related papers
- Thinking with Drafts: Speculative Temporal Reasoning for Efficient Long Video Understanding [56.7383554589569]
Long video understanding is essential for human-like intelligence, enabling coherent perception and reasoning over extended temporal contexts. We propose SpecTemp, a reinforcement learning-based Speculative Temporal reasoning framework. We show that SpecTemp not only maintains competitive accuracy but also significantly accelerates inference compared with existing thinking-with-frames methods.
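The summary describes a draft-then-verify pattern over long videos. Purely as an illustration of that pattern (the draft_model and verify_model below are hypothetical stand-ins, not SpecTemp's RL-trained components):

```python
# Illustrative draft-then-verify loop: a cheap pass drafts an answer from
# sparsely sampled frames; a verifier checks it on the frames the draft
# flagged as salient, and dense reasoning runs only on rejection.
from typing import Callable, List, Tuple

def speculative_answer(frames: List[str],
                       draft_model: Callable[[List[str]], Tuple[str, List[int]]],
                       verify_model: Callable[[List[str], str], bool],
                       stride: int = 8) -> str:
    sparse = frames[::stride]              # cheap sparse pass
    answer, salient = draft_model(sparse)  # draft answer + salient frame indices
    dense = [frames[i] for i in salient if 0 <= i < len(frames)]
    if verify_model(dense, answer):
        return answer                      # draft accepted, dense pass skipped
    answer, _ = draft_model(frames)        # fallback: reason over all frames
    return answer
```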
arXiv Detail & Related papers (2025-11-30T09:27:59Z)
- Priors in Time: Missing Inductive Biases for Language Model Interpretability [58.07412640266836]
We show that Sparse Autoencoders impose priors that assume independence of concepts across time, implying stationarity. We introduce a new interpretability objective -- Temporal Feature Analysis -- which possesses a temporal inductive bias to decompose representations at a given time into two parts. Our results underscore the need for inductive biases that match the data in designing robust interpretability tools.
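As a concrete reading of "decompose representations at a given time into two parts", one could split each hidden state into a component predictable from its past and a residual. The linear predictor below is an illustrative stand-in, not the paper's objective.

```python
# Sketch: split X[t] into a past-predictable part and a "novel" residual.
import numpy as np

def temporal_decompose(X: np.ndarray, context: int = 4):
    """X: (T, d) array of per-timestep features. Returns (pred, novel)
    with X[t] ~ pred + novel for each t >= context."""
    T, _ = X.shape
    past = np.stack([X[t - context:t].ravel() for t in range(context, T)])
    cur = X[context:]
    W, *_ = np.linalg.lstsq(past, cur, rcond=None)  # linear predictor of the present
    pred = past @ W
    return pred, cur - pred
```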
arXiv Detail & Related papers (2025-11-03T18:43:48Z)
- LLM Interpretability with Identifiable Temporal-Instantaneous Representation [18.671694445771113]
We introduce an identifiable temporal causal representation learning framework specifically designed for Large Language Models. Our approach provides theoretical guarantees and demonstrates efficacy on synthetic datasets scaled to match real-world complexity.
arXiv Detail & Related papers (2025-09-27T14:14:41Z)
- T-ILR: a Neurosymbolic Integration for LTLf [47.316620315732024]
We propose a neurosymbolic framework to incorporate temporal logic specifications directly into deep learning architectures for sequence-based tasks. We name this proposed method Temporal Iterative Local Refinement (T-ILR).
arXiv Detail & Related papers (2025-08-21T20:24:20Z)
- T-CPDL: A Temporal Causal Probabilistic Description Logic for Developing Logic-RAG Agent [5.439020425819001]
Temporal Causal Probabilistic Description Logic (T-CPDL) is an integrated framework that extends Description Logic with temporal interval operators, explicit causal relationships, and probabilistic annotations. T-CPDL substantially improves inference accuracy, interpretability, and confidence calibration of language model outputs. This work also lays the groundwork for developing advanced Logic-Retrieval-Augmented Generation (Logic-RAG) frameworks.
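To make the three ingredients concrete (interval operators, causal links, probabilities), here is a toy encoding; all names are invented, since T-CPDL's actual syntax is not reproduced in this summary.

```python
# Toy encoding of interval-stamped, probability-annotated assertions with
# causal links, plus one Allen-style interval check for illustration.
from dataclasses import dataclass

@dataclass
class Assertion:
    subject: str
    concept: str
    start: float
    end: float
    prob: float  # probabilistic annotation in [0, 1]

def allen_before(a: Assertion, b: Assertion) -> bool:
    """Allen's BEFORE relation: a ends strictly before b starts."""
    return a.end < b.start

@dataclass
class CausalLink:
    cause: Assertion
    effect: Assertion
    prob: float

    def temporally_coherent(self) -> bool:
        # A cause should not begin after its effect does.
        return self.cause.start <= self.effect.start
```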
arXiv Detail & Related papers (2025-06-23T12:11:15Z)
- Learning to Reason Over Time: Timeline Self-Reflection for Improved Temporal Reasoning in Language Models [21.579319926212296]
Large Language Models (LLMs) have emerged as powerful tools for generating coherent text, understanding context, and performing reasoning tasks. However, they struggle with temporal reasoning, which requires processing time-related information such as event sequencing, durations, and inter-temporal relationships. We introduce TISER, a novel framework that enhances the temporal reasoning abilities of LLMs through a multi-stage process that combines timeline construction with iterative self-reflection.
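The first of TISER's stages, explicit timeline construction, can be pictured as simply ordering extracted events chronologically before handing them back to the model; a minimal sketch (the extraction step and prompts are TISER-specific and omitted):

```python
# Sketch of explicit timeline construction: chronological ordering of
# (time, event) pairs extracted from the context.
from typing import List, Tuple

def build_timeline(events: List[Tuple[int, str]]) -> str:
    return "\n".join(f"{year}: {desc}" for year, desc in sorted(events))

# e.g. build_timeline([(2004, "joined Acme"), (1999, "graduated")])
# -> "1999: graduated\n2004: joined Acme"
```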
arXiv Detail & Related papers (2025-04-07T16:51:45Z)
- The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context Learning [56.574829311863446]
Chain-of-Thought (CoT) prompting has been widely recognized for its ability to enhance reasoning capabilities in large language models (LLMs). We demonstrate that CoT and its reasoning variants consistently underperform direct answering across varying model scales and benchmark complexities. Our analysis uncovers a fundamental hybrid mechanism of explicit-implicit reasoning driving CoT's performance in pattern-based ICL.
arXiv Detail & Related papers (2025-04-07T13:51:06Z)
- Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic [51.967603572656266]
We introduce a consistent and theoretically grounded approach to annotating decompositional entailment.
We find that our new dataset, RDTE, has a substantially higher internal consistency (+9%) than prior decompositional entailment datasets.
We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in an entailment tree reasoning engine significantly improves both accuracy and proof quality.
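The distillation step mentioned above can be read as the standard recipe: a student classifier matches the teacher's softened entailment distribution. A generic sketch follows (the temperature and loss choices are conventional defaults, not necessarily the paper's):

```python
# Standard knowledge-distillation loss: KL divergence between
# temperature-softened teacher and student distributions over labels.
import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 T: float = 2.0) -> torch.Tensor:
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```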
arXiv Detail & Related papers (2024-02-22T18:55:17Z)
- Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z)
- Standpoint Linear Temporal Logic [2.552459629685159]
We present standpoint linear temporal logic (SLTL), a new logic that combines the temporal features of linear temporal logic (LTL) with the multi-perspective modelling capacity of standpoint logic (SL).
We define the logic SLTL, its syntax, and its semantics, establish its decidability and complexity, and provide a terminating tableau calculus to automate SLTL reasoning.
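For orientation, a plausible shape of the SLTL grammar, assuming the standard operators of LTL ("next", "until") and of standpoint logic (standpoint modalities); the paper's exact grammar may differ:

```latex
\varphi \;::=\; p \;\mid\; \neg\varphi \;\mid\; \varphi \wedge \varphi
  \;\mid\; \bigcirc\varphi \;\mid\; \varphi\,\mathcal{U}\,\varphi
  \;\mid\; \Box_{s}\varphi \;\mid\; \Diamond_{s}\varphi
```

Here \Box_{s}\varphi reads "according to standpoint s, it is unequivocal that \varphi", and \Diamond_{s} is its dual.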
arXiv Detail & Related papers (2023-04-27T15:03:38Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space.
Our approach is competitive with or better than most state-of-the-art strategies.
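For contrast with the implicit learning described above, the classical explicit route to phase-space reconstruction is a time-delay (Takens) embedding:

```python
# Classical time-delay embedding: map a scalar series x to vectors
# (x[t], x[t+tau], ..., x[t+(dim-1)*tau]), the standard explicit
# phase-space reconstruction that the network above learns implicitly.
import numpy as np

def delay_embed(x: np.ndarray, dim: int = 3, tau: int = 2) -> np.ndarray:
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
```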
arXiv Detail & Related papers (2020-06-19T21:04:47Z)