Fixed Point Semantics for Stream Reasoning
- URL: http://arxiv.org/abs/2005.08384v1
- Date: Sun, 17 May 2020 22:25:24 GMT
- Title: Fixed Point Semantics for Stream Reasoning
- Authors: Christian Antić
- Abstract summary: Stream reasoning has emerged as a research area within the AI community with many potential applications.
Recently, the rule-based formalism *LARS* for non-monotonic stream reasoning under the answer set semantics was introduced.
We show that our semantics is sound and constructive in the sense that answer sets are derivable bottom-up and free of circular justifications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning over streams of input data is an essential part of human intelligence.
During the last decade *stream reasoning* has emerged as a research area within the AI community with many potential applications.
In fact, the increased availability of streaming data via services like Google and Facebook has raised the need for reasoning engines that cope with data changing at a high rate.
Recently, the rule-based formalism *LARS* for non-monotonic stream reasoning under the answer set semantics was introduced.
Syntactically, LARS programs are logic programs with negation, incorporating operators for temporal reasoning, most notably *window operators* for selecting relevant time points.
Unfortunately, by preselecting *fixed* intervals for the semantic evaluation of programs, the rigid semantics of LARS programs is not flexible enough to *constructively* cope with rapidly changing data dependencies.
Moreover, we show that defining the answer set semantics of LARS in terms of FLP reducts leads to undesirable circular justifications, as in other ASP extensions.
This paper fixes all of the aforementioned shortcomings of LARS.
More precisely, we contribute to the foundations of stream reasoning by providing an operational fixed point semantics for a fully flexible variant of LARS, and we show that our semantics is sound and constructive in the sense that answer sets are derivable bottom-up and free of circular justifications.
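To make the syntax concrete, here is a schematic LARS-style rule using the standard LARS operators (the window operator ⊞, the temporal modality ◇ for "at some time point in the window", and default negation); the predicates are hypothetical and chosen only for illustration:

```latex
% Schematic LARS-style rule (illustrative; predicates are hypothetical).
% Reading: derive alert if, within the window of the last 2 time points,
% high held at some time point and maintenance is not derived.
\[
  \mathit{alert} \;\leftarrow\; \boxplus^{2} \Diamond\, \mathit{high},\;
  \mathrm{not}\ \mathit{maintenance}
\]
```

To see what "derivable bottom-up" means operationally, the following minimal Python sketch iterates the classical immediate-consequence operator T_P to its least fixed point on a positive, pre-grounded program. This is a standard textbook construction, not the paper's actual fixed point operator; the predicates and the pre-grounding of the window into timestamped atoms are assumptions made for the example:

```python
# Minimal sketch of bottom-up least-fixed-point computation (standard T_P
# iteration for a positive ground program); NOT the paper's actual operator.
# Atoms are (predicate, time) pairs; window operators are assumed to have
# been compiled away by grounding over explicit time points.

def tp_step(rules, interpretation):
    """One application of T_P: add heads of rules whose bodies hold."""
    derived = set(interpretation)
    for head, body in rules:
        if body <= interpretation:  # every body atom is already derived
            derived.add(head)
    return derived

def least_fixed_point(rules, facts):
    """Iterate T_P from the facts until nothing new is derivable."""
    current = set(facts)
    while True:
        nxt = tp_step(rules, current)
        if nxt == current:
            return current  # fixed point reached
        current = nxt

# Hypothetical stream: 'high' observed at times 1 and 2; one ground rule
# encoding "alert at 2 if high at 1 and high at 2" (a window of size 2).
facts = {("high", 1), ("high", 2)}
rules = [(("alert", 2), {("high", 1), ("high", 2)})]
print(least_fixed_point(rules, facts))  # contains ('alert', 2)
```

Because every iteration adds only atoms supported by previously derived atoms, an atom can never serve as its own justification; this is the intuition behind the claim that answer sets obtained as least fixed points are free of circular justifications.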
Related papers
- Causal Prompting for Implicit Sentiment Analysis with Large Language Models [21.39152516811571]
Implicit Sentiment Analysis (ISA) aims to infer sentiment that is implied rather than explicitly stated.
Recent prompting-based methods using Large Language Models (LLMs) have shown promise in ISA.
We propose CAPITAL, a causal prompting framework that incorporates front-door adjustment into CoT reasoning.
arXiv Detail & Related papers (2025-07-01T03:01:09Z)
- Perceptual Decoupling for Scalable Multi-modal Reasoning via Reward-Optimized Captioning [78.17782197231325]
We propose a reasoning-guided reinforcement learning strategy that aligns the extractor's captioning behavior with the reasoning objective.
Experiments on multi-modal math and science benchmarks show that the proposed RACRO method achieves state-of-the-art average performance.
arXiv Detail & Related papers (2025-06-05T02:28:07Z)
- AlignRAG: An Adaptable Framework for Resolving Misalignments in Retrieval-Aware Reasoning of RAG [61.28113271728859]
Retrieval-augmented generation (RAG) has emerged as a foundational paradigm for knowledge-grounded text generation.
Existing RAG pipelines often fail to ensure that the reasoning trajectories align with the evidential constraints imposed by retrieved content.
We propose AlignRAG, a novel test-time framework that mitigates reasoning misalignment through iterative Critique-Driven Alignment steps.
arXiv Detail & Related papers (2025-04-21T04:56:47Z)
- Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching [60.04718679054704]
Chain-of-Thought prompting elicits step-by-step problem solving, but often at the cost of excessive verbosity in intermediate outputs.
We propose Sketch-of-Thought (SoT), a prompting framework that integrates cognitively inspired reasoning paradigms with linguistic constraints.
SoT achieves token reductions of up to 78% with minimal accuracy loss across 15 reasoning datasets.
arXiv Detail & Related papers (2025-03-07T06:57:17Z)
- Reasoning-as-Logic-Units: Scaling Test-Time Reasoning in Large Language Models Through Logic Unit Alignment [21.12989936864145]
Chain-of-Thought (CoT) prompting has shown promise in enhancing the reasoning capabilities of large language models (LLMs).
We propose Reasoning-as-Logic-Units (RaLU), which constructs a more reliable reasoning path by aligning logical units between the generated program and their corresponding NL descriptions.
arXiv Detail & Related papers (2025-02-05T08:23:18Z)
- Exploring Language Model Generalization in Low-Resource Extractive QA [57.14068405860034]
We investigate Extractive Question Answering (EQA) with Large Language Models (LLMs) under domain drift.
We devise a series of experiments to empirically explain the performance gap.
arXiv Detail & Related papers (2024-09-27T05:06:43Z)
- Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension [9.67774998354062]
Previous research has primarily focused on enhancing logical reasoning capabilities through Chain-of-Thought (CoT) or data augmentation.
We propose a Premise-Oriented Data Augmentation (PODA) framework to generate CoT rationales including analyses for both correct and incorrect options.
We also introduce a novel thought-path contrastive learning method that compares reasoning paths between the original and counterfactual samples.
arXiv Detail & Related papers (2024-09-22T15:44:43Z)
- Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models [84.15513004135576]
Current research enhances the reasoning performance of Large Language Models (LLMs) by sampling multiple reasoning chains and ensembling based on the answer frequency.
This approach fails in scenarios where the correct answers are in the minority.
We introduce a hierarchical reasoning aggregation framework AoR, which selects answers based on the evaluation of reasoning chains.
arXiv Detail & Related papers (2024-05-21T17:12:19Z)
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs).
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches have been adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, whose structured outputs can be naturally regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, achieving higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work: we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
- REFINER: Reasoning Feedback on Intermediate Representations [47.36251998678097]
We introduce REFINER, a framework for finetuning language models to generate intermediate inferences.
REFINER works by interacting with a critic model that provides automated feedback on the reasoning.
Empirical evaluations show significant improvements over baseline LMs of comparable scale.
arXiv Detail & Related papers (2023-04-04T15:57:28Z)
- APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning [73.3035118224719]
We propose APOLLO, an adaptively pretrained language model that has improved logical reasoning abilities.
APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
arXiv Detail & Related papers (2022-12-19T07:40:02Z)
- Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing [57.11806632758607]
We propose to combine current pretrained language models with a hierarchical decoder network.
By taking the first-principle structures as the semantic anchors, we propose two novel intermediate supervision tasks.
We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines.
arXiv Detail & Related papers (2022-10-04T07:27:29Z)
- Chasing Streams with Existential Rules [18.660026838228625]
We study reasoning with existential rules to perform query answering over streams of data.
We extend LARS, a framework for rule-based stream reasoning, to support existential rules.
We show how to translate LARS with existentials into a semantics-preserving set of existential rules.
arXiv Detail & Related papers (2022-05-04T17:53:17Z)
- Faster than LASER -- Towards Stream Reasoning with Deep Neural Networks [0.6649973446180738]
Stream reasoners aim at bridging the gap between reasoning and stream processing.
LASER is a stream reasoner designed to analyse and perform complex reasoning over streams of data.
We study whether Convolutional and Recurrent Neural Networks, which have been shown to be particularly well-suited for time series forecasting and classification, can be trained to approximate reasoning with LASER.
arXiv Detail & Related papers (2021-06-15T22:06:12Z)