Fixed Point Semantics for Stream Reasoning
- URL: http://arxiv.org/abs/2005.08384v1
- Date: Sun, 17 May 2020 22:25:24 GMT
- Title: Fixed Point Semantics for Stream Reasoning
- Authors: Christian Antić
- Abstract summary: Stream reasoning has emerged as a research area within the AI community with many potential applications.
The rule-based formalism LARS for non-monotonic stream reasoning under the answer set semantics has been introduced.
We show that our semantics is sound and constructive in the sense that answer sets are derivable bottom-up and free of circular justifications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning over streams of input data is an essential part of human
intelligence. During the last decade stream reasoning has emerged as a
research area within the AI community with many potential applications. In
fact, the increased availability of streaming data via services like Google and
Facebook has raised the need for reasoning engines coping with data that
changes at a high rate. Recently, the rule-based formalism LARS for
non-monotonic stream reasoning under the answer set semantics has been
introduced. Syntactically, LARS programs are logic programs with negation
incorporating operators for temporal reasoning, most notably window
operators for selecting relevant time points. Unfortunately, by preselecting
fixed intervals for the semantic evaluation of programs, the rigid
semantics of LARS programs is not flexible enough to constructively cope
with rapidly changing data dependencies. Moreover, we show that defining the
answer set semantics of LARS in terms of FLP reducts leads to undesirable
circular justifications similar to other ASP extensions. This paper fixes all
of the aforementioned shortcomings of LARS. More precisely, we contribute to
the foundations of stream reasoning by providing an operational fixed point
semantics for a fully flexible variant of LARS, and we show that our semantics
is sound and constructive in the sense that answer sets are derivable bottom-up
and free of circular justifications.
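The bottom-up derivability the abstract describes can be illustrated with a minimal fixed point iteration of an immediate-consequence operator over a positive (negation-free) logic program. This is a simplified sketch for intuition only; the paper's operational semantics extends such an iteration to LARS programs with window operators and negation.

```python
# Sketch: iterate the immediate-consequence operator T_P of a positive logic
# program until a fixed point is reached. Every atom in the result has a
# non-circular, bottom-up derivation. Simplified illustration; the paper's
# operator additionally handles LARS window operators and negation.

def fixed_point(rules):
    """rules: list of (head, [body atoms]). Returns the least model."""
    interpretation = set()
    while True:
        # T_P: heads of all rules whose bodies are satisfied so far
        derived = {head for head, body in rules
                   if all(atom in interpretation for atom in body)}
        if derived <= interpretation:  # nothing new: fixed point reached
            return interpretation
        interpretation |= derived

# Example program: p <- ; q <- p ; r <- q, p
program = [("p", []), ("q", ["p"]), ("r", ["q", "p"])]
print(sorted(fixed_point(program)))  # -> ['p', 'q', 'r']
```

Each atom is added only after its rule body is already derived, which is exactly the sense in which the resulting model is free of circular justifications.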
Related papers
- FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering [46.41364317172677]
We propose a retrieval-exploration interactive method, FiDeLiS, to handle intermediate steps of reasoning grounded by external knowledge graphs.
We incorporate the logic and common-sense reasoning of LLMs into the knowledge retrieval process, which yields more accurate recall performance.
arXiv Detail & Related papers (2024-05-22T17:56:53Z) - Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models [84.15513004135576]
Current research enhances the reasoning performance of Large Language Models (LLMs) by sampling multiple reasoning chains and ensembling based on the answer frequency.
This approach fails in scenarios where the correct answers are in the minority.
We introduce a hierarchical reasoning aggregation framework AoR, which selects answers based on the evaluation of reasoning chains.
arXiv Detail & Related papers (2024-05-21T17:12:19Z) - DetermLR: Augmenting LLM-based Logical Reasoning from Indeterminacy to Determinacy [76.58614128865652]
We propose DetermLR, a novel perspective that rethinks the reasoning process as an evolution from indeterminacy to determinacy.
First, we categorize known conditions into two types: determinate and indeterminate premises. This provides an overall direction for the reasoning process and guides LLMs in converting indeterminate data into progressively determinate insights.
We automate the storage and extraction of available premises and reasoning paths with reasoning memory, preserving historical reasoning details for subsequent reasoning steps.
arXiv Detail & Related papers (2023-10-28T10:05:51Z) - Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings [61.04460792203266]
We introduce VCoT, a novel method that leverages chain-of-thought prompting with vision-language grounding to bridge the logical gaps within sequential data.
Our method uses visual guidance to generate synthetic multimodal infillings that add consistent and novel information to reduce the logical gaps for downstream tasks.
arXiv Detail & Related papers (2023-05-03T17:58:29Z) - REFINER: Reasoning Feedback on Intermediate Representations [47.36251998678097]
We introduce REFINER, a framework for finetuning language models to generate intermediate inferences.
REFINER works by interacting with a critic model that provides automated feedback on the reasoning.
Empirical evaluations show significant improvements over baseline LMs of comparable scale.
arXiv Detail & Related papers (2023-04-04T15:57:28Z) - LAMBADA: Backward Chaining for Automated Reasoning in Natural Language [11.096348678079574]
The backward chaining algorithm LAMBADA decomposes reasoning into four sub-modules.
We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods.
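To make the contrast with forward reasoning concrete, here is a generic propositional backward-chaining sketch: it starts from the goal and recursively tries to prove rule bodies. This illustrates the backward-chaining idea only, not LAMBADA's four sub-modules, which operate over natural-language rules via an LLM.

```python
# Generic backward chaining over propositional rules: work from the goal back
# to known facts, instead of deriving all consequences forward. Illustrative
# sketch only; not LAMBADA's actual (LLM-based) sub-modules.

def prove(goal, rules, facts, depth=0, max_depth=10):
    """rules: list of (head, [body atoms]); facts: set of known atoms."""
    if depth > max_depth:  # guard against cyclic rule sets
        return False
    if goal in facts:
        return True
    # goal is provable if some rule with this head has a fully provable body
    return any(all(prove(b, rules, facts, depth + 1, max_depth) for b in body)
               for head, body in rules if head == goal)

rules = [("mortal", ["human"])]
facts = {"human"}
print(prove("mortal", rules, facts))  # -> True
```

Because the search is goal-directed, rules irrelevant to the query are never explored, which is the usual efficiency argument for backward over forward chaining.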
arXiv Detail & Related papers (2022-12-20T18:06:03Z) - APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning [73.3035118224719]
We propose APOLLO, an adaptively pretrained language model that has improved logical reasoning abilities.
APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
arXiv Detail & Related papers (2022-12-19T07:40:02Z) - Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing [57.11806632758607]
We propose to augment current pretrained language models with a hierarchical decoder network.
By taking the first-principle structures as the semantic anchors, we propose two novel intermediate supervision tasks.
We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines.
arXiv Detail & Related papers (2022-10-04T07:27:29Z) - Chasing Streams with Existential Rules [18.660026838228625]
We study reasoning with existential rules to perform query answering over streams of data.
We extend LARS, a framework for rule-based stream reasoning, to support existential rules.
We show how to translate LARS with existentials into a semantics-preserving set of existential rules.
arXiv Detail & Related papers (2022-05-04T17:53:17Z) - Faster than LASER -- Towards Stream Reasoning with Deep Neural Networks [0.6649973446180738]
Stream Reasoners aim at bridging this gap between reasoning and stream processing.
LASER is a stream reasoner designed to analyse and perform complex reasoning over streams of data.
We study whether Convolutional and Recurrent Neural Networks, which have been shown to be particularly well-suited for time series forecasting and classification, can be trained to approximate reasoning with LASER.
arXiv Detail & Related papers (2021-06-15T22:06:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.