A Formal Comparison between Datalog-based Languages for Stream Reasoning
(extended version)
- URL: http://arxiv.org/abs/2208.12726v1
- Date: Fri, 26 Aug 2022 15:27:21 GMT
- Title: A Formal Comparison between Datalog-based Languages for Stream Reasoning (extended version)
- Authors: Nicola Leone, Marco Manna, Maria Concetta Morelli, and Simona Perri
- Abstract summary: The paper investigates the relative expressiveness of two logic-based languages for reasoning over streams.
We show that, without any restrictions, the two languages are incomparable, and we identify fragments of each language that can be expressed via the other one.
- Score: 4.441335529279506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper investigates the relative expressiveness of two logic-based
languages for reasoning over streams, namely LARS Programs -- the language of
the Logic-based framework for Analytic Reasoning over Streams called LARS --
and LDSR -- the language of the recent extension of the I-DLV system for stream
reasoning called I-DLV-sr. Although these two languages build over Datalog,
they do differ both in syntax and semantics. To reconcile their expressive
capabilities for stream reasoning, we define a comparison framework that allows
us to show that, without any restrictions, the two languages are incomparable
and to identify fragments of each language that can be expressed via the other
one.
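For intuition only (this example uses the standard LARS notation from the stream-reasoning literature and is not taken from the paper itself): a LARS rule extends a plain Datalog rule with window and temporal operators, e.g.

$\mathit{alert} \leftarrow \boxplus^{5}\, \Diamond\, \mathit{highTemp}$

which derives $\mathit{alert}$ whenever, within a window over the last 5 time points of the stream, $\mathit{highTemp}$ holds at some time point; LDSR, per the abstract above, offers analogous window-based constructs through a different Datalog-like syntax and semantics.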
Related papers
- Transformer-based Language Models for Reasoning in the Description Logic ALCQ [2.8210912543324658]
We construct the natural language dataset, DELTA$_D$, using the expressive description logic language $\mathcal{ALCQ}$.
We investigate the logical reasoning capabilities of a supervised fine-tuned DeBERTa-based model and two large language models.
We show that the DeBERTa-based model fine-tuned on our dataset can master the entailment checking task.
arXiv Detail & Related papers (2024-10-12T18:25:34Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both the discourse level and the word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Logic of Differentiable Logics: Towards a Uniform Semantics of DL [1.1549572298362787]
Differentiable logics (DLs) have been proposed as a method of training neural networks to satisfy logical specifications.
This paper proposes a meta-language for defining DLs that we call the Logic of Differentiable Logics, or LDL.
We use LDL to establish several theoretical properties of existing DLs, and to conduct their empirical study in neural network verification.
arXiv Detail & Related papers (2023-03-19T13:03:51Z)
- MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z)
- LAE: Language-Aware Encoder for Monolingual and Multilingual ASR [87.74794847245536]
A novel language-aware encoder (LAE) architecture is proposed to handle both situations by disentangling language-specific information.
Experiments conducted on Mandarin-English code-switched speech suggest that the proposed LAE is capable of discriminating between different languages at the frame level.
arXiv Detail & Related papers (2022-06-05T04:03:12Z)
- Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding [90.87454350016121]
We develop novel code-switching schemes to generate hard negative examples for contrastive learning at all levels.
We develop a label-aware joint model to leverage label semantics for cross-lingual knowledge transfer.
arXiv Detail & Related papers (2022-05-07T13:44:28Z)
- Foundations of Symbolic Languages for Model Interpretability [2.3361634876233817]
We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable.
We present a prototype implementation of FOIL wrapped in a high-level declarative language.
arXiv Detail & Related papers (2021-10-05T21:56:52Z)
- Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves state-of-the-art performance, and both the logic-driven context extension framework and the data augmentation algorithm help improve accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z)
- CLAR: A Cross-Lingual Argument Regularizer for Semantic Role Labeling [17.756625082528142]
We propose a method called Cross-Lingual Argument Regularizer (CLAR).
CLAR identifies linguistic annotation similarity across languages and exploits this information to map the target language arguments.
Our experimental results show that CLAR consistently improves SRL performance on multiple languages over monolingual and polyglot baselines for low-resource languages.
arXiv Detail & Related papers (2020-11-09T20:16:57Z)