iWarded: A System for Benchmarking Datalog+/- Reasoning (technical
report)
- URL: http://arxiv.org/abs/2103.08588v1
- Date: Mon, 15 Mar 2021 17:56:46 GMT
- Title: iWarded: A System for Benchmarking Datalog+/- Reasoning (technical
report)
- Authors: Teodoro Baldazzi (Università Roma Tre), Luigi Bellomarini (Banca d'Italia), Emanuel Sallinger (University of Oxford and TU Wien), Paolo Atzeni (Università Roma Tre)
- Abstract summary: iWarded is a system that can generate very large, complex, realistic reasoning settings.
We present the iWarded system and a set of novel theoretical results adopted to generate effective scenarios.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen increasing popularity of logic-based reasoning
systems, with research and industrial interest as well as many flourishing
applications in the area of Knowledge Graphs. Despite that, one can observe a
substantial lack of specific tools able to generate nontrivial reasoning
settings and benchmark scenarios. As a consequence, evaluating, analysing and
comparing reasoning systems is a complex task, especially when they embody
sophisticated optimizations and execution techniques that leverage the
theoretical underpinnings of the adopted logic fragment. In this paper, we aim
at filling this gap by introducing iWarded, a system that can generate very
large, complex, realistic reasoning settings to be used for the benchmarking of
logic-based reasoning systems adopting Datalog+/-, a family of extensions of
Datalog that has seen a resurgence in the last few years. In particular,
iWarded generates reasoning settings for Warded Datalog+/-, a language with a
very good tradeoff between computational complexity and expressive power. In
the paper, we present the iWarded system and a set of novel theoretical results
adopted to generate effective scenarios. As Datalog-based languages are of
general interest and see increasing adoption, we believe that iWarded is a step
forward in the empirical evaluation of current and future systems.
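To make the setting concrete, below is a minimal, hypothetical sketch of the kind of scenario the paper is about: a two-rule Warded Datalog+/- program with an existential variable in a rule head, chased naively with a round cap. All names (Person, HasParent, chase, and so on) are invented for illustration and are not iWarded's API or output format; a real warded reasoner such as Vadalog uses isomorphism-based termination rather than a crude round limit.

```python
# Illustrative sketch only: a tiny Warded Datalog+/- style rule set and a
# naive, round-bounded chase. The data structures and names are hypothetical
# and NOT taken from iWarded or Vadalog.
from itertools import count

# An atom is (predicate, terms). Terms starting with an uppercase letter are
# variables; labelled nulls invented by the chase are named "_n0", "_n1", ...
# A rule is (body_atoms, head_atom, existential_head_variables). Example:
#   Person(X)       ->  exists Y . HasParent(X, Y)
#   HasParent(X, Y) ->  Person(Y)
RULES = [
    ([("Person", ("X",))], ("HasParent", ("X", "Y")), {"Y"}),
    ([("HasParent", ("X", "Y"))], ("Person", ("Y",)), set()),
]
FACTS = {("Person", ("alice",))}

_nulls = count()


def matches(body, facts):
    """Yield substitutions (variable -> value) that satisfy every body atom."""
    def extend(atoms, subst):
        if not atoms:
            yield dict(subst)
            return
        pred, args = atoms[0]
        for fpred, fargs in facts:
            if fpred != pred or len(fargs) != len(args):
                continue
            s = dict(subst)
            if all(
                (s.setdefault(a, v) == v) if a[0].isupper() else (a == v)
                for a, v in zip(args, fargs)
            ):
                yield from extend(atoms[1:], s)
    yield from extend(body, {})


def head_satisfied(facts, head, subst, exvars):
    """Restricted-chase check: does an existing fact already witness the head?"""
    hpred, hargs = head
    return any(
        fpred == hpred
        and all(a in exvars or subst.get(a, a) == v for a, v in zip(hargs, fargs))
        for fpred, fargs in facts
    )


def chase(facts, rules, max_rounds=4):
    """Apply rules until fixpoint or a round limit is hit. The crude round cap
    stands in for the termination machinery of a real warded chase."""
    for _ in range(max_rounds):
        new = set()
        for body, head, exvars in rules:
            hpred, hargs = head
            for s in matches(body, facts):
                if head_satisfied(facts | new, head, s, exvars):
                    continue
                for v in exvars:  # invent a fresh labelled null per existential
                    s[v] = f"_n{next(_nulls)}"
                new.add((hpred, tuple(s.get(a, a) for a in hargs)))
        if new <= facts:
            return facts
        facts |= new
    return facts


for fact in sorted(chase(set(FACTS), RULES)):
    print(fact)
```

Running the sketch shows the chase inventing labelled nulls (_n0, _n1, ...) for the existentially quantified parent; wardedness is the syntactic restriction on how such nulls may propagate through rule bodies, which is what keeps reasoning in Warded Datalog+/- tractable.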
Related papers
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- LogicAsker: Evaluating and Improving the Logical Reasoning Ability of Large Language Models [63.14196038655506]
We introduce LogicAsker, a novel approach for evaluating and enhancing the logical reasoning capabilities of large language models (LLMs).
Our methodology reveals significant gaps in LLMs' learning of logical rules, with identified reasoning failures ranging from 29% to 90% across different models.
We leverage these findings to construct targeted demonstration examples and fine-tune data, notably enhancing logical reasoning in models like GPT-4o by up to 5%.
arXiv Detail & Related papers (2024-01-01T13:53:53Z)
- Ontological Reasoning over Shy and Warded Datalog+/- for Streaming-based Architectures (technical report) [6.689509223124273]
Datalog-based ontological reasoning systems adopt languages that are often shared under the collective name of Datalog+/-.
In this paper, we focus on two extremely promising, expressive, and tractable languages, namely Shy and Warded Datalog+/-.
We leverage their theoretical underpinnings to introduce novel reasoning techniques, technically, "chase variants", that are particularly fit for efficient reasoning in streaming-based architectures.
We then implement them in Vadalog, our reference streaming-based engine, to efficiently solve ontological reasoning tasks over real-world settings.
arXiv Detail & Related papers (2023-11-20T23:27:43Z) - Neuro-Symbolic Recommendation Model based on Logic Query [16.809190067920387]
We propose a neuro-symbolic recommendation model, which transforms the user history interactions into a logic expression.
The logic expressions are then computed based on the modular logic operations of the neural network.
Experiments on three well-known datasets verified that our method performs better than state-of-the-art shallow, deep, session, and reasoning models.
arXiv Detail & Related papers (2023-09-14T10:54:48Z) - Log Parsing Evaluation in the Era of Modern Software Systems [47.370291246632114]
We focus on one integral part of automated log analysis, log parsing, which is the prerequisite to deriving any insights from logs.
Our investigation reveals problematic aspects within the log parsing field, particularly its inefficiency in handling heterogeneous real-world logs.
We propose a tool, Logchimera, that enables estimating log parsing performance in industry contexts.
arXiv Detail & Related papers (2023-08-17T14:19:22Z) - Scallop: A Language for Neurosymbolic Programming [14.148819428748597]
Scallop is a language that combines the benefits of deep learning and logical reasoning.
It is capable of expressing algorithmic reasoning in diverse and challenging AI tasks.
It provides a succinct interface for machine learning programmers to integrate logical domain knowledge.
arXiv Detail & Related papers (2023-04-10T18:46:53Z) - Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence)
We propose logic structural-constraint modeling to solve the logical reasoning QA and introduce discourse-aware graph networks (DAGNs)
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
arXiv Detail & Related papers (2022-07-04T14:38:49Z) - Complexity of Arithmetic in Warded Datalog+- [1.5469452301122173]
Warded Datalog+- extends the logic-based language Datalog with existential quantifiers in rule heads.
We define a new language that extends Warded Datalog+- with arithmetic and prove its P-completeness.
We present an efficient reasoning algorithm for our newly defined language and prove descriptive complexity for a recently introduced Datalog fragment with integer arithmetic.
arXiv Detail & Related papers (2022-02-10T15:14:03Z) - Logic-Driven Context Extension and Data Augmentation for Logical
Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves state-of-the-art performance, and both the logic-driven context extension framework and the data augmentation algorithm help improve accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z) - Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs)
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.