Fuzzy Datalog$^\exists$ over Arbitrary t-Norms
- URL: http://arxiv.org/abs/2403.02933v1
- Date: Tue, 5 Mar 2024 12:51:40 GMT
- Title: Fuzzy Datalog$^\exists$ over Arbitrary t-Norms
- Authors: Matthias Lanzinger, Stefano Sferrazza, Przemysław A. Wałęga, Georg Gottlob
- Abstract summary: One of the main challenges in the area of Neuro-Symbolic AI is to perform logical reasoning in the presence of both neural and symbolic data.
This requires combining heterogeneous data sources such as knowledge graphs, neural model predictions, structured databases, crowd-sourced data, and many more.
We generalise the standard rule-based language Datalog with existential rules to the fuzzy setting, by allowing for arbitrary t-norms in the place of classical conjunctions in rule bodies.
The resulting formalism allows us to perform reasoning about data associated with degrees of uncertainty while preserving computational complexity results and the applicability of reasoning techniques established for the standard Datalog setting.
- Score: 5.464669506214195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main challenges in the area of Neuro-Symbolic AI is to perform
logical reasoning in the presence of both neural and symbolic data. This
requires combining heterogeneous data sources such as knowledge graphs, neural
model predictions, structured databases, crowd-sourced data, and many more. To
allow for such reasoning, we generalise the standard rule-based language
Datalog with existential rules (commonly referred to as tuple-generating
dependencies) to the fuzzy setting, by allowing for arbitrary t-norms in the
place of classical conjunctions in rule bodies. The resulting formalism allows
us to perform reasoning about data associated with degrees of uncertainty while
preserving computational complexity results and the applicability of reasoning
techniques established for the standard Datalog setting. In particular, we
provide fuzzy extensions of Datalog chases which produce fuzzy universal models
and we exploit them to show that in important fragments of the language,
reasoning has the same complexity as in the classical setting.
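For illustration, the following minimal Python sketch (not the paper's algorithm; the facts and predicate names are made up, and existential rules and the chase machinery are omitted) applies a single fuzzy Datalog rule under three common t-norms:

```python
# Minimal illustrative sketch: one fuzzy Datalog rule evaluated under
# different t-norms. Facts and predicate names are hypothetical.

# Common t-norms that can replace classical conjunction in rule bodies.
T_NORMS = {
    "godel": min,
    "product": lambda a, b: a * b,
    "lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),
}

# Fuzzy facts: ground atom -> truth degree in [0, 1].
FACTS = {
    ("knows", "alice", "bob"): 0.9,
    ("knows", "bob", "carol"): 0.7,
}

def apply_rule(facts, t_norm):
    """One application of knows(X,Z) <- knows(X,Y), knows(Y,Z).

    Body degrees are combined with the t-norm; each derived head atom
    keeps the best (supremum) degree over all derivations.
    """
    derived = dict(facts)
    for (_, x, y1), d1 in facts.items():
        for (_, y2, z), d2 in facts.items():
            if y1 == y2:
                head = ("knows", x, z)
                derived[head] = max(derived.get(head, 0.0), t_norm(d1, d2))
    return derived

for name, t in T_NORMS.items():
    print(name, round(apply_rule(FACTS, t)[("knows", "alice", "carol")], 2))
# godel 0.7, product 0.63, lukasiewicz 0.6
```

Note how the choice of t-norm changes only how body degrees are combined; the rule-application mechanics stay exactly as in classical Datalog.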
Related papers
- Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules [9.303501974597548]
We introduce precision and recall for logical rules and define their composition as rule utility.
We introduce SPECTRUM, a scalable framework for learning logical models from relational data.
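A hedged sketch of one plausible reading of rule precision and recall over ground facts; the paper's exact definitions, and how it composes them into utility, may differ:

```python
# Hypothetical reading of precision/recall for a rule "body => head".

def rule_precision_recall(body_groundings, head_facts):
    """body_groundings: set of head atoms predicted by the rule body;
    head_facts: set of head atoms actually present in the data."""
    hits = body_groundings & head_facts
    precision = len(hits) / len(body_groundings) if body_groundings else 0.0
    recall = len(hits) / len(head_facts) if head_facts else 0.0
    return precision, recall

def rule_utility(precision, recall):
    # Assumption: an F1-style harmonic-mean composition; the paper may
    # compose precision and recall differently.
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

pred = {("parent", "a", "b"), ("parent", "a", "c")}
true = {("parent", "a", "b"), ("parent", "d", "e")}
p, r = rule_precision_recall(pred, true)
print(round(p, 2), round(r, 2), round(rule_utility(p, r), 2))  # 0.5 0.5 0.5
```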
arXiv Detail & Related papers (2024-09-24T16:54:12Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
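A hedged sketch of what a fuzzy-logic continuous relaxation of a grounded formula can look like; the exact connectives LOGICSEG uses may differ, and product logic is assumed here:

```python
# Fuzzy relaxations of logical connectives (product logic assumed).
def f_not(a): return 1.0 - a
def f_and(a, b): return a * b                  # product t-norm
def f_or(a, b): return a + b - a * b           # probabilistic sum (t-conorm)
def f_implies(a, b): return f_or(f_not(a), b)  # material implication

# Hypothetical rule "cat(x) -> animal(x)", grounded on class scores.
p_cat, p_animal = 0.8, 0.6
truth = f_implies(p_cat, p_animal)       # degree to which the rule holds
logic_loss = 1.0 - truth                 # penalize violations during training
print(round(truth, 2), round(logic_loss, 2))  # 0.68 0.32
```

Because every connective is a smooth function of the scores, such a loss can be backpropagated through the network, which is what makes logic-induced training possible.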
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Neuro-Symbolic Recommendation Model based on Logic Query [16.809190067920387]
We propose a neuro-symbolic recommendation model that transforms a user's historical interactions into a logic expression.
The logic expressions are then computed based on the modular logic operations of the neural network.
Experiments on three well-known datasets verified that our method performs better than state-of-the-art shallow, deep, session, and reasoning models.
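A hedged sketch of the general idea of logic operations as learnable neural modules over embeddings; the module shapes, similarity scoring, and training objective below are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# One tiny linear layer per logic operation (NOT is unary, AND is binary).
W_not = rng.normal(size=(DIM, DIM))
W_and = rng.normal(size=(DIM, 2 * DIM))

def neural_not(e):
    return np.tanh(W_not @ e)

def neural_and(e1, e2):
    return np.tanh(W_and @ np.concatenate([e1, e2]))

# Ground a history expression like (liked_a AND NOT liked_b) into a
# vector, then score it by similarity to a learned "true" anchor vector.
liked_a, liked_b, true_vec = (rng.normal(size=DIM) for _ in range(3))
expr = neural_and(liked_a, neural_not(liked_b))
score = expr @ true_vec / (np.linalg.norm(expr) * np.linalg.norm(true_vec))
print(round(float(score), 3))
```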
arXiv Detail & Related papers (2023-09-14T10:54:48Z)
- RandomSCM: interpretable ensembles of sparse classifiers tailored for omics data [59.4141628321618]
We propose an ensemble learning algorithm based on conjunctions or disjunctions of decision rules.
The interpretability of the models makes them useful for biomarker discovery and pattern discovery in high-dimensional data.
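A minimal sketch of a classifier built from conjunctions and disjunctions of decision rules, in the spirit of this approach; the thresholds and features are made up for illustration:

```python
# Each rule is a simple threshold test on one feature.
def rule(feature_idx, threshold):
    return lambda x: x[feature_idx] > threshold

def conjunction(rules):
    return lambda x: all(r(x) for r in rules)

def disjunction(rules):
    return lambda x: any(r(x) for r in rules)

# A readable model: "(gene0 high AND gene2 high) OR gene1 high".
model = disjunction([
    conjunction([rule(0, 0.5), rule(2, 1.0)]),
    conjunction([rule(1, 0.3)]),
])
print(model([0.7, 0.1, 2.0]), model([0.1, 0.1, 0.1]))  # True False
```

Interpretability falls out of the structure: the decision for any sample can be read off as a short Boolean formula over named features.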
arXiv Detail & Related papers (2022-08-11T13:55:04Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to other neuro-symbolic methods, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
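A hedged sketch of the LNN-style weighted real-valued conjunction (following the form from Riegel et al.); the bias and weights shown are illustrative, and real LNNs constrain these parameters during training:

```python
def lnn_and(inputs, weights, beta):
    """Weighted Lukasiewicz-style conjunction, clamped to [0, 1]."""
    s = beta - sum(w * (1.0 - x) for w, x in zip(weights, inputs))
    return max(0.0, min(1.0, s))

# With beta = 1 and unit weights this reduces to the Lukasiewicz t-norm.
print(round(lnn_and([0.9, 0.7], [1.0, 1.0], 1.0), 2))  # 0.6
```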
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Structural Learning of Probabilistic Sentential Decision Diagrams under Partial Closed-World Assumption [127.439030701253]
Probabilistic sentential decision diagrams are a class of structured-decomposable circuits.
We propose a new scheme based on a partial closed-world assumption: data implicitly provide the logical base of the circuit.
Preliminary experiments show that the proposed approach might properly fit training data, and generalize well to test data, provided that these remain consistent with the underlying logical base.
arXiv Detail & Related papers (2021-07-26T12:01:56Z)
- Mining Feature Relationships in Data [0.0]
Feature relationship mining (FRM) uses a genetic programming approach to automatically discover symbolic relationships between continuous or categorical features in data.
Our proposed approach is the first such symbolic approach with the goal of explicitly discovering relationships between features.
Empirical testing on a variety of real-world datasets shows the proposed method is able to find high-quality, simple feature relationships.
arXiv Detail & Related papers (2021-02-02T07:06:16Z)
- NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z)
- Neural Collaborative Reasoning [31.03627817834551]
We propose advancing Collaborative Filtering (CF) to Collaborative Reasoning (CR).
CR means that each user knows part of the reasoning space, and they collaborate for reasoning in the space to estimate preferences for each other.
We integrate the power of representation learning and logical reasoning, where representations capture similarity patterns in data from perceptual perspectives.
arXiv Detail & Related papers (2020-05-16T23:29:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.