Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks
- URL: http://arxiv.org/abs/2112.03324v1
- Date: Mon, 6 Dec 2021 19:38:30 GMT
- Title: Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks
- Authors: Prithviraj Sen, Breno W. S. R. de Carvalho, Ryan Riegel, Alexander
Gray
- Abstract summary: We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
- Score: 65.23508422635862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work on neuro-symbolic inductive logic programming has led to
promising approaches that can learn explanatory rules from noisy, real-world
data. While some proposals approximate logical operators with parameter-free
differentiable operators from fuzzy or real-valued logic, which limits their
capacity to fit the data, other approaches are only loosely based on logic,
making it difficult to interpret the learned "rules". In this paper, we propose
learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic,
allowing for precise interpretation of learned rules, while harboring
parameters that can be trained with gradient-based optimization to
effectively fit the data. We extend LNNs to induce rules in first-order logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are
highly interpretable and can achieve comparable or higher accuracy due to their
flexible parameterization.
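The contrast drawn here between parameter-free fuzzy operators and trainable LNN operators can be made concrete with a small sketch. The following weighted, Łukasiewicz-style conjunction is illustrative only, not the paper's exact formulation; the function name, weights, and target values are assumptions for the example. With w = 1 and β = 1 it reduces to classical Boolean AND, while gradient descent can adjust w and β to fit noisy data.
```python
import torch

def weighted_and(x, w, beta):
    # Weighted real-valued conjunction (sketch): truth values x in [0, 1],
    # per-operand weights w, and bias beta; clamping keeps the result in [0, 1].
    return torch.clamp(beta - torch.sum(w * (1.0 - x), dim=-1), 0.0, 1.0)

# Illustrative usage: fit the operator's parameters to target truth values.
x = torch.tensor([[0.9, 0.2], [0.8, 0.7]])           # batch of operand truth values
w = torch.ones(2, requires_grad=True)                 # trainable operand weights
beta = torch.tensor(1.0, requires_grad=True)          # trainable bias
loss = ((weighted_and(x, w, beta) - torch.tensor([0.1, 0.7])) ** 2).mean()
loss.backward()                                        # gradients flow into w and beta
```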
Related papers
- Learning Interpretable Differentiable Logic Networks [3.8064485653035987]
We introduce a novel method for learning interpretable differentiable logic networks (DLNs).
We train these networks by softening and differentiating their discrete components, including the binarization of inputs, the binary logic operations, and the connections between neurons.
Experimental results on twenty classification tasks indicate that differentiable logic networks can achieve accuracies comparable to or exceeding those of traditional NNs.
arXiv Detail & Related papers (2024-07-04T21:58:26Z)
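One common way to realize the "softened" binary logic operations described in the entry above is to evaluate all 16 two-input Boolean functions on relaxed inputs and mix them with a learned softmax, discretizing to a single gate after training. This is a sketch of that general idea under stated assumptions, not necessarily the DLN paper's exact construction.
```python
import torch
import torch.nn.functional as F

def all_binary_ops(a, b):
    # Every two-input Boolean function is a sum of a subset of the 4 minterms.
    minterms = torch.stack([a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)], dim=-1)
    table = torch.tensor([[int(bit) for bit in format(i, "04b")] for i in range(16)],
                         dtype=a.dtype)
    return minterms @ table.T                          # (..., 16) soft gate outputs

class SoftGate(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(16))  # which gate to become

    def forward(self, a, b):
        return all_binary_ops(a, b) @ F.softmax(self.logits, dim=0)

gate = SoftGate()
a, b = torch.tensor([0.9]), torch.tensor([0.1])
out = gate(a, b)               # differentiable mixture of all 16 gates
out.sum().backward()           # after training, argmax(logits) picks one gate (e.g. AND)
```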
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and are able to generalize to other tasks.
Existing methods suffer either from searching in a large search space or from ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
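To make concrete what "logic rules for reasoning on knowledge graphs" look like when applied, here is a toy illustration of a single chain rule over a hand-made knowledge graph; the relation names and the rule are invented for the example, and this is not RNNLogic's rule-generator and reasoner algorithm.
```python
# Toy chain rule:  nationality(X, Z) <- born_in(X, Y) AND city_of(Y, Z)
triples = {("marie", "born_in", "warsaw"), ("warsaw", "city_of", "poland")}

def apply_chain_rule(kg, head, body1, body2):
    """Derive head(X, Z) whenever body1(X, Y) and body2(Y, Z) hold in the KG."""
    derived = set()
    for (x, r1, y1) in kg:
        if r1 != body1:
            continue
        for (y2, r2, z) in kg:
            if r2 == body2 and y1 == y2:
                derived.add((x, head, z))
    return derived

print(apply_chain_rule(triples, "nationality", "born_in", "city_of"))
# {('marie', 'nationality', 'poland')}
```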
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
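The idea of learning AND, OR, and NOT as neural modules, as in the LINN entry above, can be sketched as small MLPs over vector representations of variables that are composed according to a logical expression. Dimensions, module architecture, and variable names here are assumptions for illustration, not LINN's exact design.
```python
import torch
import torch.nn as nn

DIM = 16  # embedding dimension of each propositional variable (illustrative)

class LogicModule(nn.Module):
    """A learned logic operation: maps operand vectors to a result vector."""
    def __init__(self, arity):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(arity * DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

    def forward(self, *args):
        return self.net(torch.cat(args, dim=-1))

AND, OR, NOT = LogicModule(2), LogicModule(2), LogicModule(1)

a, b, c = (torch.randn(1, DIM) for _ in range(3))    # embeddings of variables
expr = OR(AND(a, b), NOT(c))                          # evaluates (a AND b) OR (NOT c)
# Training would push expr toward a learned "TRUE" vector whenever the
# underlying proposition holds, so the composed network performs inference.
```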
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability of models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
- Analyzing Differentiable Fuzzy Logic Operators [3.4806267677524896]
We study how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting.
We show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
arXiv Detail & Related papers (2020-02-14T16:11:36Z)
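As a small illustration of how different fuzzy operators behave under gradient-based learning (the general phenomenon this paper studies; the specific values below are invented), compare the product t-norm, whose gradient always flows, with the Łukasiewicz t-norm, whose gradient vanishes wherever a + b stays below 1.
```python
import torch

def product_tnorm(a, b):        # Goguen / product logic: a AND b = a * b
    return a * b

def lukasiewicz_tnorm(a, b):    # Lukasiewicz logic: a AND b = max(0, a + b - 1)
    return torch.clamp(a + b - 1.0, min=0.0)

a = torch.tensor(0.3, requires_grad=True)
b = torch.tensor(0.4, requires_grad=True)

product_tnorm(a, b).backward()
print(a.grad)                   # gradient flows: equals b (0.4)

a.grad = None
lukasiewicz_tnorm(a, b).backward()
print(a.grad)                   # clamped at 0, so the gradient here is 0
```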
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.