NeuralLog: a Neural Logic Language
- URL: http://arxiv.org/abs/2105.01442v1
- Date: Tue, 4 May 2021 12:09:35 GMT
- Title: NeuralLog: a Neural Logic Language
- Authors: Victor Guimarães and Vítor Santos Costa
- Abstract summary: We propose NeuralLog, a first-order logic language that is compiled to a neural network.
The main goal of NeuralLog is to bridge logic programming and deep learning.
We have shown that NeuralLog can learn link prediction and classification tasks, using the same theory as the compared systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Application domains that require considering relationships among
objects with real-valued attributes are becoming increasingly important. In
this paper we propose NeuralLog, a first-order logic language that is compiled
to a neural network. The main goal of NeuralLog is to bridge logic programming
and deep learning, allowing advances in both fields to be combined in order to
obtain better machine learning models. The main advantages of NeuralLog are
that it allows neural networks to be defined as logic programs and that it can
handle numeric attributes and functions. We compared NeuralLog with two
distinct systems that use first-order logic to build neural networks. We also
show that NeuralLog can learn link prediction and classification tasks, using
the same theory as the compared systems, achieving better area under the ROC
curve on four datasets: Cora and UWCSE for link prediction, and Yelp and
PAKDD15 for classification; and comparable results for link prediction on the
WordNet dataset.
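The abstract does not spell out the compilation scheme, so the following is only a minimal NumPy sketch of the general idea behind languages like NeuralLog: weighted first-order facts become tensors, and a rule that chains two predicates becomes a differentiable tensor operation. All predicate names, constants, and weights below are invented for illustration.

```python
import numpy as np

# Hypothetical knowledge base over 4 constants c0..c3. Each weighted binary
# fact, e.g. p(c0, c1) with weight 0.9, becomes one entry of a matrix.
n = 4
P = np.zeros((n, n))
Q = np.zeros((n, n))
P[0, 1] = 0.9   # p(c0, c1)
P[1, 2] = 0.7   # p(c1, c2)
Q[1, 3] = 0.8   # q(c1, c3)
Q[2, 3] = 0.5   # q(c2, c3)

# The rule  r(X, Z) :- p(X, Y), q(Y, Z)  joins p and q on the shared
# variable Y. Under this tensor encoding the join is a matrix product:
# conjunction multiplies weights, and summing over Y aggregates the
# alternative proof paths for each (X, Z) pair.
R = np.clip(P @ Q, 0.0, 1.0)

print(R[0, 3])  # score of r(c0, c3), proved through the binding Y = c1
```

Because every step is differentiable, the fact weights can be fitted by gradient descent, which is the sense in which a logic program can double as a neural network.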
Related papers
- Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations (2024-10-10)
We propose a novel neuro-symbolic architecture, Neural Reasoning Networks (NRN), that is scalable and generates logical textual explanations for its predictions.
A training algorithm (R-NRN) learns the weights of the network as usual using gradient descent with backpropagation, but also learns the network structure itself using bandit-based optimization.
R-NRN explanations are shorter than those of the compared approaches while producing more accurate feature importance scores.
- LinSATNet: The Positive Linear Satisfiability Neural Networks (2024-07-18)
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
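For background on the building block this summary mentions: the classic Sinkhorn algorithm alternately normalizes the rows and columns of a positive matrix until it is approximately doubly stochastic. The sketch below shows only this classic single-matrix version, not the paper's multi-marginal extension.

```python
import numpy as np

def sinkhorn(M, iters=50):
    """Classic Sinkhorn: scale a positive matrix toward doubly stochastic."""
    M = np.asarray(M, dtype=float)
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)  # normalize rows
        M = M / M.sum(axis=0, keepdims=True)  # normalize columns
    return M

S = sinkhorn(np.random.rand(4, 4) + 1e-6)
print(S.sum(axis=0), S.sum(axis=1))  # both close to all-ones
```

Since each step is an elementwise division, the whole loop is differentiable and can be embedded in a network, which is what makes a "satisfiability layer" of this kind possible.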
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks (2022-05-31)
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model underlying SNNs and the parameters of the ReLU artificial neuron (ReLU-AN) in Deep Neural Networks (DNNs).
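The paper's exact parameter mapping is not reproduced here; as background intuition, the textbook steady-state firing rate of a LIF neuron under constant input already has a ReLU-like shape: zero below threshold, then increasing with the input. A minimal sketch, assuming standard LIF dynamics with reset to zero:

```python
import numpy as np

def lif_rate(i_in, tau=0.02, r_m=1.0, v_th=1.0, t_ref=0.002):
    """Steady-state firing rate (Hz) of a LIF neuron under constant input.

    Below threshold the neuron is silent (like ReLU's zero region); above
    threshold the rate grows with the input before saturating at 1/t_ref.
    """
    drive = i_in * r_m
    if drive <= v_th:
        return 0.0
    t_spike = tau * np.log(drive / (drive - v_th))  # time to reach threshold
    return 1.0 / (t_ref + t_spike)

for i in (0.5, 1.0, 1.5, 3.0, 10.0):
    print(i, round(lif_rate(i), 1))
```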
- Knowledge Enhanced Neural Networks for relational domains (2022-05-31)
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
- Logic Tensor Networks (2020-12-25)
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
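As a rough, simplified illustration of the LTN idea (not the actual library API): constants are grounded as vectors, predicates as trainable functions into [0, 1], and connectives as differentiable fuzzy operators, so the truth value of a formula becomes an objective that can be maximized. The predicate weights below are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Groundings: each constant is a feature vector, each predicate a function
# from vectors to a truth degree in [0, 1].
x, y = rng.normal(size=2), rng.normal(size=2)
w_p, w_q = rng.normal(size=2), rng.normal(size=2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

P = lambda v: sigmoid(w_p @ v)   # truth degree of P(v)
Q = lambda v: sigmoid(w_q @ v)   # truth degree of Q(v)

# Fuzzy connectives (product t-norm family): differentiable stand-ins
# for AND, NOT, and IMPLIES.
AND = lambda a, b: a * b
NOT = lambda a: 1.0 - a
IMPLIES = lambda a, b: 1.0 - a + a * b   # Reichenbach implication

# Truth of the formula P(x) AND (Q(x) IMPLIES Q(y)); maximizing such
# values over the predicate parameters is how LTN-style systems learn.
print(AND(P(x), IMPLIES(Q(x), Q(y))))
```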
- The Neural Coding Framework for Learning Generative Models (2020-12-07)
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
Much as neurons in the brain do, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality.
- Neural Logic Reasoning (2020-08-20)
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
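A minimal sketch of the "logic operations as neural modules" idea, with made-up dimensions and random weights standing in for trained ones; the actual LINN also adds regularizers that push the modules to respect logical laws such as double negation.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # dimensionality of variable embeddings

# Each logical operation is a small trainable module over embeddings:
# NOT maps R^d -> R^d, while AND and OR map R^{2d} -> R^d.
W_not = rng.normal(scale=0.1, size=(d, d))
W_and = rng.normal(scale=0.1, size=(d, 2 * d))
W_or = rng.normal(scale=0.1, size=(d, 2 * d))

NOT = lambda x: np.tanh(W_not @ x)
AND = lambda x, y: np.tanh(W_and @ np.concatenate([x, y]))
OR = lambda x, y: np.tanh(W_or @ np.concatenate([x, y]))

a, b = rng.normal(size=d), rng.normal(size=d)
# An expression such as NOT(a AND b) is evaluated by wiring modules
# together into a network; after training, the De Morgan-equivalent
# form OR(NOT(a), NOT(b)) should yield a similar vector.
print(NOT(AND(a, b)))
print(OR(NOT(a), NOT(b)))
```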
- Learning Syllogism with Euler Neural-Networks (2020-07-14)
The central vector of a ball inherits the representation power of a traditional neural network.
A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) can optimize an Euler diagram representing logical premises.
In contrast to traditional neural networks, the Euler Neural-Network (ENN) can precisely represent all 24 different structures of syllogism.
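For intuition about the ball representation (a simplified geometric sketch, not the paper's ReSU-based training procedure): each concept is an n-ball given by a center vector and a radius, and syllogistic premises become region relations between balls.

```python
import numpy as np

def inside(c1, r1, c2, r2):
    """Ball 1 lies entirely inside ball 2: 'All A are B'."""
    return np.linalg.norm(c1 - c2) + r1 <= r2

def disjoint(c1, r1, c2, r2):
    """Balls share no points: 'No A are B'."""
    return np.linalg.norm(c1 - c2) >= r1 + r2

# Concepts as (center, radius); the center plays the role of an ordinary
# embedding vector, the radius adds the region semantics.
cats = (np.array([0.0, 0.0]), 1.0)
mammals = (np.array([0.2, 0.1]), 2.0)
rocks = (np.array([5.0, 5.0]), 1.0)

print(inside(*cats, *mammals))   # True: all cats are mammals
print(disjoint(*cats, *rocks))   # True: no cats are rocks
```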
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations (2020-06-24)
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
- Evaluating Logical Generalization in Graph Neural Networks (2020-03-14)
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.