Neural Networks Enhancement with Logical Knowledge
- URL: http://arxiv.org/abs/2009.06087v2
- Date: Mon, 18 Oct 2021 11:53:26 GMT
- Title: Neural Networks Enhancement with Logical Knowledge
- Authors: Alessandro Daniele, Luciano Serafini
- Abstract summary: We propose an extension of KENN for relational data.
The results show that KENN is capable of improving the performance of the underlying neural network even in the presence of relational data.
- Score: 83.9217787335878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the recent past, there has been growing interest in Neural-Symbolic
Integration frameworks, i.e., hybrid systems that integrate connectionist and
symbolic approaches to obtain the best of both worlds. In a previous work, we
proposed KENN (Knowledge Enhanced Neural Networks), a Neural-Symbolic
architecture that injects prior logical knowledge into a neural network by
adding a new final layer which modifies the initial predictions according to
the knowledge. Among the advantages of this strategy is the inclusion of
clause weights, learnable parameters that represent the strength of the
clauses, meaning that the model can learn the impact of each clause on the
final predictions. As a special case, if the training data contradicts a
constraint, KENN learns to ignore it, making the system robust to the presence
of wrong knowledge. In this paper, we propose an extension of KENN for
relational data. To evaluate this new extension, we tested it with different
learning configurations on Citeseer, a standard dataset for Collective
Classification. The results show that KENN is capable of improving the
performance of the underlying neural network even in the presence of relational
data, outperforming two other notable methods that combine learning with logic.
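The clause-weight mechanism described in the abstract lends itself to a compact illustration. Below is a minimal sketch of a KENN-style clause enhancement layer, assuming PyTorch; the class name `ClauseEnhancer`, the softmax-based boost, and the example clause are illustrative assumptions, not the authors' published code.

```python
# Minimal sketch of a KENN-style clause enhancement layer (assumptions:
# PyTorch; one disjunctive clause over a subset of predicates; negated
# literals encoded with sign -1). Not the authors' implementation.
import torch
import torch.nn as nn

class ClauseEnhancer(nn.Module):
    """Raises the truth of a disjunctive clause by boosting the
    pre-activations (logits) of its literals."""

    def __init__(self, literal_indices, literal_signs):
        super().__init__()
        # Learnable clause weight: the strength of this clause.
        self.clause_weight = nn.Parameter(torch.tensor(0.5))
        self.register_buffer("idx", torch.tensor(literal_indices))
        self.register_buffer("sign",
                             torch.tensor(literal_signs, dtype=torch.float32))

    def forward(self, z):
        # z: (batch, n_predicates) pre-activations from the base network.
        lit = self.sign * z[:, self.idx]       # literal pre-activations
        delta = torch.softmax(lit, dim=-1)     # boost the "easiest" literal most
        out = z.clone()
        out[:, self.idx] = out[:, self.idx] + self.clause_weight * self.sign * delta
        return out

# Usage: clause "not Smokes(x) or Cancer(x)" over predicates [Smokes, Cancer].
enhancer = ClauseEnhancer(literal_indices=[0, 1], literal_signs=[-1, 1])
logits = torch.randn(4, 2)                     # base-network predictions
enhanced = torch.sigmoid(enhancer(logits))     # final truth values
```

Because the clause weight is a learnable parameter, gradient descent can drive it toward zero when the training data contradicts the clause, which matches the robustness-to-wrong-knowledge behavior the abstract claims.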
Related papers
- GINN-KAN: Interpretability pipelining with applications in Physics Informed Neural Networks [5.2969467015867915]
We introduce the concept of interpretability pipelining, which combines multiple interpretability techniques to outperform each individual technique.
We evaluate two recent models selected for their potential to incorporate interpretability into standard neural network architectures.
We introduce a novel interpretable neural network GINN-KAN that synthesizes the advantages of both models.
arXiv Detail & Related papers (2024-08-27T04:57:53Z) - Image Classification using Fuzzy Pooling in Convolutional Kolmogorov-Arnold Networks [0.0]
We present an approach that integrates Kolmogorov-Arnold Network (KAN) classification heads and Fuzzy Pooling into convolutional neural networks (CNNs).
Our comparative analysis demonstrates that the modified CNN architecture with KAN and Fuzzy Pooling achieves comparable or higher accuracy than traditional models.
arXiv Detail & Related papers (2024-07-23T08:18:04Z) - Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z) - Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z) - Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge Adaptation framework via reinforcement learning for Dynamic Graph Neural Networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves the state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z) - Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs); a generic LIF update rule is sketched after this list.
arXiv Detail & Related papers (2022-05-31T17:02:26Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - Epistemic Neural Networks [25.762699400341972]
In principle, ensemble-based approaches produce effective joint predictions.
But the computational costs of training large ensembles can become prohibitive.
We introduce the epinet: an architecture that can supplement any conventional neural network.
arXiv Detail & Related papers (2021-07-19T14:37:57Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
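For the spiking-network entry above, the paper's mapping rests on the standard Leaky-Integrate-and-Fire dynamics. The following is a minimal sketch of the discrete LIF update in its generic textbook form; parameter names like `leak` and `v_reset` are illustrative, not the paper's exact parameterization.

```python
# Minimal sketch of a discrete Leaky-Integrate-and-Fire (LIF) neuron
# (generic textbook form, not the paper's specific model). Under rate
# coding, the firing rate of such a neuron grows roughly linearly with
# input above threshold, which is the intuition behind LIF-to-ReLU
# mappings.
import numpy as np

def lif_simulate(inputs, leak=0.9, threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron over T timesteps; return its spike train."""
    v = v_reset
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = leak * v + x        # leaky integration of the input current
        if v >= threshold:      # fire when the membrane potential crosses threshold
            spikes[t] = 1.0
            v = v_reset         # hard reset after a spike
    return spikes

# Approximate firing rate under a constant input current of 0.3.
rate = lif_simulate(np.full(100, 0.3)).mean()
```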