Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and
Perspective
- URL: http://arxiv.org/abs/2003.00330v7
- Date: Sat, 12 Jun 2021 23:05:33 GMT
- Title: Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and
Perspective
- Authors: Luis C. Lamb, Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar,
Moshe Vardi
- Abstract summary: We review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing.
This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.
- Score: 8.047921724008278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural-symbolic computing has now become the subject of interest of both
academic and industry research laboratories. Graph Neural Networks (GNNs) have
been widely used in relational and symbolic domains, with widespread
application of GNNs in combinatorial optimization, constraint satisfaction,
relational reasoning and other scientific domains. The need for improved
explainability, interpretability and trust of AI systems in general demands
principled methodologies, as suggested by neural-symbolic computing. In this
paper, we review the state-of-the-art on the use of GNNs as a model of
neural-symbolic computing. This includes the application of GNNs in several
domains as well as their relationship to current developments in neural-symbolic
computing.
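To make the survey's central object concrete: a message-passing GNN refines node embeddings by repeatedly aggregating neighbour states, which is what lets it operate on relational structures such as constraint graphs. The sketch below is a minimal NumPy illustration of this mechanism, not code from the paper; all names (gnn_layer, W_self, W_neigh) are chosen here for exposition.

```python
import numpy as np

def gnn_layer(H, A, W_self, W_neigh):
    """One round of message passing: each node combines its own state
    with the sum of its neighbours' states, then applies a ReLU."""
    messages = A @ H @ W_neigh            # aggregate neighbour features
    return np.maximum(H @ W_self + messages, 0.0)

# Toy relational graph: 4 entities in a chain, edges encode a binary relation.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))               # initial node embeddings
W_self = 0.1 * rng.normal(size=(8, 8))
W_neigh = 0.1 * rng.normal(size=(8, 8))

for _ in range(3):                        # three rounds of message passing
    H = gnn_layer(H, A, W_self, W_neigh)
print(H.shape)  # (4, 8): refined embeddings for downstream relational queries
```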
Related papers
- Network Representation Learning for Biophysical Neural Network Analysis [0.7262345640500065]
We introduce a novel biophysical neural network (BNN) analysis framework grounded in network representation learning (NRL).
Our framework integrates a new computational graph (CG)-based BNN representation with a bio-inspired graph attention network (BGAN).
BGAN reflects the compositional structure of neurons, including dendrites, somas, and axons, as well as bidirectional information flows between BNN components.
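A rough sketch of the attention mechanism such a bio-inspired graph attention network could use, in the style of standard GAT scoring; this is an assumption for illustration, not BGAN's actual architecture, and all names below are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_weights(h_i, H_nbrs, a):
    """Score each neighbour j by a^T [h_i || h_j] (GAT-style),
    apply a LeakyReLU, and normalise across neighbours."""
    scores = np.array([a @ np.concatenate([h_i, h_j]) for h_j in H_nbrs])
    scores = np.where(scores > 0, scores, 0.2 * scores)  # LeakyReLU
    return softmax(scores)

rng = np.random.default_rng(1)
h_soma = rng.normal(size=4)                 # state of a soma node
H_dendrites = rng.normal(size=(3, 4))       # states of attached dendrites
a = rng.normal(size=8)                      # shared attention vector

w = attention_weights(h_soma, H_dendrites, a)
h_new = w @ H_dendrites                     # attention-weighted aggregation
print(w, h_new)
```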
arXiv Detail & Related papers (2024-10-15T11:16:54Z)
- Deep Neural Networks via Complex Network Theory: a Perspective [3.1023851130450684]
Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures.
In this work, we extend the existing CNT metrics with measures that sample from the DNNs' training distribution, shifting from a purely topological analysis to one that connects with the interpretability of deep learning.
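As one concrete example of a CNT-style measure read directly off a DNN's weights, consider node strength, the total absolute weight incident on a neuron. The snippet is an illustrative sketch, not the paper's metric suite.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 32))    # weights of one dense layer: 16 in, 32 out

# Node strength, a standard complex-network measure applied to a DNN layer:
# for each neuron, the total absolute weight of its incident connections.
strength_out_neurons = np.abs(W).sum(axis=0)  # strength of the 32 output units
strength_in_neurons = np.abs(W).sum(axis=1)   # strength of the 16 input units
print(strength_out_neurons.mean(), strength_in_neurons.mean())
```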
arXiv Detail & Related papers (2024-04-17T08:42:42Z)
- Spectral Neural Networks: Approximation Theory and Optimization Landscape [6.967392207053043]
We present key theoretical aspects of Spectral Neural Network (SNN) training.
First, we present quantitative insights into the tradeoff between the number of neurons and the amount of spectral information a neural network learns.
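Assuming the usual setup for Spectral Neural Networks, the object the network learns to approximate is the spectral embedding of a graph, i.e. the eigenvectors of its Laplacian with smallest eigenvalues. For a small graph this target can be computed exactly, as in the sketch below (illustrative only, not the paper's training procedure).

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                 # unnormalised graph Laplacian

# Spectral embedding target: eigenvectors of the smallest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
embedding = eigvecs[:, 1:3]               # skip the trivial constant eigenvector
print(eigvals[:3], embedding)
```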
arXiv Detail & Related papers (2023-10-01T17:03:47Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
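The reported effect can be pictured as masking the message-passing graph: embeddings propagate only along edges permitted by domain knowledge, rather than along every KG edge. A minimal sketch under that reading, with purely illustrative data:

```python
import numpy as np

# Full KG adjacency vs. a domain-knowledge mask over the same entities:
# 1 where domain experts allow embedding information to flow, 0 elsewhere.
A_kg = np.array([[0, 1, 1],
                 [1, 0, 1],
                 [1, 1, 0]], dtype=float)
mask = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], dtype=float)

A_gnn = A_kg * mask                       # propagate only along permitted edges

rng = np.random.default_rng(3)
H = rng.normal(size=(3, 4))               # entity embeddings
H_next = np.maximum(A_gnn @ H, 0.0)       # one constrained propagation step
print(H_next)
```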
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are built on homogeneous neurons that use a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
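Heterogeneous coding can be made concrete by contrasting two standard schemes: rate coding, where the spike count carries the value, and latency coding, where spike timing carries it. The snippet below is a generic illustration of these two schemes, not the paper's encoder.

```python
import numpy as np

def rate_code(x, T=20, rng=None):
    """Rate coding: a value in [0, 1] sets the firing probability per step."""
    if rng is None:
        rng = np.random.default_rng(4)
    return (rng.random(T) < x).astype(int)   # spike train of length T

def latency_code(x, T=20):
    """Latency coding: larger values fire earlier (a single spike)."""
    train = np.zeros(T, dtype=int)
    train[int((1.0 - x) * (T - 1))] = 1
    return train

print(rate_code(0.8))     # many spikes; their timing is irrelevant
print(latency_code(0.8))  # one early spike; its timing carries the value
```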
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs. Here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
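A VNN layer is a graph convolution whose shift operator is the sample covariance matrix C, i.e. a polynomial filter y = sum_k h_k C^k x. A minimal sketch of one such filter, assuming that standard formulation (the variable names are illustrative):

```python
import numpy as np

def vnn_filter(x, C, h):
    """Covariance filter: y = sum_k h[k] * C^k x, a polynomial in C."""
    y = np.zeros_like(x)
    Ck_x = x.copy()
    for hk in h:
        y += hk * Ck_x
        Ck_x = C @ Ck_x                   # next power of C applied to x
    return y

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 6))             # 100 subjects, 6 anatomical features
C = np.cov(X, rowvar=False)               # sample covariance as the "graph"
x = X[0]                                  # one subject's feature vector
y = np.maximum(vnn_filter(x, C, h=[0.5, 0.3, 0.2]), 0.0)  # filter + ReLU
print(y)
```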
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the neural tangent kernel (NTK) has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
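The kernel regression predictor referred to here has the standard closed form f(x) = k(x)^T (K + lambda*I)^{-1} y, where K is the NTK Gram matrix on the training set. The sketch below uses an RBF kernel as a stand-in, since the actual NTK of NNs-Hp is what the paper derives:

```python
import numpy as np

def kernel(X1, X2, gamma=1.0):
    """Stand-in RBF kernel; the paper derives the actual NTK for NNs-Hp."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 2))              # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
X_test = rng.normal(size=(5, 2))

lam = 1e-3                                # ridge term for stability
alpha = np.linalg.solve(kernel(X, X) + lam * np.eye(30), y)
y_pred = kernel(X_test, X) @ alpha        # kernel regression predictor
print(y_pred)
```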
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs).
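The flavour of such a mapping can be seen in the non-leaky limit: a linear integrate-and-fire neuron with threshold v_th fires at a rate proportional to max(I, 0), i.e. a scaled ReLU of its input current. A toy simulation with illustrative parameters (not the paper's exact mapping):

```python
import numpy as np

def lif_rate(I, v_th=1.0, T=10000, dt=1e-3):
    """Simulate a linear (non-leaky) integrate-and-fire neuron and return
    its empirical firing rate under a constant input current I."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += I * dt                       # integrate the input current
        if v >= v_th:
            spikes += 1
            v = 0.0                       # reset after each spike
    return spikes / (T * dt)

for I in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    # Rate ~= max(I, 0) / v_th: the ReLU correspondence the paper formalises.
    print(I, lif_rate(I), max(I, 0.0))
```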
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
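KENN's key component is a residual knowledge-enhancement layer: for a clause (a disjunction of literals), it adds a boost to the atoms' pre-activations that pushes the network toward satisfying the clause, concentrated on the literal that is currently easiest to satisfy. A heavily simplified single-clause sketch along those lines; the function names are not KENN's API.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def clause_enhancement(z, clause_signs, w_c):
    """One KENN-style boost for a clause (a disjunction of literals).
    z: pre-activations of the atoms; clause_signs: +1 for a positive
    literal, -1 for a negated one; w_c: learned clause weight >= 0."""
    z_lit = clause_signs * z                 # literal pre-activations
    delta = w_c * softmax(z_lit)             # t-conorm boost per literal
    return z + clause_signs * delta          # push atoms toward the clause

# Clause: smoker(x) -> cancer(x), i.e.  not smoker(x) OR cancer(x)
z = np.array([2.0, -1.0])                    # pre-activations: smoker, cancer
signs = np.array([-1.0, 1.0])
print(clause_enhancement(z, signs, w_c=1.5)) # cancer gets most of the boost
```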
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Theory of Graph Neural Networks: Representation and Learning [44.02161831977037]
Graph Neural Networks (GNNs) have become a popular learning model for prediction tasks on nodes, graphs and configurations of points.
This article summarizes a selection of the emerging theoretical results on approximation and learning properties of widely used message passing GNNs and higher-order GNNs.
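The message-passing GNNs covered by such results share one update rule, stated here in its standard form (not specific to this article):

```latex
h_v^{(t+1)} = \mathrm{UPD}\left( h_v^{(t)},\ \mathrm{AGG}\left( \{\!\!\{\, h_u^{(t)} : u \in N(v) \,\}\!\!\} \right) \right)
```

Here AGG is a permutation-invariant aggregator (e.g. sum, mean, or max) over the multiset of neighbour states, and the approximation and learning guarantees summarized in the article largely turn on how expressive this aggregation is.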
arXiv Detail & Related papers (2022-04-16T02:08:50Z)
- Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each associated with an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
arXiv Detail & Related papers (2020-10-22T09:10:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.