A Gaze into the Internal Logic of Graph Neural Networks, with Logic
- URL: http://arxiv.org/abs/2208.03093v1
- Date: Fri, 5 Aug 2022 10:49:21 GMT
- Title: A Gaze into the Internal Logic of Graph Neural Networks, with Logic
- Authors: Paul Tarau (University of North Texas)
- Abstract summary: Graph Neural Networks share several key inference mechanisms with Logic Programming.
We show how to model the information flows involved in learning to infer properties of new nodes from a graph's link structure and the information content of its nodes.
Our approach consists in emulating, with the help of a Prolog program, the key information propagation steps of a Graph Neural Network's training and inference stages.
As a practical outcome, we obtain a logic program that, when seen as a machine learning algorithm, performs close to the state of the art on the node property prediction benchmark.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks share with Logic Programming several key relational
inference mechanisms. The datasets on which they are trained and evaluated can
be seen as database facts containing ground terms. This makes it possible to model
their inference mechanisms with equivalent logic programs, to better understand
not just how they propagate information between the entities involved in the
machine learning process, but also to infer limits on what can be learned from a
given dataset and how well that might generalize to unseen test data.
This leads us to the key idea of this paper: modeling, with the help of a
logic program, the information flows involved in learning to infer properties of
new nodes from the link structure of a graph and the information content of its
nodes, given their known connections to nodes with possibly similar properties.
The problem is known as graph node property prediction, and our approach
consists in emulating, with the help of a Prolog program, the key information
propagation steps of a Graph Neural Network's training and inference stages.
We test our approach on the ogbn-arxiv node property inference benchmark.
To infer class labels for nodes representing papers in a citation network, we
distill the dependency trees of the text associated to each node into directed
acyclic graphs that we encode as ground Prolog terms. Together with the set of
their references to other papers, they become facts in a database on which we
reason with the help of a Prolog program that mimics the information propagation in
graph neural networks predicting node properties. In the process, we invent
ground term similarity relations that help infer labels in the test set by
propagating node properties from similar nodes in the training set and we
evaluate their effectiveness in comparison with that of the graph's link
structure. Finally, we implement explanation generators that unveil performance
upper bounds inherent to the dataset.
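As a rough illustration of a ground term similarity relation of the kind described above (a sketch only, not the paper's actual relation or its Prolog encoding), one can approximate each node's dependency DAG as a set of head-dependent edges and compare two nodes by edge-set overlap:

```python
# Hypothetical sketch: a similarity relation between nodes whose text has been
# distilled into dependency DAGs. Each DAG, given as {head: [dependents]}, is
# flattened to an edge set, and similarity is plain Jaccard overlap.

def term_edges(dag):
    """Flatten a dependency DAG into a frozen set of (head, dependent) edges."""
    return frozenset((h, d) for h, deps in dag.items() for d in deps)

def similarity(dag_a, dag_b):
    """Jaccard similarity between the edge sets of two dependency DAGs."""
    a, b = term_edges(dag_a), term_edges(dag_b)
    if not a and not b:
        return 0.0  # two empty DAGs: treat as dissimilar rather than divide by zero
    return len(a & b) / len(a | b)

# Two toy dependency DAGs standing in for the distilled text of two papers.
doc1 = {"networks": ["neural", "graph"], "graph": ["of"]}
doc2 = {"networks": ["neural", "graph"]}
print(round(similarity(doc1, doc2), 2))  # shares 2 of 3 edges -> 0.67
```

In the paper's setting the DAGs are encoded as ground Prolog terms and the relation is defined by logic clauses; the Jaccard measure here is only one plausible instantiation.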
As a practical outcome, we obtain a logic program that, when seen as a machine
learning algorithm, performs close to the state of the art on the node property
prediction benchmark.
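The core propagation step the abstract describes, inferring a test node's label from labeled nodes it is linked to, can be sketched as follows (a minimal stand-in, not the paper's Prolog program; the citation graph and labels are toy data):

```python
# Minimal sketch of one round of label propagation over a citation graph:
# a test paper inherits the majority label among the training papers it cites.
from collections import Counter

def predict_label(node, citations, train_labels):
    """Majority label among the node's cited neighbors that carry a known label."""
    votes = Counter(train_labels[n] for n in citations.get(node, [])
                    if n in train_labels)
    if not votes:
        return None  # no labeled neighbors: this round yields no prediction
    return votes.most_common(1)[0][0]

# Toy citation graph: paper -> papers it cites, with two labeled classes.
citations = {"p4": ["p1", "p2", "p3"]}
train_labels = {"p1": "cs.LG", "p2": "cs.LG", "p3": "cs.CL"}
print(predict_label("p4", citations, train_labels))  # cs.LG wins 2-1
```

A GNN would iterate such neighborhood aggregation over several layers with learned weights; the logic-programming emulation replaces the learned aggregation with explicit clauses over the same link structure.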
Related papers
- Boolean Product Graph Neural Networks [8.392545965667288]
Graph Neural Networks (GNNs) have recently achieved significant success, with a key operation involving the aggregation of information from neighboring nodes.
This paper proposes a novel Boolean product-based graph residual connection in GNNs to link the latent graph and the original graph.
We validate the proposed method in benchmark datasets and demonstrate its ability to enhance the performance and robustness of GNNs.
arXiv Detail & Related papers (2024-09-21T03:31:33Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Temporal Graph Network Embedding with Causal Anonymous Walks Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
For evaluation, we provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model in the real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- COLOGNE: Coordinated Local Graph Neighborhood Sampling [1.6498361958317633]
Replacing discrete unordered objects such as graph nodes by real-valued vectors is at the heart of many approaches to learning from graph data.
We address the problem of learning discrete node embeddings such that the coordinates of the node vector representations are graph nodes.
This opens the door to designing interpretable machine learning algorithms for graphs as all attributes originally present in the nodes are preserved.
arXiv Detail & Related papers (2021-02-09T11:39:06Z)
- A Unifying Generative Model for Graph Learning Algorithms: Label Propagation, Graph Convolutions, and Combinations [39.8498896531672]
Semi-supervised learning on graphs is a widely applicable problem in network science and machine learning.
We develop a Markov random field model for the data generation process of node attributes.
We show that label propagation, a linearized graph convolutional network, and their combination can all be derived as conditional expectations.
arXiv Detail & Related papers (2021-01-19T17:07:08Z)
- Node Similarity Preserving Graph Convolutional Networks [51.520749924844054]
Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets including three assortative and four disassortative graphs.
arXiv Detail & Related papers (2020-11-19T04:18:01Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
- Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.