Incorporating Symbolic Domain Knowledge into Graph Neural Networks
- URL: http://arxiv.org/abs/2010.13900v2
- Date: Fri, 19 Feb 2021 15:55:52 GMT
- Title: Incorporating Symbolic Domain Knowledge into Graph Neural Networks
- Authors: Tirtharaj Dash, Ashwin Srinivasan, Lovekesh Vig
- Abstract summary: Deep neural networks developed specifically for graph-structured data (Graph-based Neural Networks, or GNNs) handle such data well, but less has been done to investigate the inclusion of domain knowledge. We investigate this aspect of GNNs empirically by employing an operation we term "vertex-enrichment" and denote the corresponding GNNs as "VEGNNs".
- Score: 18.798760815214877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our interest is in scientific problems with the following characteristics:
(1) Data are naturally represented as graphs; (2) The amount of data available
is typically small; and (3) There is significant domain-knowledge, usually
expressed in some symbolic form. These kinds of problems have been addressed
effectively in the past by Inductive Logic Programming (ILP), by virtue of two
important characteristics: (a) The use of a representation language that easily
captures the relations encoded in graph-structured data; and (b) The inclusion
of prior information, encoded as domain-specific relations, that can alleviate
problems of data scarcity and construct new relations. Recent advances have
seen the emergence of deep neural networks specifically developed for
graph-structured data (Graph-based Neural Networks, or GNNs). While GNNs have
been shown to be able to handle graph-structured data, less has been done to
investigate the inclusion of domain-knowledge. Here we investigate this aspect
of GNNs empirically by employing an operation we term "vertex-enrichment" and
denote the corresponding GNNs as "VEGNNs". Using over 70 real-world datasets
and substantial amounts of symbolic domain-knowledge, we examine the result of
vertex-enrichment across 5 different variants of GNNs. Our results provide
support for the following: (a) Inclusion of domain-knowledge by
vertex-enrichment can significantly improve the performance of a GNN. That is,
the performance of VEGNNs is significantly better than that of GNNs across all GNN
variants; (b) The inclusion of domain-specific relations constructed using ILP
improves the performance of VEGNNs, across all GNN variants. Taken together,
the results provide evidence that it is possible to incorporate symbolic domain
knowledge into a GNN, and that ILP can play an important role in providing
high-level relationships that are not easily discovered by a GNN.
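The abstract describes vertex-enrichment only at a high level: symbolic domain relations are folded into the vertex representations that a standard GNN then consumes. As a minimal sketch of that idea, the Python fragment below appends one binary indicator feature per domain relation to each vertex's feature vector. The function name, the relation names, and the indicator encoding are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def vertex_enrich(X, domain_relations):
        # X: (n, d) array of original vertex features.
        # domain_relations: dict mapping a relation name to the set of
        # vertex indices for which that relation holds (e.g., relations
        # supplied as, or constructed from, ILP background knowledge).
        n = X.shape[0]
        indicators = np.zeros((n, len(domain_relations)))
        for j, name in enumerate(sorted(domain_relations)):
            for v in domain_relations[name]:
                indicators[v, j] = 1.0
        # Enriched features: original features plus relation indicators,
        # ready to feed any standard GNN variant.
        return np.concatenate([X, indicators], axis=1)

    # Hypothetical usage on a 4-vertex graph with 3 base features per vertex.
    # Both relation names below are assumptions made for illustration.
    X = np.random.rand(4, 3)
    relations = {
        "aromatic_ring_member": {0, 1, 2},
        "hydrogen_donor": {3},
    }
    print(vertex_enrich(X, relations).shape)  # (4, 5)

Under this reading, the ILP-constructed relations studied in the paper would simply contribute additional indicator columns, consistent with the abstract's finding that such relations further improve VEGNN performance.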
Related papers
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results conducted on several graph benchmark datasets verify DGNN's superiority in node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push one step forward on the ensemble learning of GNNs, with improved accuracy, robustness, and resistance to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Graph Neural Networks for Graphs with Heterophily: A Survey [98.45621222357397]
We provide a comprehensive review of graph neural networks (GNNs) for heterophilic graphs.
Specifically, we propose a systematic taxonomy that essentially governs existing heterophilic GNN models.
We discuss the correlation between graph heterophily and various graph research domains, aiming to facilitate the development of more effective GNNs.
arXiv Detail & Related papers (2022-02-14T23:07:47Z)
- Identity-aware Graph Neural Networks [63.6952975763946]
We develop a class of message-passing Graph Neural Networks, called Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
arXiv Detail & Related papers (2021-01-25T18:59:01Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
- Generalization and Representational Limits of Graph Neural Networks [46.20253808402385]
We prove that several important graph properties cannot be computed by graph neural networks (GNNs) that rely entirely on local information.
We provide the first data dependent generalization bounds for message passing GNNs.
Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
arXiv Detail & Related papers (2020-02-14T18:10:14Z)