Improving Expressivity of GNNs with Subgraph-specific Factor Embedded
Normalization
- URL: http://arxiv.org/abs/2305.19903v3
- Date: Sun, 11 Jun 2023 16:19:56 GMT
- Title: Improving Expressivity of GNNs with Subgraph-specific Factor Embedded
Normalization
- Authors: Kaixuan Chen and Shunyu Liu and Tongtian Zhu and Tongya Zheng and
Haofei Zhang and Zunlei Feng and Jingwen Ye and Mingli Song
- Abstract summary: Graph Neural Networks (GNNs) have emerged as a powerful category of learning architecture for handling graph-structured data.
We propose a dedicated plug-and-play normalization scheme, termed SUbgraph-sPEcific FactoR Embedded Normalization (SuperNorm).
- Score: 30.86182962089487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as a powerful category of learning architectures for handling graph-structured data. However, existing GNNs typically ignore crucial structural characteristics in node-induced subgraphs, which limits their expressiveness for various downstream tasks. In this paper, we strive to strengthen the representational capabilities of GNNs by devising a dedicated plug-and-play normalization scheme, termed SUbgraph-sPEcific FactoR Embedded Normalization (SuperNorm), that explicitly considers the intra-connection information within each node-induced subgraph. To this end, we embed the subgraph-specific factor at the beginning and the end of the standard BatchNorm, and incorporate graph instance-specific statistics for an improved distinguishing capability. In the meantime, we provide theoretical analysis showing that, with the elaborated SuperNorm, an arbitrary GNN is at least as powerful as the 1-WL test in distinguishing non-isomorphic graphs. Furthermore, the proposed SuperNorm scheme is also demonstrated to alleviate the over-smoothing phenomenon. Experimental results on graph, node, and link property prediction over eight popular datasets demonstrate the effectiveness of the proposed method. The code is available at https://github.com/chenchkx/SuperNorm.
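The abstract describes the core recipe only at a high level: scale node features by a subgraph-specific factor before and after a standard BatchNorm. Below is a minimal, hypothetical PyTorch sketch of that pattern. The factor used here (the edge count of each node's 1-hop induced subgraph, normalized) is an illustrative assumption rather than the paper's exact formulation, and the graph instance-specific statistics are omitted; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn

class SuperNormSketch(nn.Module):
    """Hypothetical sketch: a subgraph-specific factor wrapped around BatchNorm."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn = nn.BatchNorm1d(num_features)  # the standard BatchNorm core

    def forward(self, x: torch.Tensor, factor: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, num_features); factor: (num_nodes, 1) subgraph factor.
        x = x * factor   # embed the factor at the *beginning* of BatchNorm
        x = self.bn(x)
        x = x * factor   # re-embed the factor at the *end* of BatchNorm
        return x

def ego_subgraph_factor(adj: torch.Tensor) -> torch.Tensor:
    # Illustrative assumption: the factor is the edge count of each node's
    # 1-hop induced subgraph (degree + triangles), normalized to (0, 1].
    deg = adj.sum(dim=1)
    triangles = ((adj @ adj) * adj).sum(dim=1) / 2.0  # edges among neighbors
    intra = deg + triangles
    return (intra / intra.max().clamp(min=1)).clamp(min=1e-6).unsqueeze(1)
```

The point of the sketch is only the factor-before-and-after-BatchNorm pattern named in the abstract; the paper derives its factor and additional statistics differently.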
Related papers
- SF-GNN: Self Filter for Message Lossless Propagation in Deep Graph Neural Network [38.669815079957566]
Graph Neural Networks (GNNs), whose main idea is to encode graph structure information through propagation and aggregation, have developed rapidly.
They achieve excellent performance in representation learning on multiple types of graphs, such as homogeneous graphs, heterogeneous graphs, and more complex graphs like knowledge graphs.
We propose a new perspective on the phenomenon of performance degradation in deep GNNs.
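For reference, here is a minimal sketch of the propagation-and-aggregation step the summary refers to; this is the generic GNN message-passing pattern, not SF-GNN's specific self-filter mechanism:

```python
import numpy as np

def propagate_aggregate(adj: np.ndarray, x: np.ndarray) -> np.ndarray:
    # adj: (n, n) 0/1 adjacency matrix; x: (n, d) node features.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # guard isolated nodes
    neighbor_mean = (adj @ x) / deg                   # aggregation step
    return 0.5 * x + 0.5 * neighbor_mean              # combine with self-features
```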
arXiv Detail & Related papers (2024-07-03T02:40:39Z)
- Iterative Graph Neural Network Enhancement via Frequent Subgraph Mining of Explanations [0.0]
We formulate an XAI-based model improvement approach for Graph Neural Networks (GNNs) on node classification, called Explanation Enhanced Graph Learning (EEGL).
The goal is to improve the predictive performance of GNNs using explanations.
EEGL is an iterative self-improving algorithm, which starts with a learned "vanilla" GNN, and repeatedly uses frequent subgraph mining to find relevant patterns in explanation subgraphs.
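A high-level sketch of that iterative loop might look as follows; the four callables (a GNN trainer, an explainer, a frequent-subgraph miner, and a feature annotator) are hypothetical placeholders for the components the paper actually uses:

```python
def eegl_loop(graph, features, labels, train_gnn, explain_nodes,
              mine_frequent_subgraphs, annotate_features, iterations=5):
    # Start from a learned "vanilla" GNN.
    model = train_gnn(graph, features, labels)
    for _ in range(iterations):
        # Extract explanation subgraphs for the current model's predictions.
        expl_subgraphs = explain_nodes(model, graph, features)
        # Mine frequent patterns from the explanation subgraphs.
        patterns = mine_frequent_subgraphs(expl_subgraphs)
        # Annotate node features with the mined patterns and retrain.
        features = annotate_features(features, graph, patterns)
        model = train_gnn(graph, features, labels)
    return model
```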
arXiv Detail & Related papers (2024-03-12T17:41:27Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
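The summary gives little architectural detail; the following speculative sketch only illustrates the decoupling pattern it names, i.e., separate attribute and structure branches tied together by a consistency term. DGNN's actual modules and objective differ.

```python
import torch
import torch.nn as nn

class DecoupledSketch(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.attr_enc = nn.Linear(in_dim, hid_dim)    # attribute branch
        self.struct_enc = nn.Linear(in_dim, hid_dim)  # structure branch

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor):
        h_attr = torch.relu(self.attr_enc(x))                 # topology-free view
        h_struct = torch.relu(adj_norm @ self.struct_enc(x))  # topology-aware view
        # A consistency term encouraging the two views to agree.
        consistency = ((h_attr - h_struct) ** 2).mean()
        return torch.cat([h_attr, h_struct], dim=1), consistency
```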
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Towards Better Generalization with Flexible Representation of Multi-Module Graph Neural Networks [0.27195102129094995]
We use a random graph generator to investigate how the graph size and structural properties affect the predictive performance of GNNs.
We present specific evidence that the average node degree is a key feature in determining whether GNNs can generalize to unseen graphs.
We propose a multi-module GNN framework that allows the network to adapt flexibly to new graphs by generalizing a single canonical nonlinear transformation over aggregated inputs.
arXiv Detail & Related papers (2022-09-14T12:13:59Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments are conducted on acknowledged benchmark datasets, and promising results demonstrate the outstanding performance of the proposed AKGNN.
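As a hedged illustration of what "adapting to the optimal graph kernel" can mean, the sketch below learns a scalar that mixes an all-pass filter (identity) with a low-pass filter (normalized adjacency); AKGNN's actual parameterization differs:

```python
import torch
import torch.nn as nn

class AdaptiveKernelSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1))  # learnable mixing logit

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)  # mixing weight in (0, 1)
        # a -> 1: all-pass (keep features); a -> 0: low-pass (smooth them).
        return a * x + (1 - a) * (adj_norm @ x)
```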
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can significantly improve the performance of GNNs, and the performance gain becomes larger for noisier datasets.
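A hedged sketch of that edge-pruning pattern: a small parameterized network scores each edge from its endpoint features, the scores form a soft keep-mask, and the mask's sum serves as a penalty on the number of kept edges. The paper's exact scorer and sparsity regularizer differ in detail.

```python
import torch
import torch.nn as nn

class EdgeDenoiserSketch(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # Small parameterized network scoring each edge from its endpoints.
        self.scorer = nn.Sequential(nn.Linear(2 * feat_dim, 16), nn.ReLU(),
                                    nn.Linear(16, 1))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor):
        # x: (n, d) node features; edge_index: (2, e) source/target indices.
        src, dst = edge_index
        keep_prob = torch.sigmoid(
            self.scorer(torch.cat([x[src], x[dst]], dim=1))).squeeze(1)
        # Penalizing the mask's sum discourages keeping task-irrelevant edges.
        sparsity_penalty = keep_prob.sum()
        return keep_prob, sparsity_penalty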
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structure-related features for graph representation learning.
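Shortest-path distance is a canonical instance of such distance features. A simple sketch: for a target node set, attach each node's shortest-path distance to the closest target as an extra feature (DE in the paper is a broader family, also covering, e.g., random-walk-based measures):

```python
import networkx as nx

def spd_features(G: nx.Graph, target_set, unreachable: int = -1) -> dict:
    # For each node, the shortest-path distance to the closest target node.
    feats = {v: unreachable for v in G.nodes}
    for t in target_set:
        for v, d in nx.single_source_shortest_path_length(G, t).items():
            if feats[v] == unreachable or d < feats[v]:
                feats[v] = d
    return feats
```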
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
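The simplest instance of substructure encoding is counting triangles per node and appending the counts to node features before message passing; GSN supports general substructures (and edge-level counts), but a triangle-count sketch conveys the idea:

```python
import networkx as nx
import numpy as np

def append_triangle_counts(G: nx.Graph, x: np.ndarray) -> np.ndarray:
    # Assumes rows of x follow the sorted node order of G.
    tri = nx.triangles(G)  # node -> number of incident triangles
    counts = np.array([tri[v] for v in sorted(G.nodes)], dtype=x.dtype)
    return np.concatenate([x, counts[:, None]], axis=1)
```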
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
- Adaptive Universal Generalized PageRank Graph Neural Network [36.850433364139924]
Graph neural networks (GNNs) are designed to exploit both node features and graph topology as sources of evidence, but existing designs do not optimally trade off their utility.
We introduce a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights.
GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.
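The GPR propagation itself is compact: hidden states from successive propagation steps are combined as H = sum_k gamma_k * A_norm^k H_0, with the gamma_k learned jointly with the feature transform. A sketch, initializing the weights like Personalized PageRank as one common choice:

```python
import torch
import torch.nn as nn

class GPRPropagation(nn.Module):
    def __init__(self, num_steps: int = 10, alpha: float = 0.1):
        super().__init__()
        # Initialize the gamma_k like Personalized PageRank weights.
        gammas = alpha * (1 - alpha) ** torch.arange(num_steps + 1)
        self.gammas = nn.Parameter(gammas)

    def forward(self, h0: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        out = self.gammas[0] * h0
        h = h0
        for k in range(1, len(self.gammas)):
            h = adj_norm @ h                # one propagation step
            out = out + self.gammas[k] * h  # learnable weighted accumulation
        return out
```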
arXiv Detail & Related papers (2020-06-14T19:27:39Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost the ability of GNNs to preserve graph structures.
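A minimal sketch of the plug-in pattern, assuming the structural features are the top-d eigenvectors of the adjacency matrix concatenated with the raw node features (the paper's exact normalization and dimensionality choices may differ):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def eigen_plugin(adj_sparse, x: np.ndarray, d: int = 16) -> np.ndarray:
    # adj_sparse: symmetric scipy sparse adjacency matrix (requires d < num_nodes).
    _, vecs = eigsh(adj_sparse.asfptype(), k=d, which="LA")
    return np.concatenate([x, vecs], axis=1)  # structural + attribute features
```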
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.