Distribution Preserving Graph Representation Learning
- URL: http://arxiv.org/abs/2202.13428v1
- Date: Sun, 27 Feb 2022 19:16:26 GMT
- Title: Distribution Preserving Graph Representation Learning
- Authors: Chengsheng Mao and Yuan Luo
- Abstract summary: Graph neural networks (GNNs) are effective for modeling graphs and producing distributed representations of nodes and entire graphs.
We propose Distribution Preserving GNN (DP-GNN) - a GNN framework that can improve the generalizability of expressive GNN models.
We evaluate the proposed DP-GNN framework on multiple benchmark datasets for graph classification tasks.
- Score: 11.340722297341788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are effective for modeling graphs and producing distributed representations of nodes and entire graphs. Recently, research on the expressive power of GNNs has attracted growing attention. A highly expressive GNN can generate discriminative graph representations. However, in the end-to-end training process for a given graph learning task, a highly expressive GNN risks generating graph representations that overfit the training data for the target task while losing information important for model generalization. In this paper, we propose Distribution Preserving GNN (DP-GNN), a GNN framework that improves the generalizability of expressive GNN models by preserving several kinds of distribution information in graph representations and node representations. Beyond generalizability, with an expressive GNN backbone, DP-GNN also attains high expressive power. We evaluate the proposed DP-GNN framework on multiple benchmark datasets for graph classification tasks. The experimental results demonstrate that our model achieves state-of-the-art performance.
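The abstract does not spell out which distribution statistics DP-GNN preserves, so the following is a minimal, hypothetical sketch of the general idea: a classification loss combined with a moment-matching penalty that keeps node and graph representations close to reference statistics. All names and the choice of penalty are illustrative, not taken from the paper.

```python
# Hypothetical sketch of a DP-GNN-style objective: a task loss plus a
# regularizer that keeps batch statistics (mean/std of node and graph
# representations) close to reference statistics. The actual distribution-
# preserving terms in the paper may differ; names here are illustrative.
import torch
import torch.nn.functional as F

def moment_penalty(z: torch.Tensor, ref_mean: torch.Tensor, ref_std: torch.Tensor) -> torch.Tensor:
    """Penalize drift of the first/second moments of representations z."""
    return F.mse_loss(z.mean(dim=0), ref_mean) + F.mse_loss(z.std(dim=0), ref_std)

def dp_gnn_loss(logits, labels, node_z, graph_z, ref_stats, lam=0.1):
    task = F.cross_entropy(logits, labels)
    dp = moment_penalty(node_z, *ref_stats["node"]) + moment_penalty(graph_z, *ref_stats["graph"])
    return task + lam * dp  # lam trades off task fit vs. distribution preservation
```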
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs on a logarithmic scale and increase linearly with the spectral continuity constants of the filter functions.
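Read literally, the stated dependencies admit, as one purely illustrative form (an assumption, not the paper's actual bound), a gap that grows linearly in the filters' spectral continuity constant and shrinks with the logarithm of the graph size:

```latex
% Illustrative only: N = number of sampled nodes, C_f = spectral continuity
% constant of the filter functions; c_1, c_2 are assumed constants.
\mathrm{GenGap}(N) \;\lesssim\; \frac{c_1 + c_2 \, C_f}{\log N}
```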
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
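As a hedged sketch of the two-stage idea (not the paper's actual pipeline): stage 1 produces graphs on which the target GNN's accuracy is known, and stage 2 trains a regressor from the GNN's output statistics to that accuracy; `output_statistics` and the evaluator architecture below are assumptions.

```python
# Hypothetical two-stage sketch in the spirit of GNNEvaluator: (1) build a
# set of evaluation graphs with known accuracy of the target GNN, then
# (2) train a regressor mapping the GNN's output statistics on a graph to an
# accuracy estimate, and apply it to unseen, unlabeled graphs.
import torch
import torch.nn as nn

def output_statistics(logits: torch.Tensor) -> torch.Tensor:
    """Summarize the target GNN's node predictions as a confidence histogram."""
    conf = logits.softmax(dim=-1).max(dim=-1).values        # per-node confidence
    return torch.histc(conf, bins=10, min=0.0, max=1.0) / conf.numel()

evaluator = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

def train_step(opt, stats: torch.Tensor, true_acc: torch.Tensor) -> float:
    """Stage 2: regress observed accuracy from output statistics."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(evaluator(stats).squeeze(-1), true_acc)
    loss.backward()
    opt.step()
    return loss.item()
```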
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- Rethinking the Power of Graph Canonization in Graph Representation Learning with Stability [29.026197379375557]
This paper proposes to maximize the expressivity of GNNs via graph canonization, and then studies the power of such GNNs from the perspective of model stability.
A stable GNN maps similar graphs to close graph representations in the vector space, and stability is critical for generalizing GNN performance to unseen graphs.
A comprehensive set of experiments demonstrates the effectiveness of the proposed method.
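The stability property described above can be written, as a sketch, as a Lipschitz-type condition on the GNN map $f$ with respect to some graph distance $d$ (the paper's formal definition may differ):

```latex
% Sketch: f maps graphs to vector representations, d is a graph distance,
% and C is a stability constant.
\| f(G_1) - f(G_2) \| \;\le\; C \cdot d(G_1, G_2) \qquad \text{for all } G_1, G_2
```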
arXiv Detail & Related papers (2023-09-01T21:23:04Z)
- GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks [25.94529851210956]
We propose a model-agnostic model-level explanation method for different Graph Neural Networks (GNNs) that follow the message passing scheme, GNNInterpreter.
GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect.
Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features.
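A minimal sketch of learning such a generative edge distribution, assuming a continuous relaxation of Bernoulli edge variables and gradient ascent on the target-class score; `gnn_class_score` is a placeholder, and GNNInterpreter's actual parameterization is richer:

```python
# Hypothetical sketch of model-level explanation via a learned edge
# distribution: optimize per-edge Bernoulli parameters so that the (relaxed)
# adjacency sampled from the distribution maximizes the GNN's target-class score.
import torch

n, target_class = 8, 0
edge_logits = torch.zeros(n, n, requires_grad=True)    # learned edge parameters

def gnn_class_score(adj: torch.Tensor) -> torch.Tensor:
    """Stand-in for a pretrained GNN's logit for the target class."""
    return adj.sum()  # placeholder so the sketch runs end-to-end

opt = torch.optim.Adam([edge_logits], lr=0.1)
for _ in range(100):
    soft_adj = torch.sigmoid(edge_logits)               # relaxed adjacency in [0, 1]
    loss = -gnn_class_score(soft_adj)                   # maximize the class score
    opt.zero_grad(); loss.backward(); opt.step()
explanation_adj = (torch.sigmoid(edge_logits) > 0.5).float()  # most likely graph
```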
arXiv Detail & Related papers (2022-09-15T07:45:35Z)
- Towards Better Generalization with Flexible Representation of Multi-Module Graph Neural Networks [0.27195102129094995]
We use a random graph generator to investigate how the graph size and structural properties affect the predictive performance of GNNs.
We present specific evidence that the average node degree is a key feature in determining whether GNNs can generalize to unseen graphs.
We propose a multi-module GNN framework that allows the network to adapt flexibly to new graphs by generalizing a single canonical nonlinear transformation over aggregated inputs.
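A hedged sketch of the kind of controlled study described above, assuming Erdős–Rényi graphs where the edge probability is set from the desired average degree; `evaluate_model` and `trained_gnn` are hypothetical placeholders:

```python
# Generate random graphs with a controlled average degree and probe how a
# trained model's error changes on degrees unseen during training.
import networkx as nx

def graphs_with_avg_degree(num_graphs, n_nodes, avg_degree, seed=0):
    p = avg_degree / (n_nodes - 1)           # E[degree] = p * (n - 1) for G(n, p)
    return [nx.gnp_random_graph(n_nodes, p, seed=seed + i) for i in range(num_graphs)]

for avg_deg in [2, 4, 8, 16]:                # sweep the structural property
    test_set = graphs_with_avg_degree(100, 50, avg_deg)
    # err = evaluate_model(trained_gnn, test_set)   # hypothetical evaluation hook
    # print(avg_deg, err)
```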
arXiv Detail & Related papers (2022-09-14T12:13:59Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
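One minimal way to picture curriculum-supervised pre-training (an assumption about the mechanism, not MentorGNN's actual architecture) is a mentor that scores each source graph and reweights the per-graph pre-training losses:

```python
# Hypothetical sketch: a mentor assigns each source graph a score, and the
# pre-training loss is the curriculum-weighted sum of per-graph losses.
import torch

def curriculum_pretrain_loss(per_graph_losses: torch.Tensor,
                             mentor_scores: torch.Tensor) -> torch.Tensor:
    weights = torch.softmax(mentor_scores, dim=0)   # curriculum over source graphs
    return (weights * per_graph_losses).sum()
```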
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs can generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman (WL) tests, are inefficient because they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
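As a concrete instance of distance features, here is a sketch computing each node's shortest-path distance to a target node set via multi-source BFS; DE in the paper also covers other distance measures (e.g., random-walk based), and the distance cap below is an assumption:

```python
# Sketch of a distance-encoding feature: for each node, its shortest-path
# distance to a target node set, usable as an extra input feature column.
from collections import deque

def spd_to_set(adj_list: dict, targets: set, num_nodes: int, cap: int = 10) -> list:
    """Multi-source BFS: shortest-path distance from each node to `targets`."""
    dist = [cap] * num_nodes                  # unreachable nodes keep the cap
    q = deque((t, 0) for t in targets)
    for t in targets:
        dist[t] = 0
    while q:
        u, d = q.popleft()
        for v in adj_list.get(u, []):
            if d + 1 < dist[v]:
                dist[v] = d + 1
                q.append((v, d + 1))
    return dist

# Example: path graph 0-1-2-3, distances to {0}
print(spd_to_set({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, {0}, 4))  # [0, 1, 2, 3]
```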
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
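A hedged sketch of generative pre-training objectives in this spirit: mask part of a graph, reconstruct node attributes, and score held-out edges against negative samples. The split into attribute and edge generation follows the GPT-GNN idea, but the exact losses below are illustrative:

```python
# Hypothetical generative pre-training losses: attribute reconstruction plus
# edge prediction against negative samples, on node embeddings z.
import torch
import torch.nn.functional as F

def attribute_loss(pred_attrs, true_attrs):
    """Regenerate masked node attributes from the GNN's node embeddings."""
    return F.mse_loss(pred_attrs, true_attrs)

def edge_loss(z, pos_pairs, neg_pairs):
    """Score held-out (positive) edges above sampled negative pairs."""
    pos = (z[pos_pairs[:, 0]] * z[pos_pairs[:, 1]]).sum(-1)
    neg = (z[neg_pairs[:, 0]] * z[neg_pairs[:, 1]]).sum(-1)
    return F.binary_cross_entropy_with_logits(
        torch.cat([pos, neg]),
        torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))

def pretrain_loss(pred_attrs, true_attrs, z, pos_pairs, neg_pairs, alpha=1.0):
    return attribute_loss(pred_attrs, true_attrs) + alpha * edge_loss(z, pos_pairs, neg_pairs)
```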
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
However, GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model level.
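A compact sketch of the general strategy, assuming a reinforcement-learning generator rewarded by the GNN's class probability (XGNN's actual action space and reward shaping are more elaborate; `gnn_class_prob` is a placeholder):

```python
# Hypothetical sketch: a policy proposes graph-building actions (adding an
# edge) and is updated with REINFORCE, rewarded by the predicted probability
# of the class being explained.
import torch

n = 6
policy_logits = torch.zeros(n * n, requires_grad=True)   # action = add edge (i, j)
opt = torch.optim.Adam([policy_logits], lr=0.05)

def gnn_class_prob(adj):                                  # placeholder reward model
    return torch.sigmoid(adj.sum() - 4.0)

adj = torch.zeros(n, n)
for step in range(10):                                    # grow the explanation graph
    dist = torch.distributions.Categorical(logits=policy_logits)
    a = dist.sample()
    adj[a // n, a % n] = 1.0                              # apply the chosen action
    reward = gnn_class_prob(adj).detach()
    loss = -dist.log_prob(a) * reward                     # REINFORCE update
    opt.zero_grad(); loss.backward(); opt.step()
```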
arXiv Detail & Related papers (2020-06-03T23:52:43Z)