Customized Graph Neural Networks
- URL: http://arxiv.org/abs/2005.12386v2
- Date: Tue, 14 Dec 2021 06:24:21 GMT
- Title: Customized Graph Neural Networks
- Authors: Yiqi Wang, Yao Ma, Wei Jin, Chaozhuo Li, Charu Aggarwal, Jiliang Tang
- Abstract summary: Graph Neural Networks (GNNs) have greatly advanced the task of graph classification.
We propose a novel customized graph neural network framework, i.e., Customized-GNN.
The proposed framework is very general and can be applied to numerous existing graph neural network models.
- Score: 38.30640892828196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Graph Neural Networks (GNNs) have greatly advanced the task of
graph classification. Typically, we first build a unified GNN model with graphs
in a given training set and then use this unified model to predict labels of
all the unseen graphs in the test set. However, graphs in the same dataset
often have dramatically distinct structures, which indicates that a unified
model may be sub-optimal given an individual graph. Therefore, in this paper,
we aim to develop customized graph neural networks for graph classification.
Specifically, we propose a novel customized graph neural network framework,
i.e., Customized-GNN. Given a graph sample, Customized-GNN can generate a
sample-specific model for this graph based on its structure. Meanwhile, the
proposed framework is very general and can be applied to numerous existing
graph neural network models. Comprehensive experiments on various graph
classification benchmarks demonstrate the effectiveness of the proposed
framework.
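The abstract does not spell out how a sample-specific model is produced. The sketch below illustrates one plausible reading: a small adaptor (hypernetwork-style) maps simple structural statistics of each input graph to a FiLM-style modulation of a shared GCN used for graph classification. All class and function names are hypothetical, and the design is an assumption rather than the paper's actual architecture.

```python
# Minimal sketch of the "customized" idea: a small adaptor network maps
# per-graph structural statistics to modulation parameters for a shared GCN.
# This illustrates the general concept only, not the authors' code; all names
# (CustomizedGCN, structure_descriptor, ...) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense adjacency matrix with self-loops."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


def structure_descriptor(adj: torch.Tensor) -> torch.Tensor:
    """Cheap per-graph structural statistics (node count, mean degree, density)."""
    n = adj.size(0)
    degrees = adj.sum(dim=1)
    density = adj.sum() / (n * (n - 1) + 1e-8)
    return torch.tensor([float(n), degrees.mean().item(), density.item()])


class CustomizedGCN(nn.Module):
    """Shared GCN whose hidden layer is modulated per graph (FiLM-style)."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.gc1 = nn.Linear(in_dim, hid_dim)
        self.gc2 = nn.Linear(hid_dim, hid_dim)
        self.adaptor = nn.Sequential(            # maps structure stats ->
            nn.Linear(3, 16), nn.ReLU(),          # per-graph scale and shift
            nn.Linear(16, 2 * hid_dim),
        )
        self.readout = nn.Linear(hid_dim, n_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_norm = normalized_adjacency(adj)
        scale, shift = self.adaptor(structure_descriptor(adj)).chunk(2)
        h = F.relu(a_norm @ self.gc1(x))
        h = h * (1.0 + scale) + shift             # sample-specific modulation
        h = F.relu(a_norm @ self.gc2(h))
        return self.readout(h.mean(dim=0))        # mean-pool -> graph logits


if __name__ == "__main__":
    adj = (torch.rand(6, 6) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
    x = torch.randn(6, 8)
    model = CustomizedGCN(in_dim=8, hid_dim=16, n_classes=3)
    print(model(x, adj).shape)  # torch.Size([3])
```

Because the adaptor only consumes graph-level statistics, the same shared weights serve every graph while each graph still receives its own modulation, which is the rough intuition behind a "customized" model per sample.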
Related papers
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of this novel problem setting: learning graph structures that generalize to unseen graphs.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
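The GraphGLOW summary above describes a single graph-shared structure learner coordinated with multiple graph-specific GNNs. The following is a minimal sketch of that division of labor, assuming the structure learner infers a soft adjacency from projected node features; the similarity parameterization and all names are illustrative assumptions, not the paper's architecture.

```python
# Illustrative split between a shared structure learner and a graph-specific
# GNN, loosely following the description in the summary above. The similarity
# parameterization and all names are assumptions, not GraphGLOW's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedStructureLearner(nn.Module):
    """Infers a (dense) adjacency from node features; shared across graphs."""

    def __init__(self, in_dim: int, proj_dim: int = 32):
        super().__init__()
        self.proj = nn.Linear(in_dim, proj_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.proj(x), dim=-1)      # project and normalize
        return torch.sigmoid(z @ z.t())            # soft adjacency in [0, 1]


class GraphSpecificGNN(nn.Module):
    """A small GNN trained per graph/dataset on the learned structure."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize
        h = F.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)                  # node-level logits


if __name__ == "__main__":
    x = torch.randn(10, 8)                         # 10 nodes, 8 features
    learner = SharedStructureLearner(in_dim=8)
    gnn = GraphSpecificGNN(in_dim=8, hid_dim=16, n_classes=4)
    adj = learner(x)                               # adaptive structure
    print(gnn(x, adj).shape)                       # torch.Size([10, 4])
```

Once trained, the structure learner can be reused on a new graph's features to produce an adjacency without fine-tuning, which matches the transfer behavior the summary highlights.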
- Graph Generative Model for Benchmarking Graph Neural Networks [73.11514658000547]
We introduce a novel graph generative model that learns and reproduces the distribution of real-world graphs in a privacy-controlled way.
Our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
arXiv Detail & Related papers (2022-07-10T06:42:02Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Neighborhood Random Walk Graph Sampling for Regularized Bayesian Graph Convolutional Neural Networks [0.6236890292833384]
In this paper, we propose a novel algorithm called Bayesian Graph Convolutional Network using Neighborhood Random Walk Sampling (BGCN-NRWS)
BGCN-NRWS uses a Markov Chain Monte Carlo (MCMC) based graph sampling algorithm that exploits graph structure, reduces overfitting through a variational inference layer, and yields consistently competitive results on semi-supervised node classification compared to the state of the art.
arXiv Detail & Related papers (2021-12-14T20:58:27Z)
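The BGCN-NRWS summary above refers to an MCMC-based graph sampling algorithm built on neighborhood random walks. The sketch below shows only the plain neighborhood random-walk subgraph sampling ingredient, without the Bayesian and variational-inference components; parameter defaults and names are assumptions.

```python
# A plain neighborhood random-walk subgraph sampler, illustrating the kind of
# graph sampling the summary refers to. The Bayesian/variational-inference
# parts of BGCN-NRWS are omitted; names and defaults here are assumptions.
import random
from collections import defaultdict


def neighborhood_random_walk(adj_list, start, walk_length=20, restart_prob=0.15):
    """Walk over neighbors, occasionally restarting at the start node."""
    visited = {start}
    current = start
    for _ in range(walk_length):
        neighbors = adj_list[current]
        if not neighbors or random.random() < restart_prob:
            current = start                      # restart (keeps the walk local)
        else:
            current = random.choice(neighbors)
        visited.add(current)
    return visited


def sample_subgraph(edges, start, walk_length=20):
    """Return the node set and induced edges reached by the walk."""
    adj_list = defaultdict(list)
    for u, v in edges:
        adj_list[u].append(v)
        adj_list[v].append(u)
    nodes = neighborhood_random_walk(adj_list, start, walk_length)
    induced = [(u, v) for u, v in edges if u in nodes and v in nodes]
    return nodes, induced


if __name__ == "__main__":
    random.seed(0)
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (2, 5), (5, 6)]
    nodes, induced = sample_subgraph(edges, start=0, walk_length=10)
    print(sorted(nodes), induced)
```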
- IV-GNN: Interval Valued Data Handling Using Graph Neural Network [12.651341660194534]
A Graph Neural Network (GNN) is a powerful tool for performing standard machine learning on graphs.
This article proposes an Interval-Valued Graph Neural Network, a novel GNN model that, for the first time, relaxes the restriction that the feature space be countable.
Our model is much more general than existing models as any countable set is always a subset of the universal set $\mathbb{R}^n$, which is uncountable.
arXiv Detail & Related papers (2021-11-17T15:37:09Z)
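For the interval-valued features that IV-GNN is said to handle, one natural (but assumed) illustration is to aggregate lower and upper bounds separately over neighborhoods, so that outputs remain valid intervals; this is not the paper's actual layer.

```python
# Illustrative aggregation over interval-valued node features: each feature is
# a [lower, upper] pair, and neighborhood mean-aggregation is applied to the
# lower and upper bounds separately. Names and the update rule are assumptions.
import numpy as np


def interval_mean_aggregate(lower, upper, adj):
    """Mean-aggregate interval features over neighborhoods (with self-loops)."""
    a_hat = adj + np.eye(adj.shape[0])
    norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return norm @ lower, norm @ upper            # nonnegative weights keep order


if __name__ == "__main__":
    adj = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]], dtype=float)
    lower = np.array([[0.0, 1.0], [0.5, 2.0], [1.0, 0.0]])   # per-node lower bounds
    upper = np.array([[1.0, 2.0], [1.5, 3.0], [2.0, 1.0]])   # per-node upper bounds
    lo, up = interval_mean_aggregate(lower, upper, adj)
    assert np.all(lo <= up)                       # aggregation preserves intervals
    print(lo, up, sep="\n")
```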
- Meta-Inductive Node Classification across Graphs [6.0471030308057285]
We propose a novel meta-inductive framework called MI-GNN to customize the inductive model to each graph.
MI-GNN does not directly learn an inductive model; it learns the general knowledge of how to train a model for semi-supervised node classification on new graphs.
Extensive experiments on five real-world graph collections demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-05-14T09:16:28Z)
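MI-GNN is summarized above as learning "how to train a model" for new graphs rather than a single inductive model. One standard way to realize that idea is first-order meta-learning across graphs; the Reptile-style loop below is an illustrative stand-in, not MI-GNN's actual algorithm, and the model, loss, and hyperparameters are assumptions.

```python
# A compact first-order meta-learning loop (Reptile-style) over a collection of
# graphs, as one way to read "learning how to train a model for new graphs".
# Everything below (model, loss, update rule) is an illustrative assumption.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGNN(nn.Module):
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, n_classes)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)
        return a_hat @ self.lin(x)               # one aggregation step


def reptile_meta_train(tasks, in_dim=4, n_classes=2, inner_steps=5,
                       inner_lr=0.1, meta_lr=0.5, epochs=20):
    meta_model = TinyGNN(in_dim, n_classes)
    for _ in range(epochs):
        for x, adj, y in tasks:                  # each task = one graph
            model = copy.deepcopy(meta_model)
            opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
            for _ in range(inner_steps):         # adapt to this graph
                opt.zero_grad()
                F.cross_entropy(model(x, adj), y).backward()
                opt.step()
            with torch.no_grad():                # move meta-params toward the
                for p_meta, p in zip(meta_model.parameters(),
                                     model.parameters()):
                    p_meta += meta_lr * (p - p_meta)   # adapted solution
    return meta_model


if __name__ == "__main__":
    torch.manual_seed(0)
    tasks = []
    for _ in range(3):
        adj = (torch.rand(6, 6) > 0.5).float()
        adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
        tasks.append((torch.randn(6, 4), adj, torch.randint(0, 2, (6,))))
    meta = reptile_meta_train(tasks)
    print(meta(tasks[0][0], tasks[0][1]).shape)  # torch.Size([6, 2])
```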
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks on geometric data.
In this paper, we propose a unified framework that most existing GNN explainers satisfy.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
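GraphSVX is described above as a post hoc, local, model-agnostic explainer built on Shapley values. The sketch below shows the generic Monte Carlo Shapley estimator such explainers approximate, applied to an arbitrary black-box scoring function; the feature-masking scheme and all names are assumptions, not GraphSVX's actual decomposition.

```python
# Generic Monte Carlo estimate of Shapley values for a black-box score f(x):
# the kind of attribution GraphSVX-style explainers approximate for GNNs.
# Here "players" are input features masked to a baseline; this masking scheme
# and all names are illustrative assumptions, not GraphSVX's actual method.
import numpy as np


def shapley_values(f, x, baseline, n_samples=200, rng=None):
    """Estimate each feature's Shapley value for the scalar score f(x)."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        current = baseline.copy()
        prev_score = f(current)
        for i in order:                       # add features one at a time
            current[i] = x[i]
            new_score = f(current)
            phi[i] += new_score - prev_score  # marginal contribution of i
            prev_score = new_score
    return phi / n_samples


if __name__ == "__main__":
    weights = np.array([2.0, -1.0, 0.0, 0.5])
    f = lambda v: float(weights @ v)          # toy "model" to be explained
    x = np.array([1.0, 1.0, 1.0, 1.0])
    baseline = np.zeros(4)
    print(shapley_values(f, x, baseline))     # approx. [2.0, -1.0, 0.0, 0.5]
```

For a linear toy model the estimates recover the weights exactly; for a GNN, the same idea is applied with a graph-aware masking of nodes or features, which is where explainers differ.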
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
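The unified denoising view summarized above casts GNN aggregation as approximately minimizing an objective of the form $\min_F \|F - X\|_F^2 + c\,\mathrm{tr}(F^\top L F)$. The sketch below takes a single gradient step on that objective from $F = X$ to show how a Laplacian-smoothing (aggregation-like) update emerges; the constant $c$ and the step size are illustrative choices.

```python
# One gradient-descent step on the graph signal denoising objective
#     min_F  ||F - X||_F^2 + c * tr(F^T L F),
# starting from F = X, which yields a Laplacian-smoothing update similar to
# GNN feature aggregation. The step size and constant c are illustrative.
import numpy as np


def denoising_step(x, adj, c=1.0, step=0.5):
    """Take one gradient step on the denoising objective, starting at F = X."""
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj
    grad = 2 * c * laplacian @ x            # gradient of the trace term at F = X
    return x - step * grad                  # smoothed (aggregated) features


if __name__ == "__main__":
    adj = np.array([[0, 1, 1],
                    [1, 0, 0],
                    [1, 0, 0]], dtype=float)
    x = np.array([[1.0], [0.0], [0.0]])     # a "noisy" one-dimensional signal
    print(denoising_step(x, adj, c=0.25, step=0.5))
```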
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
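XGNN is summarized above as explaining a trained GNN at the model level by finding graph patterns that drive a target prediction. The sketch below substitutes a much simpler greedy edge-addition search against a black-box scorer to convey the idea; the toy scorer and the greedy strategy are assumptions, not XGNN's reinforcement-learning graph generator.

```python
# A very simplified stand-in for model-level explanation by graph generation:
# greedily add edges to maximize a black-box scorer's confidence for a target
# class. XGNN itself trains a graph generator with reinforcement learning; the
# greedy search and the toy scorer below are illustrative assumptions only.
import itertools
import numpy as np


def greedy_explanation_graph(score_fn, n_nodes=5, max_edges=6):
    """Greedily add the edge that most increases score_fn(adjacency)."""
    adj = np.zeros((n_nodes, n_nodes))
    best = score_fn(adj)
    for _ in range(max_edges):
        candidates = [(i, j) for i, j in itertools.combinations(range(n_nodes), 2)
                      if adj[i, j] == 0]
        gains = []
        for i, j in candidates:
            adj[i, j] = adj[j, i] = 1
            gains.append((score_fn(adj) - best, i, j))
            adj[i, j] = adj[j, i] = 0
        gain, i, j = max(gains)
        if gain <= 0:
            break                                  # no edge improves the score
        adj[i, j] = adj[j, i] = 1
        best += gain
    return adj, best


if __name__ == "__main__":
    # Toy "trained GNN": prefers graphs whose mean degree is close to 2
    score_fn = lambda a: -abs(a.sum(axis=1).mean() - 2.0)
    adj, score = greedy_explanation_graph(score_fn)
    print(adj, score)
```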