Meta-Inductive Node Classification across Graphs
- URL: http://arxiv.org/abs/2105.06725v1
- Date: Fri, 14 May 2021 09:16:28 GMT
- Title: Meta-Inductive Node Classification across Graphs
- Authors: Zhihao Wen, Yuan Fang, Zemin Liu
- Abstract summary: We propose a novel meta-inductive framework called MI-GNN to customize the inductive model to each graph.
MI-GNN does not directly learn an inductive model; it learns the general knowledge of how to train a model for semi-supervised node classification on new graphs.
Extensive experiments on five real-world graph collections demonstrate the effectiveness of our proposed model.
- Score: 6.0471030308057285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised node classification on graphs is an important research
problem, with many real-world applications in information retrieval such as
content classification on a social network and query intent classification on
an e-commerce query graph. While traditional approaches are largely
transductive, recent graph neural networks (GNNs) integrate node features with
network structures, thus enabling inductive node classification models that can
be applied to new nodes or even new graphs in the same feature space. However,
inter-graph differences still exist across graphs within the same domain. Thus,
training just one global model (e.g., a state-of-the-art GNN) to handle all new
graphs, whilst ignoring the inter-graph differences, can lead to suboptimal
performance.
In this paper, we study the problem of inductive node classification across
graphs. Unlike existing one-model-fits-all approaches, we propose a novel
meta-inductive framework called MI-GNN to customize the inductive model to each
graph under a meta-learning paradigm. That is, MI-GNN does not directly learn
an inductive model; it learns the general knowledge of how to train a model for
semi-supervised node classification on new graphs. To cope with the differences
across graphs, MI-GNN employs a dual adaptation mechanism at both the graph and
task levels. More specifically, we learn a graph prior to adapt to the
graph-level differences, and a task prior to adapt to the task-level
differences conditioned on a graph. Extensive experiments on five real-world
graph collections demonstrate the effectiveness of our proposed model.
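The dual-adaptation idea can be illustrated with a generic first-order, MAML-style sketch. This is not MI-GNN's actual architecture: a linear model stands in for the GNN, and all names, data, and hyperparameters below are illustrative. A shared prior is adapted to each graph in an inner loop, and the prior itself is meta-trained so that a few adaptation steps suffice on a new graph:

```python
import numpy as np

def adapt(params, X, y, lr, steps):
    """Inner loop: a few gradient steps on squared loss from the shared prior."""
    w = params.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE for a linear model
        w -= lr * grad
    return w

# Toy meta-training: the prior is adapted separately to each "graph".
rng = np.random.default_rng(0)
graphs = []
for shift in (1.0, -1.0):  # two graphs with different label functions
    X = rng.normal(size=(32, 4))
    graphs.append((X, X @ (np.ones(4) * shift)))

prior = np.zeros(4)  # the learned prior (graph-level knowledge)
meta_lr = 0.05
for _ in range(200):
    meta_grad = np.zeros(4)
    for X, y in graphs:
        w_g = adapt(prior, X, y, lr=0.1, steps=3)      # graph-level adaptation
        meta_grad += 2 * X.T @ (X @ w_g - y) / len(y)  # first-order meta-gradient
    prior -= meta_lr * meta_grad / len(graphs)

# Meta-test: the prior is adapted to an unseen graph with a few steps.
X_new = rng.normal(size=(32, 4))
y_new = X_new @ np.ones(4)
w_new = adapt(prior, X_new, y_new, lr=0.1, steps=5)
err = np.mean((X_new @ w_new - y_new) ** 2)
```

The point of the sketch is the two levels of learning: the inner loop customizes a model per graph, while the outer loop learns only "how to train", matching the paper's framing of learning general knowledge rather than a single inductive model.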
Related papers
- GraphAny: A Foundation Model for Node Classification on Any Graph [18.90340185554506]
Foundation models that can perform inference on any new task without requiring specific training have revolutionized machine learning in vision and language applications.
In this work, we tackle two challenges with a new foundational architecture for inductive node classification named GraphAny.
Specifically, we learn attention scores for each node to fuse the predictions of multiple LinearGNNs to ensure generalization to new graphs.
Empirically, GraphAny trained on the Wisconsin dataset with only 120 labeled nodes can effectively generalize to 30 new graphs with an average accuracy of 67.26% in an inductive manner.
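The fusion step described above can be sketched as a per-node attention-weighted sum over several predictors. This is a minimal sketch, not GraphAny's implementation: the "LinearGNN" outputs and the attention scores below are random placeholders rather than learned quantities, and all shapes are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
n_nodes, n_classes, n_experts = 5, 3, 4

# Per-node class logits from several predictors (stand-ins for LinearGNNs).
expert_logits = rng.normal(size=(n_experts, n_nodes, n_classes))
# Per-node attention scores over the predictors (placeholders for learned scores).
attn = softmax(rng.normal(size=(n_nodes, n_experts)))

# Fuse: for each node, a convex combination of the experts' predictions.
fused = np.einsum("ne,enc->nc", attn, expert_logits)
pred = fused.argmax(axis=1)
```

Because the attention weights sum to one per node, the fused prediction stays in the span of the individual predictors, which is the mechanism the paper relies on for generalizing to new graphs.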
arXiv Detail & Related papers (2024-05-30T19:43:29Z)
- Semi-Supervised Hierarchical Graph Classification [54.25165160435073]
We study the node classification problem in the hierarchical graph where a 'node' is a graph instance.
We propose the Hierarchical Graph Mutual Information (HGMI) and present a way to compute HGMI with theoretical guarantee.
We demonstrate the effectiveness of this hierarchical graph modeling and the proposed SEAL-CI method on text and social network data.
arXiv Detail & Related papers (2022-06-11T04:05:29Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
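The two construction methods named above can be sketched directly from point coordinates. This is a generic sketch (the datasets and parameters are illustrative, not the paper's):

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric K-nearest-neighbor adjacency from point coordinates."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                # exclude self-loops
    idx = np.argsort(d, axis=1)[:, :k]         # k closest neighbors per node
    A = np.zeros((len(X), len(X)), dtype=int)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1
    return np.maximum(A, A.T)                  # symmetrize the directed KNN edges

def fc_graph(n):
    """Fully-connected adjacency with no self-loops."""
    return np.ones((n, n), dtype=int) - np.eye(n, dtype=int)

rng = np.random.default_rng(2)
pts = rng.normal(size=(10, 3))
A_knn = knn_graph(pts, k=3)
A_fc = fc_graph(10)
```

The two constructions give very different interaction ranges: KNN edges only connect local neighborhoods, while the FC graph lets every node interact with every other in a single message-passing step, which is the contrast the paper studies.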
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Neighborhood Random Walk Graph Sampling for Regularized Bayesian Graph Convolutional Neural Networks [0.6236890292833384]
In this paper, we propose a novel algorithm called Bayesian Graph Convolutional Network using Neighborhood Random Walk Sampling (BGCN-NRWS)
BGCN-NRWS uses a Markov Chain Monte Carlo (MCMC)-based graph sampling algorithm that exploits graph structure, reduces overfitting through a variational inference layer, and yields consistently competitive results compared to the state of the art in semi-supervised node classification.
arXiv Detail & Related papers (2021-12-14T20:58:27Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
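The edge-to-node correspondence described above can be sketched in a few lines. This is a plain-Python construction for small graphs (the example graph is illustrative, not from the paper): each original edge becomes a line-graph node, and two line-graph nodes are adjacent iff the original edges share an endpoint.

```python
def line_graph(edges):
    """Build the line graph: nodes are the edges of the original graph;
    two are connected iff the original edges share an endpoint."""
    nodes = list(edges)
    adj = {e: set() for e in nodes}
    for i, (a, b) in enumerate(nodes):
        for c, d in nodes[i + 1:]:
            if {a, b} & {c, d}:  # shared endpoint -> adjacent in the line graph
                adj[(a, b)].add((c, d))
                adj[(c, d)].add((a, b))
    return adj

# A triangle plus a pendant edge: 4 original edges -> 4 line-graph nodes.
G = [(0, 1), (1, 2), (0, 2), (2, 3)]
L = line_graph(G)
```

Under this mapping, predicting whether an edge exists in the original graph amounts to classifying the corresponding node in the line graph, which is the reformulation the paper exploits.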
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Lifelong Graph Learning [6.282881904019272]
We bridge graph learning and lifelong learning by converting a continual graph learning problem to a regular graph learning problem.
We show that feature graph networks (FGN) achieve superior performance in two applications, i.e., lifelong human action recognition with wearable devices and feature matching.
arXiv Detail & Related papers (2020-09-01T18:21:34Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Customized Graph Neural Networks [38.30640892828196]
Graph Neural Networks (GNNs) have greatly advanced the task of graph classification.
We propose a novel customized graph neural network framework, i.e., Customized-GNN.
The proposed framework is very general and can be applied to numerous existing graph neural network models.
arXiv Detail & Related papers (2020-05-22T05:22:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.