Lifelong Graph Learning
- URL: http://arxiv.org/abs/2009.00647v4
- Date: Sat, 26 Mar 2022 18:45:54 GMT
- Title: Lifelong Graph Learning
- Authors: Chen Wang, Yuheng Qiu, Dasong Gao, Sebastian Scherer
- Abstract summary: We bridge graph learning and lifelong learning by converting a continual graph learning problem to a regular graph learning problem.
We show that feature graph networks (FGN) achieve superior performance in two applications, i.e., lifelong human action recognition with wearable devices and feature matching.
- Score: 6.282881904019272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNN) are powerful models for many graph-structured
tasks. Existing models often assume that the complete structure of the graph is
available during training. In practice, however, graph-structured data is
usually formed in a streaming fashion so that learning a graph continuously is
often necessary. In this paper, we bridge GNN and lifelong learning by
converting a continual graph learning problem to a regular graph learning
problem so that GNNs can inherit the lifelong learning techniques developed for
convolutional neural networks (CNN). We propose a new topology, the feature
graph, which takes features as new nodes and turns nodes into independent
graphs. This successfully converts the original problem of node classification
to graph classification. In the experiments, we demonstrate the efficiency and
effectiveness of feature graph networks (FGN) by continuously learning a
sequence of classical graph datasets. We also show that FGN achieves superior
performance in two applications, i.e., lifelong human action recognition with
wearable devices and feature matching. To the best of our knowledge, FGN is the
first method to bridge graph learning and lifelong learning via a novel graph
topology. Source code is available at https://github.com/wang-chen/LGL
Related papers
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Graph-level Neural Networks: Current Progress and Future Directions [61.08696673768116]
Graph-level Neural Networks (GLNNs, deep learning-based graph-level learning methods) have attracted attention due to their strength in modeling high-dimensional data.
We propose a systematic taxonomy covering GLNNs upon deep neural networks, graph neural networks, and graph pooling.
arXiv Detail & Related papers (2022-05-31T06:16:55Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Learning to Evolve on Dynamic Graphs [5.1521870302904125]
Learning to Evolve on Dynamic Graphs (LEDG) is a novel algorithm that jointly learns graph information and time information.
LEDG is model-agnostic and can train any message-passing-based graph neural network (GNN) on dynamic graphs.
arXiv Detail & Related papers (2021-11-13T04:09:30Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon (a minimal sampling sketch appears after this list).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - GraphTheta: A Distributed Graph Neural Network Learning System With
Flexible Training Strategy [5.466414428765544]
We present a new distributed graph learning system GraphTheta.
It supports multiple training strategies and enables efficient and scalable learning on big graphs.
This work represents the largest edge-attributed GNN learning task conducted on a billion-scale network in the literature.
arXiv Detail & Related papers (2021-04-21T14:51:33Z) - Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data, which lives in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z) - Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z) - GraphCrop: Subgraph Cropping for Graph Classification [36.33477716380905]
We develop the GraphCrop (Subgraph Cropping) data augmentation method to simulate the real-world noise of sub-structure omission (see the sketch after this list).
By preserving the valid structure contexts for graph classification, we encourage GNNs to understand the content of graph structures in a global sense.
arXiv Detail & Related papers (2020-09-22T14:05:41Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.