Self-Supervised Continual Graph Learning in Adaptive Riemannian Spaces
- URL: http://arxiv.org/abs/2211.17068v2
- Date: Fri, 2 Jun 2023 13:03:18 GMT
- Title: Self-Supervised Continual Graph Learning in Adaptive Riemannian Spaces
- Authors: Li Sun, Junda Ye, Hao Peng, Feiyang Wang, Philip S. Yu
- Abstract summary: Continual graph learning routinely finds its role in a variety of real-world applications where graph data with different tasks arrive sequentially.
Existing methods work with the zero-curvature Euclidean space, and largely ignore the fact that curvature varies over the coming graph sequence.
To address these challenges, we propose to explore a challenging yet practical problem: self-supervised continual graph learning in adaptive Riemannian spaces.
- Score: 74.03252813800334
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Continual graph learning routinely finds its role in a variety of real-world
applications where graph data with different tasks arrive sequentially.
Despite the success of prior works, it still faces great challenges. On the one
hand, existing methods work with the zero-curvature Euclidean space, and
largely ignore the fact that curvature varies over the coming graph sequence.
On the other hand, continual learners in the literature rely on abundant
labels, but labeling graphs in practice is particularly hard, especially for
graphs that continuously emerge on the fly. To address the aforementioned
challenges, we propose to explore a challenging yet practical problem:
self-supervised continual graph learning in adaptive Riemannian spaces. In this
paper, we propose a novel self-supervised Riemannian Graph Continual Learner
(RieGrace). In RieGrace, we first design an Adaptive Riemannian GCN (AdaRGCN),
a unified GCN coupled with a neural curvature adapter, so that the Riemannian
space is shaped by a learnt curvature adapted to each graph. Then, we present a
Label-free Lorentz Distillation approach, in which we create a teacher-student
AdaRGCN pair for the graph sequence. The student successively performs
intra-distillation from itself and inter-distillation from the teacher so as to
consolidate knowledge without catastrophic forgetting. In particular, we
propose a theoretically grounded Generalized Lorentz Projection for the
contrastive distillation in Riemannian space. Extensive experiments on the
benchmark datasets show the superiority of RieGrace; additionally, we
investigate how curvature changes over the graph sequence.
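The abstract invokes the Lorentz model and a "Generalized Lorentz Projection" without stating the geometry. As background, the following is the standard Lorentz model of curvature κ < 0 together with the usual lift of a Euclidean feature onto it; the generalized projection between spaces of different curvature is not given in the abstract, so treat this as an assumption about the underlying construction, not the authors' exact operator.

```latex
% Lorentz model of curvature \kappa < 0, with the Lorentz inner product.
\[
  \mathbb{L}^{d}_{\kappa}
    = \bigl\{ \mathbf{z} \in \mathbb{R}^{d+1} :
        \langle \mathbf{z}, \mathbf{z} \rangle_{\mathcal{L}} = 1/\kappa,\ z_0 > 0 \bigr\},
  \qquad
  \langle \mathbf{x}, \mathbf{y} \rangle_{\mathcal{L}}
    = -x_0 y_0 + \sum_{i=1}^{d} x_i y_i .
\]
% A Euclidean feature h in R^d is lifted onto the hyperboloid by solving
% for the time coordinate; the constraint <z, z>_L = 1/\kappa then holds.
\[
  \mathbf{z} = \Bigl( \sqrt{\lVert \mathbf{h} \rVert_2^{2} - 1/\kappa},\ \mathbf{h} \Bigr).
\]
```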
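To make the pipeline concrete, below is a minimal PyTorch sketch of the two components the abstract describes: a GCN layer whose output is lifted onto a hyperboloid with a curvature predicted per graph by a small neural adapter, and a label-free teacher-student contrastive distillation term. All names (CurvatureAdapter, AdaRGCNLayer, lorentz_lift, contrastive_distill) are illustrative assumptions, and the cosine-normalized loss is a Euclidean simplification of the paper's Riemannian contrastive distillation, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def lorentz_lift(h: torch.Tensor, kappa: torch.Tensor) -> torch.Tensor:
    """Lift Euclidean features onto the Lorentz model of curvature kappa < 0
    by solving <z, z>_L = 1/kappa for the time coordinate (formula above).
    Illustrative stand-in for the paper's Generalized Lorentz Projection."""
    time = torch.sqrt(h.pow(2).sum(-1, keepdim=True) - 1.0 / kappa)
    return torch.cat([time, h], dim=-1)  # (..., d) -> (..., d + 1)


class CurvatureAdapter(nn.Module):
    """Predicts one negative curvature per graph from pooled node features,
    so the hosting Riemannian space adapts to each incoming graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softplus keeps the magnitude positive; the minus sign makes it hyperbolic
        return -F.softplus(self.mlp(x.mean(dim=0)))


class AdaRGCNLayer(nn.Module):
    """One layer in the spirit of AdaRGCN: Euclidean aggregation in the
    tangent space, then a lift onto the curvature-adapted hyperboloid."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm, kappa):
        h = adj_norm @ self.lin(x)     # \hat{A} X W message passing
        return lorentz_lift(h, kappa)  # embeddings live on the hyperboloid


def contrastive_distill(student_z, teacher_z, tau: float = 0.2):
    """InfoNCE-style distillation: each node's student embedding is pulled
    toward its own (detached) teacher embedding, pushed from other nodes."""
    s = F.normalize(student_z, dim=-1)
    t = F.normalize(teacher_z.detach(), dim=-1)
    logits = s @ t.T / tau
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)


# Toy usage: 5 nodes, 8 features, self-loop-only normalized adjacency.
x, adj = torch.randn(5, 8), torch.eye(5)
adapter, layer = CurvatureAdapter(8), AdaRGCNLayer(8, 8)
kappa = adapter(x)                 # curvature adapted to this graph
student = layer(x, adj, kappa)     # (5, 9) Lorentz embeddings
teacher = student.clone()          # placeholder for a frozen teacher AdaRGCN
loss = contrastive_distill(student, teacher)
```

In RieGrace proper, the student performs both intra-distillation from itself and inter-distillation from the teacher trained on earlier graphs; the single loss term above sketches only one such pairing.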
Related papers
- Do We Really Need Graph Convolution During Training? Light Post-Training Graph-ODE for Efficient Recommendation [34.93725892725111]
The use of graph convolution networks (GCNs) in training recommender systems (RecSys) has raised persistent efficiency concerns.
This paper presents a critical examination of the necessity of graph convolutions during the training phase.
We introduce an innovative alternative: the Light Post-Training Graph Ordinary-Differential-Equation (LightGODE).
arXiv Detail & Related papers (2024-07-26T17:59:32Z)
- DeepRicci: Self-supervised Graph Structure-Feature Co-Refinement for Alleviating Over-squashing [72.70197960100677]
Graph Structure Learning (GSL) plays an important role in boosting Graph Neural Networks (GNNs) with a refined graph.
GSL solutions usually focus on structure refinement with task-specific supervision (i.e., node classification) or overlook the inherent weakness of GNNs themselves.
We propose to study self-supervised graph structure-feature co-refinement for effectively alleviating the issue of over-squashing in typical GNNs.
arXiv Detail & Related papers (2024-01-23T14:06:08Z)
- Motif-aware Riemannian Graph Neural Network with Generative-Contrastive Learning [23.041843981988503]
We present a novel method for capturing motif regularity in a diverse-curvature manifold without labels.
We also introduce a motif-aware generative-contrastive learning scheme to capture motif regularity in the constructed manifold.
Empirical results show the superiority of MotifRGC over D-GCN.
arXiv Detail & Related papers (2024-01-02T14:58:26Z)
- Contrastive Graph Clustering in Curvature Spaces [74.03252813800334]
We present a novel end-to-end contrastive graph clustering model named CONGREGATE.
To support geometric clustering, we construct a theoretically grounded Heterogeneous Curvature Space.
We then train the graph clusters by an augmentation-free reweighted contrastive approach.
arXiv Detail & Related papers (2023-05-05T14:04:52Z)
- A Self-supervised Riemannian GNN with Time Varying Curvature for Temporal Graph Learning [79.20249985327007]
We present a novel self-supervised Riemannian graph neural network (SelfRGNN).
Specifically, we design a curvature-varying GNN with a theoretically grounded time encoding, and formulate a functional curvature over time to model how the graph shifts among positive, zero, and negative curvature spaces.
Extensive experiments show the superiority of SelfRGNN; moreover, a case study shows the time-varying curvature of real-world temporal graphs.
arXiv Detail & Related papers (2022-08-30T08:43:06Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- A Self-supervised Mixed-curvature Graph Neural Network [76.3790248465522]
We present a novel Self-supervised Mixed-curvature Graph Neural Network (SelfMGNN).
We show that SelfMGNN captures the complicated graph structures in reality and outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-10T08:56:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.