Towards Robust Graph Incremental Learning on Evolving Graphs
- URL: http://arxiv.org/abs/2402.12987v1
- Date: Tue, 20 Feb 2024 13:17:37 GMT
- Title: Towards Robust Graph Incremental Learning on Evolving Graphs
- Authors: Junwei Su, Difan Zou, Zijun Zhang, Chuan Wu
- Abstract summary: We focus on the inductive NGIL problem, which accounts for the evolution of graph structure (structural shift) induced by emerging tasks.
We propose a novel regularization-based technique called Structural-Shift-Risk-Mitigation (SSRM) to mitigate the impact of the structural shift on catastrophic forgetting.
- Score: 23.595295175930335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incremental learning is a machine learning approach that involves training a
model on a sequence of tasks, rather than all tasks at once. This ability to
learn incrementally from a stream of tasks is crucial for many real-world
applications. However, incremental learning is a challenging problem on
graph-structured data, as many graph-related problems involve prediction tasks
for each individual node, known as Node-wise Graph Incremental Learning (NGIL).
This introduces non-independent and non-identically distributed characteristics
in the sample data generation process, making it difficult to maintain the
performance of the model as new tasks are added. In this paper, we focus on the
inductive NGIL problem, which accounts for the evolution of graph structure
(structural shift) induced by emerging tasks. We provide a formal formulation
and analysis of the problem, and propose a novel regularization-based technique
called Structural-Shift-Risk-Mitigation (SSRM) to mitigate the impact of
structural shift on catastrophic forgetting in the inductive NGIL problem. We
show that the structural shift can lead to a shift in the input distribution
for the existing tasks, and further lead to an increased risk of catastrophic
forgetting. Through comprehensive empirical studies with several benchmark
datasets, we demonstrate that our proposed method, SSRM, is flexible and easy
to integrate, and that it improves the performance of state-of-the-art GNN
incremental learning frameworks in the inductive setting.
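The abstract describes SSRM only at a high level: a regularization term that limits how much the structural shift moves the input distribution of earlier tasks. The paper's exact regularizer is not reproduced here, so the following is a minimal PyTorch-style sketch of that idea, assuming a hypothetical `encoder(graph)` that returns a node-embedding matrix and using an RBF-kernel MMD as a stand-in discrepancy measure; all names are illustrative rather than taken from the paper.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """RBF-kernel MMD^2 between two embedding batches.

    A stand-in discrepancy measure; the paper may use a different one.
    """
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def ssrm_style_loss(encoder, classifier, task_loss_fn,
                    old_graph, new_graph, old_nodes, new_nodes,
                    labels, lam=0.1):
    """Hedged sketch of a structural-shift regularizer for inductive NGIL.

    Assumes a hypothetical `encoder(graph)` returning a [num_nodes, d]
    embedding matrix indexed by node id. The regularizer pulls the
    representations of old-task nodes computed on the evolved graph
    toward those computed on the graph as it stood when the old task
    was learned, limiting the induced input-distribution shift.
    """
    z_new = encoder(new_graph)

    # Supervised loss on the newly arrived task.
    loss_task = task_loss_fn(classifier(z_new[new_nodes]), labels)

    # Frozen reference embeddings of old-task nodes (pre-shift).
    with torch.no_grad():
        z_before = encoder(old_graph)[old_nodes]

    # Post-shift embeddings of the same nodes on the evolved graph.
    z_after = z_new[old_nodes]

    return loss_task + lam * rbf_mmd(z_before, z_after)
```

In a real NGIL pipeline the pre-shift reference `z_before` could be cached when the earlier task is learned, so the old graph snapshot need not be stored.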
Related papers
- HGMP:Heterogeneous Graph Multi-Task Prompt Learning [18.703129208282913]
We propose a novel multi-task prompt framework for the heterogeneous graph domain, named HGMP. First, to bridge the gap between the pre-trained model and downstream tasks, we reformulate all downstream tasks into a unified graph-level task format. Second, we design a graph-level contrastive pre-training strategy to better leverage heterogeneous information and enhance performance in multi-task scenarios.
arXiv Detail & Related papers (2025-07-10T04:01:47Z)
- Learning Causal Graphs at Scale: A Foundation Model Approach [28.966180222166766]
We propose Attention-DAG (ADAG), a novel attention-mechanism-based architecture for learning multiple linear Structural Equation Models (SEMs). ADAG learns the mapping from observed data to both graph structure and parameters via a nonlinear attention-based kernel. We evaluate our proposed approach on benchmark synthetic datasets and find that ADAG achieves substantial improvements in both DAG learning accuracy and zero-shot inference efficiency.
arXiv Detail & Related papers (2025-06-23T04:41:02Z)
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
- Core Knowledge Learning Framework for Graph Adaptation and Scalability Learning [7.239264041183283]
Graph classification faces several hurdles, including adapting to diverse prediction tasks, training across multiple target domains, and handling small-sample prediction scenarios.
By incorporating insights from various types of tasks, our method aims to enhance adaptability, scalability, and generalizability in graph classification.
Experimental results demonstrate significant performance enhancements achieved by our method compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-07-02T02:16:43Z)
- Introducing Diminutive Causal Structure into Graph Representation Learning [19.132025125620274]
We introduce a novel method that enables Graph Neural Networks (GNNs) to glean insights from specialized diminutive causal structures.
Our method specifically extracts causal knowledge from the model representation of these diminutive causal structures.
arXiv Detail & Related papers (2024-06-13T00:18:20Z)
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction that is orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning [53.81365215811222]
We provide a review and summary of the latest approaches, strategies, and insights that address distribution shifts within the context of graph learning.
We categorize existing graph learning methods into several essential scenarios, including graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning.
We discuss the potential applications and future directions for graph learning under distribution shifts with a systematic analysis of the current state in this field.
arXiv Detail & Related papers (2024-02-26T07:52:40Z)
- HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks [24.435068514392487]
HetGPT is a post-training prompting framework for graph neural networks.
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
arXiv Detail & Related papers (2023-10-23T19:35:57Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data; an illustrative influence-function sketch appears after this list.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM); a sketch of its risk-variance objective appears after this list.
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
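As noted in the GIF entry above, the influence-function estimate at its core is compact: deleting a small $\epsilon$-mass of training data changes the optimal parameters by approximately $\Delta\theta \approx H^{-1} g$, where $H$ is the Hessian of the full training loss and $g$ is the gradient contributed by the deleted samples. The sketch below is the generic estimate rather than GIF's graph-aware correction; it uses conjugate gradients so $H$ is accessed only through Hessian-vector products, and all names are illustrative.

```python
import torch

def flat_grad(loss, params, create_graph=False):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    grads = torch.autograd.grad(loss, params,
                                create_graph=create_graph, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_update(total_loss, removed_loss, params, damping=0.01, steps=50):
    """Classic influence-function estimate delta ~ H^{-1} g via CG.

    `total_loss`: loss over the full training set at the current params.
    `removed_loss`: loss contributed by the samples to be deleted.
    Generic sketch; GIF's graph-specific corrections are not shown.
    """
    params = list(params)
    g_total = flat_grad(total_loss, params, create_graph=True)  # kept for HVPs
    b = flat_grad(removed_loss, params).detach()

    def hvp(v):
        """Damped Hessian-vector product (H + damping I) v via double backprop."""
        hv = torch.autograd.grad(g_total @ v, params, retain_graph=True)
        return torch.cat([h.reshape(-1) for h in hv]).detach() + damping * v

    # Plain conjugate-gradient solve of (H + damping I) x = b.
    x = torch.zeros_like(b)
    r = b.clone()
    p = b.clone()
    rs = r @ r
    for _ in range(steps):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < 1e-10:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x  # flat parameter change for approximate unlearning (up to 1/n scaling)
```

For GNN unlearning, `removed_loss` would be evaluated on the deleted nodes or edges and, in a graph-aware variant, on their affected neighborhoods as well.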
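Likewise, the objective named in the EERM entry reduces to a single expression once per-environment risks are available: minimize the mean of the risks plus a penalty on their variance, while the context explorers are trained adversarially to maximize that variance. A minimal sketch, with the adversarial explorer training omitted:

```python
import torch

def eerm_style_objective(env_losses, beta=1.0):
    """Mean + variance of risks across K virtual environments.

    `env_losses`: list of scalar losses, one per environment produced by
    a context explorer. The GNN minimizes this quantity, while the
    explorers are trained adversarially (not shown) to maximize the
    variance term.
    """
    risks = torch.stack(env_losses)
    return risks.mean() + beta * risks.var(unbiased=False)
```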
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.