GraphSAIL: Graph Structure Aware Incremental Learning for Recommender
Systems
- URL: http://arxiv.org/abs/2008.13517v2
- Date: Wed, 2 Sep 2020 02:56:15 GMT
- Title: GraphSAIL: Graph Structure Aware Incremental Learning for Recommender
Systems
- Authors: Yishi Xu, Yingxue Zhang, Wei Guo, Huifeng Guo, Ruiming Tang, Mark
Coates
- Abstract summary: We develop a Graph Structure Aware Incremental Learning framework, GraphSAIL, to address the commonly experienced catastrophic forgetting problem.
Our approach preserves a user's long-term preference (or an item's long-term property) during incremental model updating.
- Score: 47.51104205511256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the convenience of collecting information through online services,
recommender systems now consume large scale data and play a more important role
in improving user experience. With the recent emergence of Graph Neural
Networks (GNNs), GNN-based recommender models have shown the advantage of
modeling the recommender system as a user-item bipartite graph to learn
representations of users and items. However, such models are expensive to train
and difficult to update frequently enough to provide the most up-to-date
recommendations. In this work, we propose to update GNN-based recommender
models incrementally so that the computation time can be greatly reduced and
models can be updated more frequently. We develop a Graph Structure Aware
Incremental Learning framework, GraphSAIL, to address the commonly experienced
catastrophic forgetting problem that occurs when training a model in an
incremental fashion. Our approach preserves a user's long-term preference (or
an item's long-term property) during incremental model updating. GraphSAIL
implements a graph structure preservation strategy which explicitly preserves
each node's local structure, global structure, and self-information,
respectively. We argue that our incremental training framework is the first
attempt tailored for GNN-based recommender systems and demonstrate its
improvement over other incremental learning techniques on two public
datasets. We further verify the effectiveness of our framework on a large-scale
industrial dataset.
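The three-part preservation strategy described in the abstract (self-information, local structure, global structure) can be sketched as a distillation loss between the embeddings before and after an incremental update. The sketch below is an illustrative assumption, not the paper's exact formulation: the function name, the MSE/KL loss forms, and the use of centroids as global anchors are all choices made for the example.

```python
import numpy as np

def graphsail_distill_loss(old_emb, new_emb, neighbors, centroids,
                           lambda_self=1.0, lambda_local=1.0, lambda_global=1.0):
    """Hedged sketch of a GraphSAIL-style distillation loss.

    old_emb, new_emb : (N, d) node embeddings before / after the incremental update
    neighbors        : list of neighbor-index arrays, one per node (local structure)
    centroids        : (K, d) anchor points standing in for global structure
    """
    # Self-information: keep each node's embedding close to its previous value.
    l_self = np.mean((new_emb - old_emb) ** 2)

    # Local structure: preserve each node's similarity to its neighborhood mean.
    l_local = 0.0
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) == 0:
            continue
        old_sim = old_emb[i] @ old_emb[nbrs].mean(axis=0)
        new_sim = new_emb[i] @ new_emb[nbrs].mean(axis=0)
        l_local += (new_sim - old_sim) ** 2
    l_local /= len(neighbors)

    # Global structure: preserve each node's softmax similarity profile
    # over the global anchor points (KL divergence, old || new).
    def profile(emb):
        logits = emb @ centroids.T                   # (N, K)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        return p / p.sum(axis=1, keepdims=True)

    p_old, p_new = profile(old_emb), profile(new_emb)
    l_global = np.mean(np.sum(
        p_old * (np.log(p_old + 1e-12) - np.log(p_new + 1e-12)), axis=1))

    return lambda_self * l_self + lambda_local * l_local + lambda_global * l_global
```

In an incremental-training loop this term would be added to the usual recommendation loss on the new data, so the update fits recent interactions while the distillation term penalizes drift away from the long-term preferences encoded in the old embeddings.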
Related papers
- GraphPro: Graph Pre-training and Prompt Learning for Recommendation [18.962982290136935]
GraphPro is a framework that incorporates parameter-efficient and dynamic graph pre-training with prompt learning.
Our framework addresses the challenge of evolving user preferences by seamlessly integrating a temporal prompt mechanism and a graph-structural prompt learning mechanism.
arXiv Detail & Related papers (2023-11-28T12:00:06Z) - Instant Representation Learning for Recommendation over Large Dynamic Graphs [29.41179019520622]
We propose SUPA, a novel graph neural network for dynamic multiplex heterogeneous graphs.
For each new edge, SUPA samples an influenced subgraph, updates the representations of the two interactive nodes, and propagates the interaction information to the sampled subgraph.
To train SUPA incrementally online, we propose InsLearn, an efficient workflow for single-pass training of large dynamic graphs.
arXiv Detail & Related papers (2023-05-22T15:36:10Z) - Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose DFAD-GNN, the first end-to-end framework for data-free adversarial knowledge distillation on graph-structured data.
Specifically, DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model serve as two discriminators, while a generator derives training graphs used to distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
arXiv Detail & Related papers (2022-05-08T08:19:40Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
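Maximizing agreement between the anchor graph and the learned graph can be sketched as a standard two-view contrastive loss over node embeddings. The NT-Xent-style form, the temperature, and the function names below are illustrative assumptions rather than the paper's exact objective:

```python
import numpy as np

def contrastive_agreement_loss(z_anchor, z_learned, tau=0.5):
    """Toy two-view contrastive loss: node i in the anchor-graph view is the
    positive of node i in the learned-graph view; all other nodes are negatives."""
    def normalize(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)

    a, b = normalize(z_anchor), normalize(z_learned)
    sim = a @ b.T / tau                          # (N, N) cross-view cosine similarities
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))               # agreement lives on the diagonal
```

Minimizing this loss pulls corresponding nodes of the two views together while pushing non-corresponding nodes apart, which is one common way to optimize a learned topology against an anchor without external labels.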
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Graph Few-shot Class-incremental Learning [25.94168397283495]
The ability to incrementally learn new classes is vital to all real-world artificial intelligence systems.
In this paper, we investigate the challenging yet practical problem, Graph Few-shot Class-incremental (Graph FCL) problem.
We put forward a Graph Pseudo Incremental Learning paradigm by sampling tasks recurrently from the base classes.
We present a task-sensitive regularizer calculated from task-level attention and node class prototypes to mitigate overfitting onto either novel or base classes.
arXiv Detail & Related papers (2021-12-23T19:46:07Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be retrained regularly to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
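FLAG's core idea, iteratively perturbing node features along the loss gradient during training, can be sketched in a few lines. The sign-ascent update, the step size, and the `loss_grad` callback below are illustrative assumptions, not FLAG's exact recipe:

```python
import numpy as np

def flag_augment(x, loss_grad, step_size=0.05, n_steps=3):
    """Sketch of FLAG-style augmentation: craft an adversarial perturbation on
    node features x by repeated gradient ascent on the training loss.

    loss_grad(x) must return dLoss/dx with the same shape as x.
    """
    # Start from a small random perturbation.
    delta = np.random.uniform(-step_size, step_size, size=x.shape)
    for _ in range(n_steps):
        g = loss_grad(x + delta)                  # gradient at perturbed features
        delta = delta + step_size * np.sign(g)    # ascend the loss
    return x + delta
```

During training, the model would take its parameter step on the perturbed features `flag_augment(x, ...)` instead of `x`, reusing the same backward passes ("free" adversarial training) so the augmentation adds little overhead.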
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.