Causal Incremental Graph Convolution for Recommender System Retraining
- URL: http://arxiv.org/abs/2108.06889v1
- Date: Mon, 16 Aug 2021 04:20:09 GMT
- Title: Causal Incremental Graph Convolution for Recommender System Retraining
- Authors: Sihao Ding, Fuli Feng, Xiangnan He, Yong Liao, Jun Shi, and Yongdong
Zhang
- Abstract summary: Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
- Score: 89.25922726558875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world recommender systems need to be regularly retrained to
keep up with new data. In this work, we consider how to efficiently retrain graph
convolution network (GCN) based recommender models, which are state-of-the-art
techniques for collaborative recommendation. To pursue high efficiency, we set
the target of using only new data for model updating, while not sacrificing
recommendation accuracy compared with full model retraining. This is
non-trivial to achieve, since the interaction data participates in both the
graph structure for model construction and the loss function for model
learning, whereas the old graph structure cannot be used in model
updating. Towards this goal, we propose a \textit{Causal Incremental Graph
Convolution} approach, which consists of two new operators named
\textit{Incremental Graph Convolution} (IGC) and \textit{Colliding Effect
Distillation} (CED) to estimate the output of full graph convolution. In
particular, we devise simple and effective modules for IGC to ingeniously
combine the old representations and the incremental graph and effectively fuse
the long-term and short-term preference signals. CED aims to avoid the
out-of-date issue for inactive nodes that are absent from the incremental graph
by connecting the new data with inactive nodes through causal inference. In
particular, CED estimates the causal effect of new data on the representation
of inactive nodes through the control of their collider. Extensive experiments
on three real-world datasets demonstrate both accuracy gains and significant
speed-ups over existing retraining mechanisms.
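The core idea of IGC described above can be illustrated with a minimal sketch: propagate only over the incremental (new-data) graph and fuse the result with the stored old representations, rather than re-running convolution on the full graph. The fusion weight `alpha` and the single-step propagation are illustrative assumptions, not the paper's exact modules.

```python
import numpy as np

def incremental_graph_convolution(old_emb, inc_adj, alpha=0.5):
    """IGC-style sketch: one graph-convolution step over the incremental
    adjacency only, fused with stored old embeddings. `alpha` balances
    long-term (old) and short-term (new) preference signals; it stands in
    for the paper's fusion module (an assumption)."""
    n = inc_adj.shape[0]
    adj = inc_adj + np.eye(n)  # self-loops keep each node's own signal
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    # Symmetric normalization: D^{-1/2} A D^{-1/2}
    norm_adj = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Propagate over new data only, then mix with the old state.
    return alpha * old_emb + (1.0 - alpha) * norm_adj @ old_emb

rng = np.random.default_rng(0)
old = rng.random((4, 8))        # stored representations of 4 nodes
inc = np.zeros((4, 4))          # incremental graph: one new interaction (0, 1)
inc[0, 1] = inc[1, 0] = 1.0
updated = incremental_graph_convolution(old, inc)
```

Note that nodes with no new interactions (here, nodes 2 and 3) keep their old representations unchanged under this update, which is exactly the "out-of-date inactive node" issue that CED addresses.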
Related papers
- Graph Unlearning with Efficient Partial Retraining [28.433619085748447]
Graph Neural Networks (GNNs) have achieved remarkable success in various real-world applications.
GNNs may be trained on undesirable graph data, which can degrade their performance and reliability.
We propose GraphRevoker, a novel graph unlearning framework that better maintains the model utility of unlearned GNNs.
arXiv Detail & Related papers (2024-03-12T06:22:10Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Graph Masked Autoencoder for Sequential Recommendation [10.319298705782058]
We propose a Graph Masked AutoEncoder-enhanced sequential Recommender system (MAERec) that adaptively and dynamically distills global item transitional information for self-supervised augmentation.
Our method significantly outperforms state-of-the-art baseline models and can learn more accurate representations against data noise and sparsity.
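The masked-autoencoder idea behind MAERec can be sketched as follows: randomly mask items in an interaction sequence and train a model to reconstruct the masked positions as a self-supervised signal. The mask token, ratio, and masking rule here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def mask_item_sequence(seq, mask_token=0, mask_ratio=0.5, seed=0):
    """Masked-autoencoder-style augmentation sketch: hide a random subset
    of items; a sequential recommender would then be trained to predict
    the hidden items from the visible context."""
    rng = np.random.default_rng(seed)
    seq = np.asarray(seq)
    mask = rng.random(seq.shape) < mask_ratio   # positions to hide
    masked = np.where(mask, mask_token, seq)    # replace hidden items
    return masked, mask  # the model's target would be seq[mask]

masked, mask = mask_item_sequence([5, 7, 9, 11, 13, 15])
```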
arXiv Detail & Related papers (2023-05-08T10:57:56Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- A Graph Data Augmentation Strategy with Entropy Preserving [11.886325179121226]
We introduce a novel graph entropy definition as a quantitative index to evaluate the feature information of a graph.
Under considerations of preserving graph entropy, we propose an effective strategy to generate training data using a perturbed mechanism.
Our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
arXiv Detail & Related papers (2021-07-13T12:58:32Z)
- Data Augmentation for Graph Convolutional Network on Semi-Supervised Classification [6.619370466850894]
We study the problem of graph data augmentation for Graph Convolutional Networks (GCNs).
Specifically, we conduct a cosine-similarity-based cross operation on the original features to create new graph features, including new node attributes.
We also propose an attentional integrating model that fuses the hidden node embeddings encoded by these GCNs into the final node embeddings via a weighted sum.
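A minimal sketch of this pipeline, under assumptions about the details: the "cross operation" pairs each node with its most cosine-similar node and averages their features, and the attentional integration is approximated by a softmax-weighted sum over the different embedding views. Both choices are illustrative, not the paper's exact design.

```python
import numpy as np

def cosine_cross_features(x):
    """Cross-operation sketch: for each node, mix in the features of its
    most cosine-similar node to create augmented attributes. The
    nearest-neighbor pairing rule is an assumption."""
    normed = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)        # exclude self-similarity
    nearest = sim.argmax(axis=1)
    return 0.5 * (x + x[nearest])         # crossed (augmented) features

def attention_fuse(embeddings):
    """Attentional-integration stand-in: softmax weights over per-view
    mean activations, then a weighted sum of the views."""
    scores = np.array([e.mean() for e in embeddings])
    w = np.exp(scores) / np.exp(scores).sum()
    return sum(wi * e for wi, e in zip(w, embeddings))

x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
aug = cosine_cross_features(x)          # node 0 pairs with node 1, etc.
fused = attention_fuse([x, aug])
```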
arXiv Detail & Related papers (2021-06-16T15:13:51Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
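The gradient-based perturbation loop in FLAG can be sketched with a toy model: iteratively push node features in the direction that increases the loss, then train on the perturbed inputs. A linear model with squared loss stands in for a GNN so the feature gradient has a closed form; the real method backpropagates through the network.

```python
import numpy as np

def flag_style_augment(x, y, w, steps=3, step_size=0.01):
    """FLAG-style sketch: accumulate an adversarial perturbation `delta`
    on the input features by repeated sign-gradient ascent on the loss.
    The linear model and squared loss are stand-ins (assumptions)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        residual = (x + delta) @ w - y            # error on perturbed input
        grad_x = residual[:, None] * w[None, :]   # d(0.5*residual^2)/d(features)
        delta += step_size * np.sign(grad_x)      # ascent step on perturbation
    return x + delta

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
x_aug = flag_style_augment(x, y, w)   # perturbed features increase the loss
```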
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
- Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) for Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
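The "remove non-linearities" finding above can be illustrated with a small sketch: each layer is a plain linear propagation over the normalized user-item graph, and the final embedding averages all layers so early-layer (residual) signals are preserved. The toy graph and the layer-averaging rule are assumptions for illustration.

```python
import numpy as np

def linear_residual_gcn(norm_adj, emb0, n_layers=3):
    """Linear (non-linearity-free) graph convolution with residual
    aggregation: E_{k+1} = A_norm @ E_k, final embedding is the mean of
    all layer outputs so early signals are not washed out."""
    layers = [emb0]
    for _ in range(n_layers):
        layers.append(norm_adj @ layers[-1])
    return np.mean(layers, axis=0)   # residual-style combination

# Toy bipartite user-item graph: nodes 0-1 are users, 2-3 are items.
adj = np.array([[0, 0, 1, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0],
                [1, 1, 0, 0]], dtype=float)
deg = adj.sum(axis=1)                            # all degrees are nonzero here
norm_adj = adj / np.sqrt(np.outer(deg, deg))     # symmetric normalization
emb = linear_residual_gcn(norm_adj, np.eye(4))   # identity as initial embeddings
```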
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.