GraphPro: Graph Pre-training and Prompt Learning for Recommendation
- URL: http://arxiv.org/abs/2311.16716v5
- Date: Mon, 19 Feb 2024 08:45:48 GMT
- Title: GraphPro: Graph Pre-training and Prompt Learning for Recommendation
- Authors: Yuhao Yang, Lianghao Xia, Da Luo, Kangyi Lin, Chao Huang
- Abstract summary: GraphPro is a framework that incorporates parameter-efficient and dynamic graph pre-training with prompt learning.
Our framework addresses the challenge of evolving user preferences by seamlessly integrating a temporal prompt mechanism and a graph-structural prompt learning mechanism.
- Score: 18.962982290136935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: GNN-based recommenders have excelled in modeling intricate user-item
interactions through multi-hop message passing. However, existing methods often
overlook the dynamic nature of evolving user-item interactions, which impedes
the adaptation to changing user preferences and distribution shifts in newly
arriving data. Thus, their scalability and performance in real-world dynamic
environments are limited. In this study, we propose GraphPro, a framework that
incorporates parameter-efficient and dynamic graph pre-training with prompt
learning. This novel combination empowers GNNs to effectively capture both
long-term user preferences and short-term behavior dynamics, enabling the
delivery of accurate and timely recommendations. Our GraphPro framework
addresses the challenge of evolving user preferences by seamlessly integrating
a temporal prompt mechanism and a graph-structural prompt learning mechanism
into the pre-trained GNN model. The temporal prompt mechanism encodes time
information on user-item interactions, allowing the model to naturally capture
temporal context, while the graph-structural prompt learning mechanism enables
the transfer of pre-trained knowledge to adapt to behavior dynamics without the
need for continuous incremental training. We further bring in a dynamic
evaluation setting for recommendation to mimic real-world dynamic scenarios and
better bridge the offline-online gap. Our extensive experiments, including a
large-scale industrial deployment, showcase the lightweight plug-in
scalability of our GraphPro when integrated with various state-of-the-art
recommenders, emphasizing the advantages of GraphPro in terms of effectiveness,
robustness and efficiency. The implementation details and source code of our
GraphPro are available in the repository at https://github.com/HKUDS/GraphPro
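
The abstract does not spell out how the temporal prompt is implemented, but the idea of encoding interaction time as a learnable signal injected into a frozen, pre-trained GNN can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the class name `TemporalPrompt`, the log-spaced time buckets, and all shapes are assumptions.

```python
import torch
import torch.nn as nn

class TemporalPrompt(nn.Module):
    """Illustrative temporal prompt: maps interaction recency to a learnable
    embedding added to node representations before GNN message passing.
    Hedged sketch only, not the GraphPro implementation."""

    def __init__(self, emb_dim: int, num_buckets: int = 64):
        super().__init__()
        # One learnable prompt vector per discretized time gap (assumption).
        self.time_prompts = nn.Embedding(num_buckets, emb_dim)
        self.num_buckets = num_buckets

    def forward(self, node_emb: torch.Tensor, time_gap: torch.Tensor) -> torch.Tensor:
        # Discretize the gap between interaction time and prediction time into
        # log-spaced buckets, then add the corresponding prompt vector.
        bucket = torch.clamp(torch.log1p(time_gap).long(), max=self.num_buckets - 1)
        return node_emb + self.time_prompts(bucket)

# Usage: inject the prompt into pre-trained (frozen) embeddings before propagation.
prompt = TemporalPrompt(emb_dim=64)
node_emb = torch.randn(1000, 64)                # pre-trained node embeddings
time_gap = torch.randint(0, 10_000, (1000,))    # seconds since last interaction
prompted_emb = prompt(node_emb, time_gap.float())
```

In a setup like this, only the prompt parameters would be tuned on newly arriving interactions while the pre-trained GNN weights stay fixed, which is consistent with the parameter-efficient, no-incremental-retraining claim above.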
Related papers
- GPT4Rec: Graph Prompt Tuning for Streaming Recommendation [30.604441550735494]
We present GPT4Rec, a Graph Prompt Tuning method for streaming Recommendation.
In particular, GPT4Rec disentangles the graph patterns into multiple views.
It guides the model across varying interaction patterns within the user-item graph.
arXiv Detail & Related papers (2024-06-12T13:59:31Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z)
- HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks [24.435068514392487]
HetGPT is a post-training prompting framework for graph neural networks.
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
arXiv Detail & Related papers (2023-10-23T19:35:57Z)
- EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph Learning [92.71579608528907]
This paper aims to design an easy-to-use pipeline (termed EasyDGL) composed of three key modules with both strong fitting ability and interpretability.
EasyDGL can effectively quantify the predictive power of the frequency content that a model learns from the evolving graph data.
arXiv Detail & Related papers (2023-03-22T06:35:08Z)
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
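
As a rough illustration of the time-slicing step described above (not DRL-SRe's actual code; the function name, equal-width slicing, and the use of SciPy sparse matrices are assumptions), interactions can be partitioned by timestamp and one user-item graph built per slice:

```python
import numpy as np
from scipy.sparse import csr_matrix

def build_time_sliced_graphs(interactions, num_users, num_items, num_slices):
    """Partition (user, item, timestamp) triples into equal-width time slices
    and build one sparse user-item interaction matrix per slice.
    Illustrative sketch only; DRL-SRe's exact construction may differ."""
    ts = np.array([t for _, _, t in interactions], dtype=np.float64)
    edges = np.linspace(ts.min(), ts.max(), num_slices + 1)
    slices = []
    for s in range(num_slices):
        rows, cols = [], []
        for u, i, t in interactions:
            # Include the right boundary in the final slice.
            if edges[s] <= t < edges[s + 1] or (s == num_slices - 1 and t == edges[-1]):
                rows.append(u)
                cols.append(i)
        data = np.ones(len(rows), dtype=np.float32)
        slices.append(csr_matrix((data, (rows, cols)), shape=(num_users, num_items)))
    return slices
```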
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
- Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
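
Reading "automatically select data augmentations" generically, one simple formulation is a learnable categorical distribution over an augmentation pool, sampled with a straight-through Gumbel-softmax so the contrastive loss can update the selection probabilities. The sketch below illustrates only that generic idea; it is not the paper's exact bi-level optimization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentationSelector(nn.Module):
    """Learnable distribution over a pool of graph augmentations.
    Generic sketch of adaptive augmentation selection, not the paper's method."""

    def __init__(self, augmentations):
        super().__init__()
        self.augmentations = augmentations            # list of callables: graph -> graph
        self.logits = nn.Parameter(torch.zeros(len(augmentations)))

    def forward(self, graph, tau: float = 1.0):
        # Straight-through Gumbel-softmax: a hard one-hot selection in the
        # forward pass, with gradients flowing to self.logits in the backward pass.
        weights = F.gumbel_softmax(self.logits, tau=tau, hard=True)
        idx = int(weights.argmax())
        # Multiply the returned weight into the augmented view's embedding so
        # the contrastive loss can adjust the selection probabilities.
        return self.augmentations[idx](graph), weights[idx]
```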
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
- TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
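
For intuition, a generic temporal contrastive objective on a dynamic graph can treat the same node at two adjacent timestamps as a positive pair and other nodes in the batch as negatives. The InfoNCE sketch below is such a generic loss, not necessarily TCL's exact pairing or objective.

```python
import torch
import torch.nn.functional as F

def temporal_info_nce(z_t: torch.Tensor, z_next: torch.Tensor, temperature: float = 0.2):
    """Generic InfoNCE: row i of z_t (a node at time t) is pulled toward row i
    of z_next (the same node at the next timestamp) and pushed away from the
    other rows in the batch. Illustrative only."""
    z_t = F.normalize(z_t, dim=-1)
    z_next = F.normalize(z_next, dim=-1)
    logits = z_t @ z_next.t() / temperature          # [N, N] similarity matrix
    labels = torch.arange(z_t.size(0), device=z_t.device)
    return F.cross_entropy(logits, labels)
```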
arXiv Detail & Related papers (2021-05-17T15:33:25Z)
- GraphSAIL: Graph Structure Aware Incremental Learning for Recommender Systems [47.51104205511256]
We develop a Graph Structure Aware Incremental Learning framework, GraphSAIL, to address the commonly experienced catastrophic forgetting problem.
Our approach preserves a user's long-term preference (or an item's long-term property) during incremental model updating.
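
The abstract does not specify how long-term preferences are preserved; a common distillation-style way to express that goal is a regularizer that penalizes drift of user/item embeddings from the previous model checkpoint during the incremental update. The sketch below shows only that simplified idea, not GraphSAIL's full set of structure-aware distillation losses.

```python
import torch
import torch.nn.functional as F

def preference_preservation_loss(new_emb: torch.Tensor,
                                 old_emb: torch.Tensor,
                                 weight: float = 0.1) -> torch.Tensor:
    """Simplified distillation-style regularizer: keep user/item embeddings of
    the incrementally updated model close to those of the previous model.
    Illustrative only."""
    return weight * F.mse_loss(new_emb, old_emb.detach())

# Hypothetical usage inside an incremental training step:
# loss = rec_loss + preference_preservation_loss(model.user_emb.weight,
#                                                prev_model.user_emb.weight)
```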
arXiv Detail & Related papers (2020-08-25T04:33:59Z)
- Temporal Graph Networks for Deep Learning on Dynamic Graphs [4.5158585619109495]
We present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events.
Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while also being more computationally efficient.
arXiv Detail & Related papers (2020-06-18T16:06:18Z)
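
A compressed sketch of the memory-module idea, a per-node state updated by a recurrent cell whenever a timed interaction arrives, is given below. The class name, shapes, and the GRU-based updater are illustrative assumptions rather than a faithful reproduction of TGN.

```python
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    """Minimal sketch of a TGN-style memory: each node keeps a state vector
    that a GRU cell updates whenever a timed interaction (event) arrives.
    Illustrative only; the real TGN also batches messages and embeds time gaps."""

    def __init__(self, num_nodes: int, mem_dim: int, msg_dim: int):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.register_buffer("last_update", torch.zeros(num_nodes))
        self.updater = nn.GRUCell(msg_dim + 1, mem_dim)  # message + time gap

    def update(self, node_ids: torch.Tensor, messages: torch.Tensor,
               timestamps: torch.Tensor) -> None:
        # Time elapsed since each node's last event, appended to its message.
        delta_t = (timestamps - self.last_update[node_ids]).unsqueeze(-1)
        inp = torch.cat([messages, delta_t], dim=-1)
        # Detached here to keep the sketch simple; the actual TGN backpropagates
        # through memory within a training batch.
        self.memory[node_ids] = self.updater(inp, self.memory[node_ids]).detach()
        self.last_update[node_ids] = timestamps
```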