GPT4Rec: Graph Prompt Tuning for Streaming Recommendation
- URL: http://arxiv.org/abs/2406.08229v2
- Date: Thu, 11 Jul 2024 14:33:23 GMT
- Title: GPT4Rec: Graph Prompt Tuning for Streaming Recommendation
- Authors: Peiyan Zhang, Yuchen Yan, Xi Zhang, Liying Kang, Chaozhuo Li, Feiran Huang, Senzhang Wang, Sunghun Kim
- Abstract summary: We present GPT4Rec, a Graph Prompt Tuning method for streaming Recommendation.
In particular, GPT4Rec disentangles the graph patterns into multiple views.
It guides the model across varying interaction patterns within the user-item graph.
- Score: 30.604441550735494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of personalized recommender systems, the challenge of adapting to evolving user preferences and the continuous influx of new users and items is paramount. Conventional models, typically reliant on a static training-test approach, struggle to keep pace with these dynamic demands. Streaming recommendation, particularly through continual graph learning, has emerged as a novel solution. However, existing methods in this area either rely on historical data replay, which is increasingly impractical due to stringent data privacy regulations; or are unable to effectively address the over-stability issue; or depend on model-isolation and expansion strategies. To tackle these difficulties, we present GPT4Rec, a Graph Prompt Tuning method for streaming Recommendation. Given the evolving user-item interaction graph, GPT4Rec first disentangles the graph patterns into multiple views. After isolating specific interaction patterns and relationships in different views, GPT4Rec utilizes lightweight graph prompts to efficiently guide the model across varying interaction patterns within the user-item graph. Firstly, node-level prompts are employed to instruct the model to adapt to changes in the attributes or properties of individual nodes within the graph. Secondly, structure-level prompts guide the model in adapting to broader patterns of connectivity and relationships within the graph. Finally, view-level prompts are innovatively designed to facilitate the aggregation of information from multiple disentangled views. These prompt designs allow GPT4Rec to synthesize a comprehensive understanding of the graph, ensuring that all vital aspects of the user-item interactions are considered and effectively integrated. Experiments on four diverse real-world datasets demonstrate the effectiveness and efficiency of our proposal.
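To make the three prompt granularities described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of how lightweight prompts might be attached to a frozen graph encoder over several disentangled views. It is not the authors' implementation; `PromptedGNN`, `IdentityEncoder`, and all parameter names are illustrative assumptions, and the encoder can be any GNN exposing `forward(x, edge_index)`.
```python
import torch
import torch.nn as nn


class IdentityEncoder(nn.Module):
    """Stand-in for a pre-trained GNN encoder with signature forward(x, edge_index)."""
    def forward(self, x, edge_index):
        return x


class PromptedGNN(nn.Module):
    """Illustrative node-, structure-, and view-level graph prompts around a frozen encoder."""
    def __init__(self, base_encoder: nn.Module, dim: int, num_views: int):
        super().__init__()
        self.encoder = base_encoder
        for p in self.encoder.parameters():   # freeze the pre-trained weights
            p.requires_grad = False
        # Node-level prompt: additive vector that adapts node attributes.
        self.node_prompt = nn.Parameter(torch.zeros(dim))
        # Structure-level prompt: one transform per view, adapting connectivity patterns.
        self.struct_prompt = nn.Parameter(torch.eye(dim).repeat(num_views, 1, 1))
        # View-level prompt: query vector used to aggregate the disentangled views.
        self.view_prompt = nn.Parameter(torch.randn(dim))

    def forward(self, x, view_edge_indices):
        # x: [num_nodes, dim]; view_edge_indices: one [2, num_edges] tensor per view.
        view_embs = []
        for v, edge_index in enumerate(view_edge_indices):
            h = x + self.node_prompt                       # node-level prompting
            h = h @ self.struct_prompt[v]                  # structure-level prompting
            view_embs.append(self.encoder(h, edge_index))  # frozen encoder per view
        stacked = torch.stack(view_embs, dim=1)            # [num_nodes, num_views, dim]
        weights = torch.softmax(stacked @ self.view_prompt, dim=1)   # view-level prompting
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)          # aggregated embedding


model = PromptedGNN(IdentityEncoder(), dim=16, num_views=3)
x = torch.randn(10, 16)                                     # 10 nodes, 16-dim features
views = [torch.randint(0, 10, (2, 20)) for _ in range(3)]   # 3 disentangled edge sets
out = model(x, views)                                        # [10, 16] prompted embeddings
```
In this sketch only the prompt parameters receive gradients, which is what would make the prompting lightweight relative to re-training the encoder for each graph snapshot.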
Related papers
- Contrastive General Graph Matching with Adaptive Augmentation Sampling [5.3459881796368505]
We introduce a novel Graph-centric Contrastive framework for Graph Matching (GCGM)
GCGM capitalizes on a vast pool of graph augmentations for contrastive learning, yet without needing any side information.
Our GCGM surpasses state-of-the-art self-supervised methods across various datasets.
arXiv Detail & Related papers (2024-06-25T01:08:03Z) - When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding and Reasoning [54.84870836443311]
The paper presents a new paradigm for understanding and reasoning about graph data by integrating image encoding and multimodal technologies.
This approach enables the comprehension of graph data through an instruction-response format, utilizing GPT-4V's advanced capabilities.
The study evaluates this paradigm on various graph types, highlighting the model's strengths and weaknesses, particularly in Chinese OCR performance and complex reasoning tasks.
arXiv Detail & Related papers (2023-12-16T08:14:11Z) - Adaptive spectral graph wavelets for collaborative filtering [5.547800834335382]
Collaborative filtering is a popular approach in recommender systems, whose objective is to provide personalized item suggestions.
We introduce a spectral graph wavelet collaborative filtering framework for implicit feedback data, where users, items and their interactions are represented as a bipartite graph.
In addition to capturing the graph's local and global structures, our approach yields localization of graph signals in both spatial and spectral domains.
arXiv Detail & Related papers (2023-12-05T22:22:25Z) - GraphPro: Graph Pre-training and Prompt Learning for Recommendation [18.962982290136935]
GraphPro is a framework that incorporates parameter-efficient and dynamic graph pre-training with prompt learning.
Our framework addresses the challenge of evolving user preferences by seamlessly integrating a temporal prompt mechanism and a graph-structural prompt learning mechanism.
arXiv Detail & Related papers (2023-11-28T12:00:06Z) - APGL4SR: A Generic Framework with Adaptive and Personalized Global Collaborative Information in Sequential Recommendation [86.29366168836141]
We propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR)
APGL4SR incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
As a generic framework, APGL4SR outperforms other baselines by significant margins.
arXiv Detail & Related papers (2023-11-06T01:33:24Z) - Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
arXiv Detail & Related papers (2023-09-18T20:12:17Z) - Instant Representation Learning for Recommendation over Large Dynamic Graphs [29.41179019520622]
We propose SUPA, a novel graph neural network for dynamic multiplex heterogeneous graphs.
For each new edge, SUPA samples an influenced subgraph, updates the representations of the two interactive nodes, and propagates the interaction information to the sampled subgraph.
To train SUPA incrementally online, we propose InsLearn, an efficient workflow for single-pass training of large dynamic graphs.
arXiv Detail & Related papers (2023-05-22T15:36:10Z) - A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information to address the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z) - Enhancing Sequential Recommendation with Graph Contrastive Learning [64.05023449355036]
This paper proposes a novel sequential recommendation framework, namely Graph Contrastive Learning for Sequential Recommendation (GCL4SR)
GCL4SR employs a Weighted Item Transition Graph (WITG), built based on interaction sequences of all users, to provide global context information for each interaction and weaken the noise information in the sequence data.
Experiments on real-world datasets demonstrate that GCL4SR consistently outperforms state-of-the-art sequential recommendation methods.
arXiv Detail & Related papers (2022-05-30T03:53:31Z) - Position-enhanced and Time-aware Graph Convolutional Network for Sequential Recommendations [3.286961611175469]
We propose a new deep learning-based sequential recommendation approach based on a Position-enhanced and Time-aware Graph Convolutional Network (PTGCN)
PTGCN models the sequential patterns and temporal dynamics between user-item interactions by defining a position-enhanced and time-aware graph convolution operation.
It realizes the high-order connectivity between users and items by stacking multi-layer graph convolutions.
arXiv Detail & Related papers (2021-07-12T07:34:20Z) - TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.