An Adaptive Graph Pre-training Framework for Localized Collaborative
Filtering
- URL: http://arxiv.org/abs/2112.07191v1
- Date: Tue, 14 Dec 2021 06:53:13 GMT
- Title: An Adaptive Graph Pre-training Framework for Localized Collaborative
Filtering
- Authors: Yiqi Wang, Chaozhuo Li, Zheng Liu, Mingzheng Li, Jiliang Tang, Xing
Xie, Lei Chen, Philip S. Yu
- Abstract summary: We propose ADAPT, an adaptive graph pre-training framework for localized collaborative filtering. ADAPT does not require transferring user/item embeddings, and it captures both the common knowledge across different graphs and the uniqueness of each graph.
- Score: 79.17319280791237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have been widely applied to recommendation
tasks and have achieved appealing performance. However, most GNN-based
recommendation methods suffer from data sparsity in practice. Meanwhile,
pre-training techniques have been highly successful at mitigating data
sparsity in domains such as natural language processing (NLP) and computer
vision (CV), so graph pre-training has great potential to alleviate data
sparsity in GNN-based recommendations. However, pre-training GNNs for
recommendation faces unique challenges. For example, user-item interaction
graphs in different recommendation tasks have distinct sets of users and
items, and they often exhibit different properties. Therefore, the mechanisms
commonly used in NLP and CV to transfer knowledge from pre-training tasks to
downstream tasks, such as sharing learned embeddings or feature extractors,
are not directly applicable to existing GNN-based recommendation models. To
tackle these challenges, we design an adaptive graph pre-training framework
for localized collaborative filtering (ADAPT). It does not require
transferring user/item embeddings, and it captures both the common knowledge
across different graphs and the uniqueness of each graph. Extensive
experimental results demonstrate the effectiveness and superiority of ADAPT.
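The abstract ships no code, so the following is only a minimal sketch of the core idea as described: pre-train a small GNN on locally sampled subgraphs using structural (degree-based) node features and a link-prediction objective, so that no user/item embedding table ever needs to be transferred across graphs with disjoint user and item sets. The class LocalGNN, the random-subgraph stand-in, and all hyperparameters are our assumptions, and the adaptive component that balances common and graph-specific knowledge is omitted.

```python
# Hypothetical sketch of embedding-free local pre-training (not the authors' code).
# Node features are derived from graph structure (degree), so no user/item
# embedding table needs to transfer across graphs with disjoint users/items.
import torch
import torch.nn as nn

class LocalGNN(nn.Module):
    """Tiny GCN-style encoder over a dense adjacency matrix."""
    def __init__(self, in_dim: int = 1, hidden: int = 32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)

    def forward(self, adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1.0).pow(-0.5)
        norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        h = torch.relu(self.lin1(norm @ feats))
        return norm @ self.lin2(h)  # node representations

def link_logits(h: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
    # Score an edge by the dot product of its endpoint representations.
    return (h[edges[0]] * h[edges[1]]).sum(dim=-1)

model = LocalGNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(100):
    # Stand-in for sampling a local user-item subgraph from a source graph.
    n = 20
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()
    feats = adj.sum(dim=1, keepdim=True)    # structural (degree) features
    pos = adj.nonzero().t()                 # observed interactions
    neg = torch.randint(0, n, pos.shape)    # random negative samples
    h = model(adj, feats)
    logits = torch.cat([link_logits(h, pos), link_logits(h, neg)])
    labels = torch.cat([torch.ones(pos.size(1)), torch.zeros(neg.size(1))])
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the encoder's inputs are purely structural, the same pre-trained weights can be applied to a new interaction graph with entirely different users and items, which is the transfer setting the abstract targets.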
Related papers
- GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning [17.85404473268992]
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in handling a range of graph analytical tasks.
Despite their versatility, GNNs face significant challenges in transferability, limiting their utility in real-world applications.
We propose GraphLoRA, an effective and parameter-efficient method for transferring well-trained GNNs to diverse graph domains (a generic low-rank-adaptation sketch appears after this list).
arXiv Detail & Related papers (2024-09-25T06:57:42Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and limit the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs, with a case study of semi-supervised learning (a minimal DFA sketch appears after this list).
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Graph Trend Networks for Recommendations [34.06649831739749]
The key to recommender systems is predicting how likely users are to interact with items, based on their historical online behaviors.
To exploit these interactions, there are increasing efforts to model them as a user-item bipartite graph.
Despite their success, most existing GNN-based recommender systems overlook the existence of interactions caused by unreliable behaviors.
We propose the Graph Trend Networks for recommendations (GTN) with principled designs that can capture the adaptive reliability of the interactions.
arXiv Detail & Related papers (2021-08-12T06:09:18Z)
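GraphLoRA (listed above) transfers a well-trained GNN by freezing its weights and learning small low-rank corrections for the target graph. As a rough, generic illustration of the low-rank-adaptation pattern it builds on (not GraphLoRA's actual code; the class name LoRALinear, the rank, and the scaling factor are invented for this sketch):

```python
# Generic low-rank adaptation of a frozen linear layer, as a rough analogue
# of how GraphLoRA-style methods keep pre-trained GNN weights fixed and learn
# only small low-rank corrections per target graph. Illustrative sketch only.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained weight W plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze pre-trained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts at 0,
        self.scale = alpha / rank          # so initial output matches the base layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * x (B A)^T
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Wrap a layer of a pre-trained GNN; only A and B receive gradients, so
# adapting to a new graph trains a tiny fraction of the parameters.
pretrained = nn.Linear(64, 64)
adapted = LoRALinear(pretrained, rank=4)
opt = torch.optim.Adam([p for p in adapted.parameters() if p.requires_grad])
```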
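Similarly, DFA-GNN (also listed above) replaces backpropagation with direct feedback alignment, in which the output error reaches each hidden layer through a fixed random feedback matrix rather than the transposed forward weights. Below is a minimal sketch of plain DFA on a two-layer classifier (not the paper's GNN-specific method; all layer sizes and the feedback matrix B are illustrative):

```python
# Two-layer network trained with direct feedback alignment (DFA): the output
# error is projected to the hidden layer through a fixed random matrix B
# instead of W2^T, as plain backpropagation would use.
import torch

torch.manual_seed(0)
n_in, n_hid, n_out = 8, 16, 2
W1 = torch.randn(n_in, n_hid) * 0.1
W2 = torch.randn(n_hid, n_out) * 0.1
B = torch.randn(n_out, n_hid)          # fixed random feedback matrix
lr = 0.1

x = torch.randn(32, n_in)
y = torch.randint(0, n_out, (32,))

for step in range(200):
    h_pre = x @ W1
    h = torch.tanh(h_pre)
    logits = h @ W2
    p = torch.softmax(logits, dim=1)
    e = p.clone()
    e[torch.arange(32), y] -= 1.0      # dL/dlogits for cross-entropy
    # DFA step: hidden error = (e @ B) * tanh'(h_pre); W2^T is never used.
    dh = (e @ B) * (1 - h ** 2)
    W2 -= lr * h.t() @ e / 32
    W1 -= lr * x.t() @ dh / 32
```

Because each layer's update depends only on the local activations and the randomly projected output error, the updates can be computed in parallel once the forward pass finishes, which is the efficiency argument the DFA-GNN summary alludes to.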