Mixed Supervised Graph Contrastive Learning for Recommendation
- URL: http://arxiv.org/abs/2404.15954v2
- Date: Thu, 25 Apr 2024 19:31:38 GMT
- Title: Mixed Supervised Graph Contrastive Learning for Recommendation
- Authors: Weizhi Zhang, Liangwei Yang, Zihe Song, Henry Peng Zou, Ke Xu, Yuanjie Zhu, Philip S. Yu
- Abstract summary: We propose Mixed Supervised Graph Contrastive Learning for Recommendation (MixSGCL) to address these concerns.
Experiments on three real-world datasets demonstrate that MixSGCL surpasses state-of-the-art methods, achieving top performance in both accuracy and efficiency.
- Score: 34.93725892725111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems (RecSys) play a vital role in online platforms, offering users personalized suggestions amidst vast information. Graph contrastive learning aims to learn from high-order collaborative filtering signals with unsupervised augmentation on the user-item bipartite graph, and it predominantly relies on a multi-task learning framework that combines the pair-wise recommendation loss with a contrastive loss. This decoupled design can cause inconsistent optimization directions across the two losses, leading to longer convergence time and even sub-optimal performance. Besides, the self-supervised contrastive loss falls short of alleviating the data sparsity issue in RecSys, as it learns to differentiate users/items across views without providing extra supervised collaborative filtering signals during augmentation. In this paper, we propose Mixed Supervised Graph Contrastive Learning for Recommendation (MixSGCL) to address these concerns. MixSGCL integrates the training of the recommendation and unsupervised contrastive losses into a single supervised contrastive learning loss, aligning the two tasks within one optimization direction. To cope with the data sparsity issue, instead of unsupervised augmentation, we further propose node-wise and edge-wise mixup to mine more direct supervised collaborative filtering signals from existing user-item interactions. Extensive experiments on three real-world datasets demonstrate that MixSGCL surpasses state-of-the-art methods, achieving top performance in both accuracy and efficiency. These results validate the effectiveness of MixSGCL's coupled design for supervised graph contrastive learning.
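To make the coupled design concrete, below is a minimal PyTorch sketch of a supervised contrastive objective that treats observed user-item interactions as positives, together with a node-wise mixup helper. The function names, tensor shapes, and exact formulation are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(user_emb, item_emb, pos_mask, temperature=0.2):
    # user_emb: [B, d] user embeddings from a graph encoder
    # item_emb: [M, d] candidate item embeddings
    # pos_mask: [B, M] 1.0 where the user interacted with the item, else 0.0
    u = F.normalize(user_emb, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    logits = u @ v.t() / temperature                      # cosine similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood over each user's observed positives, so the
    # recommendation signal and the contrastive signal share one objective.
    pos_count = pos_mask.sum(dim=1).clamp(min=1.0)
    return -((pos_mask * log_prob).sum(dim=1) / pos_count).mean()

def node_mixup(emb_a, emb_b, alpha=1.0):
    # Node-wise mixup: interpolate two node embeddings to synthesize an
    # extra supervised positive instead of a random, unsupervised view.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * emb_a + (1.0 - lam) * emb_b
```

In a training loop, mixed embeddings would be fed through the same loss, so a single gradient direction drives both recommendation and contrastive learning.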
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- TwinCL: A Twin Graph Contrastive Learning Model for Collaborative Filtering [20.26347686022996]
We introduce a twin encoder in place of random augmentations, demonstrating the redundancy of traditional augmentation techniques.
Our proposed Twin Graph Contrastive Learning model -- TwinCL -- aligns positive pairs of user and item embeddings and the representations from the twin encoder.
Our theoretical analysis and experimental results show that the proposed model contributes to better recommendation accuracy and training efficiency performance.
arXiv Detail & Related papers (2024-09-27T22:31:08Z)
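As a rough illustration of TwinCL's twin-encoder idea (replacing random augmentations), here is a hypothetical PyTorch sketch that maintains a momentum-updated copy of the main graph encoder; the class and update rule are assumptions, not the authors' reference code.

```python
import copy
import torch

class TwinEncoder(torch.nn.Module):
    # A momentum-updated "twin" of the main encoder can stand in for random
    # graph augmentations: its slowly moving weights provide a second,
    # stable view of every node.
    def __init__(self, encoder, momentum=0.99):
        super().__init__()
        self.online = encoder
        self.twin = copy.deepcopy(encoder)
        for p in self.twin.parameters():
            p.requires_grad = False
        self.momentum = momentum

    @torch.no_grad()
    def update_twin(self):
        # Exponential moving average of the online encoder's weights.
        for p_o, p_t in zip(self.online.parameters(), self.twin.parameters()):
            p_t.mul_(self.momentum).add_(p_o, alpha=1.0 - self.momentum)

    def forward(self, graph, features):
        z_online = self.online(graph, features)
        with torch.no_grad():
            z_twin = self.twin(graph, features)
        return z_online, z_twin  # positive pair for the contrastive loss
```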
- Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation [84.45144851024257]
CoGCL aims to enhance graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes.
We introduce a multi-level vector quantizer in an end-to-end manner to quantize user and item representations into discrete codes.
For neighborhood structure, we propose virtual neighbor augmentation by treating discrete codes as virtual neighbors.
Regarding semantic relevance, we identify similar users/items based on shared discrete codes and interaction targets to generate the semantically relevant view.
arXiv Detail & Related papers (2024-09-09T14:04:17Z)
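A single-level nearest-neighbor quantizer illustrates how continuous user/item representations can be mapped to discrete codes; CoGCL uses a multi-level, end-to-end variant, so this PyTorch sketch is only a simplified assumption of the mechanism.

```python
import torch

class VectorQuantizer(torch.nn.Module):
    # Minimal single-level quantizer: maps each continuous representation
    # to its nearest codebook entry, yielding a discrete code per user/item.
    def __init__(self, num_codes=256, dim=64):
        super().__init__()
        self.codebook = torch.nn.Embedding(num_codes, dim)

    def forward(self, z):                              # z: [N, dim]
        dist = torch.cdist(z, self.codebook.weight)    # [N, num_codes]
        codes = dist.argmin(dim=1)                     # discrete code per node
        z_q = self.codebook(codes)
        # Straight-through estimator keeps gradients flowing to the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, codes
```

Nodes that share a code can then serve as virtual neighbors, and users/items with overlapping codes can form the semantically relevant view.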
- Dual Adversarial Perturbators Generate rich Views for Recommendation [16.284670207195056]
AvoGCL emulates curriculum learning by applying adversarial training to graph structures and embedding perturbations.
Experiments on three real-world datasets demonstrate that AvoGCL significantly outperforms the state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-26T15:19:35Z)
- Towards Robust Recommendation via Decision Boundary-aware Graph Contrastive Learning [25.514007761856632]
Graph contrastive learning (GCL) has received increasing attention in recommender systems due to its effectiveness in reducing bias caused by data sparsity.
We argue that existing GCL-based methods struggle to balance semantic invariance and view hardness across the dynamic training process.
We propose a novel GCL-based recommendation framework RGCL, which effectively maintains the semantic invariance of contrastive pairs and dynamically adapts as the model capability evolves.
arXiv Detail & Related papers (2024-07-14T13:03:35Z)
- Bilateral Unsymmetrical Graph Contrastive Learning for Recommendation [12.945782054710113]
We propose a novel framework for recommendation tasks called Bilateral Unsymmetrical Graph Contrastive Learning (BusGCL).
BusGCL considers the bilateral asymmetry in relation density between user and item nodes, using bilateral slicing contrastive training to reason better over sliced user and item graphs.
Comprehensive experiments on two public datasets demonstrate the superiority of BusGCL over various recommendation methods.
arXiv Detail & Related papers (2024-03-22T09:58:33Z)
- Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering [23.584619027605203]
Collaborative filtering (CF) techniques face the challenge of data sparsity.
We develop two unique supervised contrastive loss functions that effectively combine supervision signals with contrastive loss.
Using a graph-based collaborative filtering model as our backbone, we effectively enhance the performance of the recommendation model.
arXiv Detail & Related papers (2024-02-18T09:46:51Z)
- Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation [56.10351068286499]
We propose Learnable Data Augmentation for Graph Contrastive Learning (LDA-GCL).
Our methods include data augmentation learning and graph contrastive learning, which follow the InfoMin and InfoMax principles, respectively.
In implementation, our methods optimize the adversarial loss function to learn data augmentation and effective representations of users and items.
arXiv Detail & Related papers (2023-02-05T06:55:51Z)
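The InfoMin/InfoMax interplay in LDA-GCL can be pictured as a learnable augmenter trained against the encoder. The sketch below, a hypothetical PyTorch module rather than LDA-GCL's actual code, scores each user-item edge and samples a differentiable keep-mask: the augmenter is trained to minimize mutual information between views (InfoMin) while the encoder maximizes agreement across them (InfoMax).

```python
import torch

class LearnableEdgeDropper(torch.nn.Module):
    # Learns a keep-probability per edge from its endpoint embeddings, so
    # the augmentation itself is optimized adversarially against the encoder.
    def __init__(self, dim):
        super().__init__()
        self.score = torch.nn.Linear(2 * dim, 1)

    def forward(self, edge_index, node_emb):
        src, dst = edge_index                      # [2, E] user-item edges
        h = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        keep_prob = torch.sigmoid(self.score(h)).squeeze(-1)
        # Relaxed Bernoulli keeps the sampling step differentiable.
        mask = torch.distributions.RelaxedBernoulli(
            temperature=torch.tensor(0.5), probs=keep_prob).rsample()
        return mask                                # soft edge weights in (0, 1)
```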
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation [62.96628432641806]
Scene Graph Generation aims to first encode the visual contents within the given image and then parse them into a compact summary graph.
We first present a novel Stacked Hybrid-Attention network, which facilitates the intra-modal refinement as well as the inter-modal interaction.
We then devise an innovative Group Collaborative Learning strategy to optimize the decoder.
arXiv Detail & Related papers (2022-03-18T09:14:13Z)
- Self-supervised Graph Learning for Recommendation [69.98671289138694]
We explore self-supervised learning on the user-item graph for recommendation.
An auxiliary self-supervised task reinforces node representation learning via self-discrimination.
Empirical studies on three benchmark datasets demonstrate the effectiveness of SGL.
arXiv Detail & Related papers (2020-10-21T06:35:26Z)
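SGL's self-discrimination is typically realized as an InfoNCE objective between two augmented views of the same graph. The following PyTorch sketch (names and temperature are assumptions, not SGL's released code) shows the auxiliary loss.

```python
import torch
import torch.nn.functional as F

def self_discrimination_loss(z1, z2, temperature=0.2):
    # InfoNCE over two augmented views of the user-item graph: each node's
    # view-1 embedding must identify its own view-2 embedding among all
    # other nodes in the batch (self-discrimination).
    z1 = F.normalize(z1, dim=-1)                  # [N, d], view 1
    z2 = F.normalize(z2, dim=-1)                  # [N, d], view 2
    logits = z1 @ z2.t() / temperature            # [N, N]
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```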