TwinCL: A Twin Graph Contrastive Learning Model for Collaborative Filtering
- URL: http://arxiv.org/abs/2409.19169v1
- Date: Fri, 27 Sep 2024 22:31:08 GMT
- Title: TwinCL: A Twin Graph Contrastive Learning Model for Collaborative Filtering
- Authors: Chengkai Liu, Jianling Wang, James Caverlee
- Abstract summary: We introduce a twin encoder in place of random augmentations, demonstrating the redundancy of traditional augmentation techniques.
Our proposed Twin Graph Contrastive Learning model -- TwinCL -- aligns positive pairs of user and item embeddings and the representations from the twin encoder.
Our theoretical analysis and experimental results show that the proposed model contributes to better recommendation accuracy and training efficiency performance.
- Score: 20.26347686022996
- License:
- Abstract: In the domain of recommendation and collaborative filtering, Graph Contrastive Learning (GCL) has become an influential approach. Nevertheless, the reasons for the effectiveness of contrastive learning are still not well understood. In this paper, we challenge the conventional use of random augmentations on graph structure or embedding space in GCL, which may disrupt the structural and semantic information inherent in Graph Neural Networks. Moreover, fixed-rate data augmentation proves to be less effective compared to augmentation with an adaptive rate. In the initial training phases, significant perturbations are more suitable, while as the training approaches convergence, milder perturbations yield better results. We introduce a twin encoder in place of random augmentations, demonstrating the redundancy of traditional augmentation techniques. The twin encoder updating mechanism ensures the generation of more diverse contrastive views in the early stages, transitioning to views with greater similarity as training progresses. In addition, we investigate the learned representations from the perspective of alignment and uniformity on a hypersphere to optimize more efficiently. Our proposed Twin Graph Contrastive Learning model -- TwinCL -- aligns positive pairs of user and item embeddings and the representations from the twin encoder while maintaining the uniformity of the embeddings on the hypersphere. Our theoretical analysis and experimental results show that the proposed model optimizing alignment and uniformity with the twin encoder contributes to better recommendation accuracy and training efficiency. In comprehensive experiments on three public datasets, our proposed TwinCL achieves an average improvement of 5.6% (NDCG@10) in recommendation accuracy with faster training speed, while effectively mitigating popularity bias.
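The abstract's two key ingredients, alignment/uniformity losses on the unit hypersphere and a momentum-updated twin encoder, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names, the loss forms (the standard alignment/uniformity formulation), and the EMA-style twin update are assumptions for illustration only.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit hypersphere.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(z1, z2, alpha=2):
    # Alignment: positive pairs (user/item or online/twin views)
    # should map to nearby points on the hypersphere.
    return np.mean(np.linalg.norm(z1 - z2, axis=1) ** alpha)

def uniformity_loss(z, t=2.0):
    # Uniformity: embeddings should spread out over the hypersphere;
    # log of the mean Gaussian potential over all distinct pairs.
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(z.shape[0], dtype=bool)
    return np.log(np.mean(np.exp(-t * sq_dists[mask])))

def momentum_update(theta_twin, theta_online, tau):
    # EMA-style twin update (illustrative): a small tau lets the twin
    # drift from the online encoder (diverse views early in training);
    # tau close to 1 keeps the views similar near convergence.
    return [tau * tw + (1.0 - tau) * on
            for tw, on in zip(theta_twin, theta_online)]
```

Under this formulation, identical positive pairs give zero alignment loss, and the uniformity term is non-positive for normalized embeddings, so the two objectives can be summed and minimized jointly.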
Related papers
- Dual Adversarial Perturbators Generate rich Views for Recommendation [16.284670207195056]
AvoGCL emulates curriculum learning by applying adversarial training to graph structures and embedding perturbations.
Experiments on three real-world datasets demonstrate that AvoGCL significantly outperforms the state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-26T15:19:35Z) - Dual-perspective Cross Contrastive Learning in Graph Transformers [33.18813968554711]
Graph contrastive learning (GCL) is a popular method for learning graph representations.
This paper proposes a framework termed dual-perspective cross graph contrastive learning (DC-GCL)
DC-GCL incorporates three modifications designed to enhance positive sample diversity and reliability.
arXiv Detail & Related papers (2024-06-01T11:11:49Z) - Mixed Supervised Graph Contrastive Learning for Recommendation [34.93725892725111]
We propose Mixed Supervised Graph Contrastive Learning for Recommendation (MixSGCL) to address these concerns.
Experiments on three real-world datasets demonstrate that MixSGCL surpasses state-of-the-art methods, achieving top performance on both accuracy and efficiency.
arXiv Detail & Related papers (2024-04-24T16:19:11Z) - Addressing Sample Inefficiency in Multi-View Representation Learning [6.621303125642322]
Non-contrastive self-supervised learning (NC-SSL) methods have shown great promise for label-free representation learning in computer vision.
We provide theoretical insights on the implicit bias of the BarlowTwins and VICReg losses that can explain these findings and guide the development of more principled recommendations.
arXiv Detail & Related papers (2023-12-17T14:14:31Z) - LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation [9.181689366185038]
Graph neural network (GNN) is a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z) - Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation [56.10351068286499]
We propose Learnable Data Augmentation for Graph Contrastive Learning (LDA-GCL)
Our methods include data augmentation learning and graph contrastive learning, which follow the InfoMin and InfoMax principles, respectively.
In implementation, our methods optimize the adversarial loss function to learn data augmentation and effective representations of users and items.
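The InfoMin/InfoMax pairing described above can be sketched generically: an augmenter is trained to make contrastive views harder (ascending a contrastive loss) while the encoder is trained to maximize agreement (descending the same loss). The InfoNCE form below and the adversarial schema in the comments are a common generic setup, not LDA-GCL's actual implementation.

```python
import numpy as np

def info_nce(z1, z2, temp=0.2):
    # InfoNCE contrastive loss: matching rows of z1/z2 are positives
    # (the diagonal); every row of z2 serves as a candidate.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Adversarial schema (sketch only): with encoder f and learnable
# augmenter g applied to graph G,
#   encoder step:   minimize  info_nce(f(g(G)), f(G))   # InfoMax
#   augmenter step: maximize  info_nce(f(g(G)), f(G))   # InfoMin
```

Alternating these two steps realizes the min-max objective: the augmenter removes shared information between views while the encoder recovers representations that still agree.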
arXiv Detail & Related papers (2023-02-05T06:55:51Z) - GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z) - Improving Covariance Conditioning of the SVD Meta-layer by Orthogonality [65.67315418971688]
Nearest Orthogonal Gradient (NOG) and Optimal Learning Rate (OLR) are proposed.
Experiments on visual recognition demonstrate that our methods can simultaneously improve the covariance conditioning and generalization.
arXiv Detail & Related papers (2022-07-05T15:39:29Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation [62.96628432641806]
Scene Graph Generation aims to first encode the visual contents within the given image and then parse them into a compact summary graph.
We first present a novel Stacked Hybrid-Attention network, which facilitates the intra-modal refinement as well as the inter-modal interaction.
We then devise an innovative Group Collaborative Learning strategy to optimize the decoder.
arXiv Detail & Related papers (2022-03-18T09:14:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.