Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering
- URL: http://arxiv.org/abs/2402.11523v1
- Date: Sun, 18 Feb 2024 09:46:51 GMT
- Title: Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering
- Authors: Peijie Sun, Le Wu, Kun Zhang, Xiangzhi Chen, and Meng Wang
- Abstract summary: Collaborative filtering (CF) techniques face the challenge of data sparsity.
We develop two unique supervised contrastive loss functions that effectively combine supervision signals with contrastive loss.
Using the graph-based collaborative filtering model as our backbone, we effectively enhance the performance of the recommendation model.
- Score: 23.584619027605203
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While effective in recommendation tasks, collaborative filtering (CF)
techniques face the challenge of data sparsity. Researchers have begun
leveraging contrastive learning to introduce additional self-supervised signals
to address this. However, this approach often unintentionally distances the
target user/item from their collaborative neighbors, limiting its efficacy. In
response, we propose a solution that treats the collaborative neighbors of the
anchor node as positive samples within the final objective loss function. This
paper focuses on developing two unique supervised contrastive loss functions
that effectively combine supervision signals with contrastive loss. We analyze
our proposed loss functions through the gradient lens, demonstrating that
different positive samples simultaneously influence updating the anchor node's
embeddings. These samples' impact depends on their similarities to the anchor
node and the negative samples. Using the graph-based collaborative filtering
model as our backbone and following the same data augmentation methods as the
existing contrastive learning model SGL, we effectively enhance the performance
of the recommendation model. Our proposed Neighborhood-Enhanced Supervised
Contrastive Loss (NESCL) model substitutes the contrastive loss function in SGL
with our novel loss function, showing marked performance improvement. On three
real-world datasets, Yelp2018, Gowalla, and Amazon-Book, our model surpasses
the original SGL by 10.09%, 7.09%, and 35.36% on NDCG@20, respectively.
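To make the idea concrete, the following is a minimal PyTorch sketch of a supervised contrastive loss in which an anchor's collaborative neighbors act as additional positives, in the spirit of NESCL's "out"-style variant. The function name, tensor shapes, and masking scheme are illustrative assumptions rather than the authors' implementation; the paper itself derives two loss variants and reuses SGL's graph augmentations.

```python
# Illustrative sketch (not the authors' code): supervised contrastive loss
# with collaborative neighbors as extra positives, "out"-style averaging.
import torch
import torch.nn.functional as F

def neighborhood_supcon_loss(anchor, view, pos_mask, temperature=0.2):
    """
    anchor:   (B, d) anchor-node embeddings from one augmented graph view.
    view:     (B, d) embeddings of the same batch from a second augmented view.
    pos_mask: (B, B) with 1 where column j is a positive for row i, i.e. the
              anchor's own second view (the diagonal) or one of its
              collaborative neighbors; 0 elsewhere.
    """
    anchor = F.normalize(anchor, dim=1)
    view = F.normalize(view, dim=1)
    logits = anchor @ view.t() / temperature                    # (B, B)
    # Row-wise log-softmax: every in-batch node serves as a candidate.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_mask = pos_mask.float()
    # Averaging the log-ratios over all positives lets every positive pull on
    # the anchor's embedding at once, each weighted by its softmax similarity
    # to the anchor relative to the shared negatives, which matches the
    # gradient behavior described in the abstract.
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1.0)
    return loss.mean()
```

In an SGL-style pipeline, pos_mask would presumably be built from the interaction data (for example, users who share interacted items with the anchor user), and this term would stand in for SGL's InfoNCE loss alongside the usual BPR ranking objective.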
Related papers
- Mixed Supervised Graph Contrastive Learning for Recommendation [34.93725892725111]
We propose Mixed Supervised Graph Contrastive Learning for Recommendation (MixSGCL) to address these concerns.
Experiments on three real-world datasets demonstrate that MixSGCL surpasses state-of-the-art methods, achieving top performance on both accuracy and efficiency.
arXiv Detail & Related papers (2024-04-24T16:19:11Z)
- Prototypical Contrastive Learning through Alignment and Uniformity for Recommendation [6.790779112538357]
We present Prototypical contrastive learning through Alignment and Uniformity for recommendation.
Specifically, we first propose prototypes as a latent space to ensure consistency across different augmentations from the origin graph.
The absence of explicit negatives means that directly optimizing the consistency loss between instance and prototype could easily result in dimensional collapse issues.
arXiv Detail & Related papers (2024-02-03T08:19:26Z)
- TDCGL: Two-Level Debiased Contrastive Graph Learning for Recommendation [1.5836776102398225]
The long-tailed distribution of KG entities and real-world noise cause item-entity dependency relations to deviate from items' true characteristics.
We design Two-Level Debiased Contrastive Learning (TDCL) and deploy it in the knowledge graph.
Extensive experiments on open-source datasets demonstrate that our method has excellent anti-noise capability.
arXiv Detail & Related papers (2023-10-01T03:56:38Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present a codebase that successfully replicates the results of six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Positive-Negative Equal Contrastive Loss for Semantic Segmentation [8.664491798389662]
Previous works commonly design plug-and-play modules and structural losses to effectively extract and aggregate the global context.
We propose the Positive-Negative Equal contrastive loss (PNE loss), which increases the latent impact of positive embeddings on the anchor and treats positive and negative sample pairs equally.
We conduct comprehensive experiments and achieve state-of-the-art performance on two benchmark datasets.
arXiv Detail & Related papers (2022-07-04T13:51:29Z)
- Supervised Contrastive Learning for Recommendation [6.407166061614783]
We propose a supervised contrastive learning framework that pre-trains on the user-item bipartite graph and then fine-tunes the graph convolutional neural network.
We term this learning method Supervised Contrastive Learning (SCL) and apply it to the state-of-the-art LightGCN model.
arXiv Detail & Related papers (2022-01-10T03:11:42Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision leverage adversarial data augmentation to improve the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
arXiv Detail & Related papers (2021-02-12T16:31:24Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Self-supervised Graph Learning for Recommendation [69.98671289138694]
We explore self-supervised learning on the user-item graph for recommendation.
An auxiliary self-supervised task reinforces node representation learning via self-discrimination.
Empirical studies on three benchmark datasets demonstrate the effectiveness of SGL.
arXiv Detail & Related papers (2020-10-21T06:35:26Z)
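For context on the backbone objective that NESCL replaces, below is a hedged sketch of SGL's self-discrimination loss, essentially InfoNCE between two augmented views of each node; names and shapes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of SGL-style self-discrimination: InfoNCE between a node's
# embeddings under two stochastic graph augmentations (e.g., edge dropout).
import torch
import torch.nn.functional as F

def sgl_infonce(z1, z2, temperature=0.2):
    """z1, z2: (N, d) node embeddings from two augmented views of the graph."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (N, N) scaled cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # Each node's only positive is itself in the other view; every other node
    # in the batch acts as a negative, which is precisely what can push
    # collaborative neighbors apart, the failure mode NESCL targets.
    return F.cross_entropy(logits, labels)
```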
This list is automatically generated from the titles and abstracts of the papers listed on this site.