TDCGL: Two-Level Debiased Contrastive Graph Learning for Recommendation
- URL: http://arxiv.org/abs/2310.00569v1
- Date: Sun, 1 Oct 2023 03:56:38 GMT
- Title: TDCGL: Two-Level Debiased Contrastive Graph Learning for Recommendation
- Authors: Yubo Gao, Haotian Wu
- Abstract summary: Long-tailed distribution of entities of KG and noise issues in the real world make item-entity dependent relations deviate from reflecting true characteristics.
We design the Two-Level Debiased Contrastive Learning (TDCL) and deploy it in the knowledge graph.
Considerable experiments on open-source datasets demonstrate that our method has excellent anti-noise capability.
- Score: 1.5836776102398225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph-based recommendation methods have achieved great success in the field of recommender systems. However, over-reliance on high-quality knowledge graphs is a bottleneck for such methods. Specifically, the long-tailed distribution of KG entities and real-world noise cause item-entity relations to deviate from reflecting items' true characteristics and significantly harm the modeling of user preference. Contrastive learning, a novel method employed for data augmentation and denoising, provides inspiration to fill this research gap. However, mainstream work focuses only on the long-tail properties of the number of items clicked, while ignoring that the long-tail properties of the total number of clicks per user may also affect the performance of the recommendation model. To tackle these problems, motivated by Debiased Contrastive Learning of Unsupervised Sentence Representations (DCLR), we propose the Two-Level Debiased Contrastive Graph Learning (TDCGL) model. Specifically, we design Two-Level Debiased Contrastive Learning (TDCL) and deploy it on the KG, applying it not only to User-Item pairs but also to User-User pairs in order to model higher-order relations. In addition, to reduce the bias caused by random sampling in contrastive learning, we supplement the randomly sampled negatives with noise-based generated negatives to ensure spatial uniformity of the representation space. Extensive experiments on open-source datasets demonstrate that our method has excellent anti-noise capability and significantly outperforms state-of-the-art baselines. In addition, ablation studies on the necessity of each level of TDCL are conducted.
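As a rough, illustrative sketch of the two-level objective described in the abstract (not the authors' released code), the snippet below combines an InfoNCE term over User-Item pairs, an InfoNCE term over User-User pairs, and noise-perturbed negatives added alongside the randomly sampled in-batch negatives. All names, the noise scale, and the way the second user view is built are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def two_level_debiased_contrastive_loss(user_emb, item_emb, pos_item_idx,
                                         noise_std=0.1, temperature=0.2):
    # user_emb: (B, d) user representations from a KG-based encoder
    # item_emb: (N, d) item representations
    # pos_item_idx: (B,) long tensor, index of each user's positive item
    u = F.normalize(user_emb, dim=-1)
    i = F.normalize(item_emb, dim=-1)

    # Level 1: User-Item InfoNCE, negatives drawn from the other items.
    logits_ui = u @ i.t() / temperature          # (B, N)
    loss_ui = F.cross_entropy(logits_ui, pos_item_idx)

    # Debiasing term (assumed form): noise-perturbed items act as extra
    # negatives, pushing representations toward spatial uniformity.
    noisy = F.normalize(i + noise_std * torch.randn_like(i), dim=-1)
    loss_noise = torch.logsumexp(u @ noisy.t() / temperature, dim=-1).mean()

    # Level 2: User-User InfoNCE; a second stochastic "view" of each user
    # (here just noise-perturbed, for illustration) serves as the positive.
    u2 = F.normalize(user_emb + noise_std * torch.randn_like(user_emb), dim=-1)
    logits_uu = u @ u2.t() / temperature          # (B, B)
    targets = torch.arange(u.size(0), device=u.device)
    loss_uu = F.cross_entropy(logits_uu, targets)

    return loss_ui + loss_uu + loss_noise
```

In practice such a loss would be weighted against the base recommendation objective; the exact construction of negatives and the term weighting are design choices of the paper that this sketch does not reproduce.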
Related papers
- Data Augmentation of Contrastive Learning is Estimating Positive-incentive Noise [54.24688963649581]
We scientifically investigate the connection between contrastive learning and $\pi$-noise.
Inspired by the idea of Positive-incentive Noise (Pi-Noise or $\pi$-Noise), which aims at learning reliable noise beneficial to tasks, we develop a $\pi$-noise generator.
arXiv Detail & Related papers (2024-08-19T12:07:42Z) - Dual-Channel Latent Factor Analysis Enhanced Graph Contrastive Learning for Recommendation [2.9449497738046078]
Graph Neural Networks (GNNs) are powerful learning methods for recommender systems.
Recently, the integration of contrastive learning with GNNs has demonstrated remarkable performance in recommender systems.
This study proposes a latent factor analysis (LFA) enhanced GCL approach, named LFA-GCL.
arXiv Detail & Related papers (2024-08-09T03:24:48Z) - Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering [23.584619027605203]
Collaborative filtering (CF) techniques face the challenge of data sparsity.
We develop two unique supervised contrastive loss functions that effectively combine supervision signals with contrastive loss.
Using the graph-based collaborative filtering model as our backbone, we effectively enhance the performance of the recommendation model.
arXiv Detail & Related papers (2024-02-18T09:46:51Z) - Prototypical Contrastive Learning through Alignment and Uniformity for Recommendation [6.790779112538357]
We present Prototypical contrastive learning through Alignment and Uniformity for recommendation.
Specifically, we first propose prototypes as a latent space to ensure consistency across different augmentations from the origin graph.
The absence of explicit negatives means that directly optimizing the consistency loss between instance and prototype could easily result in dimensional collapse issues.
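For reference, the alignment and uniformity properties mentioned in this entry are commonly measured with the two losses of Wang & Isola (2020); the minimal sketch below is that generic formulation, not the cited paper's implementation, and assumes L2-normalized embeddings.

```python
import torch

def alignment_loss(x, y, alpha=2):
    # x, y: L2-normalized embeddings of positive pairs, shape (B, d)
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity_loss(x, t=2):
    # x: L2-normalized embeddings, shape (B, d); log of the mean
    # pairwise Gaussian potential over all pairs in the batch
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```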
arXiv Detail & Related papers (2024-02-03T08:19:26Z) - Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z) - LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation [9.181689366185038]
Graph neural network (GNN) is a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z) - Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features while accounting for the dynamic nature of bias.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z) - WSLRec: Weakly Supervised Learning for Neural Sequential Recommendation Models [24.455665093145818]
We propose a novel model-agnostic training approach called WSLRec, which adopts a three-stage framework: pre-training, top-$k$ mining, and fine-tuning.
WSLRec resolves the incompleteness problem by pre-training models on extra weak supervisions from model-free methods like BR and ItemCF, while resolving the inaccuracy problem by leveraging the top-$k$ mining to screen out reliable user-item relevance from weak supervisions for fine-tuning.
arXiv Detail & Related papers (2022-02-28T08:55:12Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z) - Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)