RecDCL: Dual Contrastive Learning for Recommendation
- URL: http://arxiv.org/abs/2401.15635v2
- Date: Mon, 19 Feb 2024 03:09:40 GMT
- Title: RecDCL: Dual Contrastive Learning for Recommendation
- Authors: Dan Zhang and Yangliao Geng and Wenwen Gong and Zhongang Qi and Zhiyu
Chen and Xing Tang and Ying Shan and Yuxiao Dong and Jie Tang
- Abstract summary: We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
- Score: 65.6236784430981
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning (SSL) has recently achieved great success in mining
the user-item interactions for collaborative filtering. As a major paradigm,
contrastive learning (CL) based SSL helps address data sparsity in Web
platforms by contrasting the embeddings between raw and augmented data.
However, existing CL-based methods mostly focus on contrasting in a batch-wise
way, failing to exploit potential regularity in the feature dimension. This
leads to redundant solutions during the representation learning of users and
items. In this work, we investigate how to employ both batch-wise CL (BCL) and
feature-wise CL (FCL) for recommendation. We theoretically analyze the relation
between BCL and FCL, and find that combining BCL and FCL helps eliminate
redundant solutions but never misses an optimal solution. We propose a dual
contrastive learning recommendation framework -- RecDCL. In RecDCL, the FCL
objective is designed to eliminate redundant solutions on user-item positive
pairs and to optimize the uniform distributions within users and items using a
polynomial kernel for driving the representations to be orthogonal. The BCL
objective is utilized to generate contrastive embeddings on output vectors for
enhancing the robustness of the representations. Extensive experiments on four
widely-used benchmarks and one industry dataset demonstrate that RecDCL can
consistently outperform the state-of-the-art GNNs-based and SSL-based models
(with an improvement of up to 5.65% in terms of Recall@20). The source code is
publicly available (https://github.com/THUDM/RecDCL).
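To make the dual objective above concrete, the sketch below pairs a feature-wise term (aligning user-item positive pairs feature by feature and pushing the feature cross-correlation toward the identity, with a polynomial-kernel uniformity term as the abstract describes) with a batch-wise InfoNCE term on the output embeddings. It is a minimal illustration under assumed loss forms, weights, and function names, not the authors' implementation; their actual code is in the linked repository.

```python
# Hedged sketch of a dual (feature-wise + batch-wise) contrastive objective
# for user/item embeddings. Illustrative only; see the RecDCL repository
# (https://github.com/THUDM/RecDCL) for the authors' implementation.
import torch
import torch.nn.functional as F


def feature_wise_loss(user_emb, item_emb, gamma=1.0, c=2.0, degree=4):
    """FCL-style term: align user-item positive pairs per feature dimension and
    push the feature cross-correlation toward the identity (orthogonality).
    The polynomial-kernel uniformity term and all weights are assumed forms."""
    u = (user_emb - user_emb.mean(0)) / (user_emb.std(0) + 1e-8)
    v = (item_emb - item_emb.mean(0)) / (item_emb.std(0) + 1e-8)
    n, _ = u.shape
    corr = (u.T @ v) / n                                   # d x d cross-correlation
    on_diag = (torch.diagonal(corr) - 1).pow(2).sum()      # invariance on positives
    off_diag = (corr - torch.diag(torch.diagonal(corr))).pow(2).sum()  # redundancy
    # Polynomial-kernel uniformity within users and within items (assumed form):
    # minimizing it discourages large pairwise similarities.
    poly_u = ((user_emb @ user_emb.T) * gamma + c).pow(degree).mean()
    poly_v = ((item_emb @ item_emb.T) * gamma + c).pow(degree).mean()
    return on_diag + 0.005 * off_diag + 0.1 * (poly_u + poly_v)


def batch_wise_loss(emb_a, emb_b, temperature=0.2):
    """BCL-style term: InfoNCE between two views of the output embeddings
    (e.g. the embedding and an augmented/perturbed copy)."""
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.T / temperature                         # batch x batch similarity
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)


def dual_contrastive_loss(user_emb, item_emb, user_emb_aug, item_emb_aug, alpha=1.0):
    """Combine the feature-wise and batch-wise objectives."""
    fcl = feature_wise_loss(user_emb, item_emb)
    bcl = batch_wise_loss(user_emb, user_emb_aug) + batch_wise_loss(item_emb, item_emb_aug)
    return fcl + alpha * bcl
```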
Related papers
- Improved Diversity-Promoting Collaborative Metric Learning for Recommendation [127.08043409083687]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2024-09-02T07:44:48Z) - L^2CL: Embarrassingly Simple Layer-to-Layer Contrastive Learning for Graph Collaborative Filtering [33.165094795515785]
Graph neural networks (GNNs) have recently emerged as an effective approach to model neighborhood signals in collaborative filtering.
We propose L2CL, a principled Layer-to-Layer Contrastive Learning framework that contrasts representations from different layers.
We find that L2CL, using only one-hop contrastive learning paradigm, is able to capture intrinsic semantic structures and improve the quality of node representation.
arXiv Detail & Related papers (2024-07-19T12:45:21Z) - Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization [59.77647907277523]
Adversarial contrastive learning (ACL) is a technique that enhances standard contrastive learning (SCL) by incorporating adversarial data to learn robust representations.
In this paper, we propose adversarial invariant regularization (AIR) to enforce independence from style factors.
arXiv Detail & Related papers (2023-04-30T03:12:21Z) - Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection [59.77647907277523]
Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks.
ACL needs tremendous running time to generate the adversarial variants of all training data.
This paper proposes a robustness-aware coreset selection (RCS) method to speed up ACL.
arXiv Detail & Related papers (2023-02-08T03:20:14Z) - Supervised Contrastive Learning as Multi-Objective Optimization for Fine-Tuning Large Pre-trained Language Models [3.759936323189417]
Supervised Contrastive Learning (SCL) has been shown to achieve excellent performance in most classification tasks.
In this work, we formulate the SCL problem as a Multi-Objective Optimization problem for the fine-tuning phase of the RoBERTa language model.
arXiv Detail & Related papers (2022-09-28T15:13:58Z) - Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness [69.39073806630583]
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
We propose a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL).
arXiv Detail & Related papers (2022-07-22T06:30:44Z) - Decoupled Contrastive Learning [23.25775900388382]
We identify a noticeable negative-positive-coupling (NPC) effect in the widely used cross-entropy (InfoNCE) loss.
By properly addressing the NPC effect, we reach a decoupled contrastive learning (DCL) objective function (a sketch of this decoupling appears after this list).
Our approach achieves 66.9% ImageNet top-1 accuracy using batch size 256 within 200 epochs of pre-training, outperforming its SimCLR baseline by 5.1%.
arXiv Detail & Related papers (2021-10-13T16:38:43Z) - Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted as CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z)
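For the Decoupled Contrastive Learning (DCL) entry above, the sketch below contrasts standard InfoNCE with a decoupled variant in which the positive pair is removed from the denominator, so the positive and negative terms are no longer coupled through the normalizer. This is an illustrative reconstruction from the summary; the function names and details are assumptions, not the paper's reference code.

```python
# Illustrative comparison of InfoNCE vs. a decoupled (DCL-style) objective.
import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE: the positive similarity appears in the denominator,
    coupling the positive and negative gradients through the normalizer."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature                    # batch x batch similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)


def decoupled_nce(z1, z2, temperature=0.1):
    """DCL-style objective: exclude the positive pair from the denominator,
    removing the negative-positive coupling (NPC) effect."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature
    pos = torch.diagonal(sim)                        # positive similarities
    diag = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = torch.logsumexp(sim.masked_fill(diag, float("-inf")), dim=1)
    return (-pos + neg).mean()                       # log-sum-exp over negatives only
```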