InfoDCL: Informative Noise Enhanced Diffusion Based Contrastive Learning
- URL: http://arxiv.org/abs/2512.16576v1
- Date: Thu, 18 Dec 2025 14:15:31 GMT
- Title: InfoDCL: Informative Noise Enhanced Diffusion Based Contrastive Learning
- Authors: Xufeng Liang, Zhida Qin, Chong Zhang, Tianyu Huang, Gangyi Ding
- Abstract summary: We propose a novel diffusion-based contrastive learning framework for recommendation. We employ a single-step diffusion process that integrates noise with auxiliary semantic information to generate signals. Experiments on five real-world datasets demonstrate that InfoDCL significantly outperforms state-of-the-art methods.
- Score: 14.525824265656558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has demonstrated promising potential in recommender systems. Existing methods typically construct sparser views by randomly perturbing the original interaction graph, since the authentic user preferences are unknown to them. Owing to the sparse nature of recommendation data, this paradigm captures only insufficient semantic information. To address this issue, we propose InfoDCL, a novel diffusion-based contrastive learning framework for recommendation. Rather than injecting randomly sampled Gaussian noise, we employ a single-step diffusion process that integrates noise with auxiliary semantic information to generate signals, which are then fed to the standard diffusion process to generate authentic user preferences as contrastive views. Furthermore, based on a comprehensive analysis of the mutual influence between generation and preference learning in InfoDCL, we build a collaborative training strategy that transforms the interference between the two objectives into mutual collaboration. Additionally, we employ multiple GCN layers only during the inference stage to incorporate higher-order co-occurrence information while maintaining training efficiency. Extensive experiments on five real-world datasets demonstrate that InfoDCL significantly outperforms state-of-the-art methods. InfoDCL offers an effective solution for enhancing recommendation performance and suggests a novel paradigm for applying diffusion methods in contrastive learning frameworks.
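The core idea in the abstract, replacing pure Gaussian noise with noise that carries auxiliary semantic information before a forward diffusion step, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the mixing coefficient `lam`, the single `alpha_bar` schedule value, and the InfoNCE pairing of two noised views are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def informative_forward_step(x0, semantic, alpha_bar=0.9, lam=0.5):
    """One forward diffusion step whose noise mixes an auxiliary semantic
    signal with Gaussian noise (coefficients are hypothetical)."""
    eps = lam * semantic + np.sqrt(1.0 - lam**2) * rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def info_nce(view_a, view_b, tau=0.2):
    """Standard InfoNCE between two batches of L2-normalized views;
    matching rows are the positive pairs."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

# toy user embeddings and auxiliary semantic vectors
x0 = rng.standard_normal((8, 16))
semantic = rng.standard_normal((8, 16))
view1 = informative_forward_step(x0, semantic)
view2 = informative_forward_step(x0, semantic)
loss = info_nce(view1, view2)
```

Because the two views share `x0` and the same semantic signal but draw independent Gaussian components, they remain correlated positives while still differing, which is what the contrastive objective needs.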
Related papers
- Continuous-time Discrete-space Diffusion Model for Recommendation [25.432419904462694]
CDRec is a novel Continuous-time Discrete-space Diffusion Recommendation framework. Experiments on real-world datasets demonstrate CDRec's superior performance in both recommendation accuracy and computational efficiency.
arXiv Detail & Related papers (2025-11-15T09:06:57Z) - Diffusion-Augmented Contrastive Learning: A Noise-Robust Encoder for Biosignal Representations [0.4061135251278187]
We propose a novel hybrid framework, Diffusion-Augmented Contrastive Learning (DACL), that fuses concepts from diffusion models and supervised contrastive learning. It operates on a latent space created by a lightweight Variational Autoencoder (VAE) trained on our novel Scattering Transformer (ST) features. A U-Net style encoder is then trained with a supervised contrastive objective to learn a representation that balances class discrimination with robustness to noise across various diffusion time steps.
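The DACL summary above combines a supervised contrastive objective with latents noised at varying diffusion timesteps. A hedged sketch of that combination, using a simplified single-view supervised contrastive loss (in the style of SupCon) on toy latents rather than the paper's VAE/U-Net pipeline:

```python
import numpy as np

def sup_con_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: all same-label samples in the batch
    are positives (simplified single-view variant)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = z.shape[0]
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)             # never contrast with self
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    return -np.mean(np.where(pos, log_prob, 0.0).sum(1) / pos.sum(1))

rng = np.random.default_rng(1)
latents = rng.standard_normal((6, 8))          # stand-in for VAE latents
labels = np.array([0, 0, 1, 1, 2, 2])
t = rng.uniform(0.1, 0.9, size=(6, 1))         # per-sample diffusion "time"
noisy = np.sqrt(1 - t) * latents + np.sqrt(t) * rng.standard_normal((6, 8))
loss = sup_con_loss(noisy, labels)
```

Training against latents corrupted at many timesteps is what drives the noise robustness the abstract claims; the encoder must keep same-class samples close even when heavily noised.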
arXiv Detail & Related papers (2025-09-24T12:15:35Z) - Unveiling Contrastive Learning's Capability of Neighborhood Aggregation for Collaborative Filtering [16.02820746003461]
Graph contrastive learning (GCL) has gradually become a dominant approach in recommender systems. In this paper, we reveal via theoretical derivation that the gradient descent process of the CL objective is formally equivalent to graph convolution. We propose a novel neighborhood aggregation objective to bring users closer to all interacted items while pushing them away from other positive pairs.
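The claimed equivalence between a gradient-descent step and graph convolution can be illustrated on the alignment term alone. In this toy sketch (the quadratic objective and the learning rate of 0.5 are assumptions chosen to make the equivalence exact, not the paper's derivation), one gradient step on the mean squared distance to interacted items moves the user embedding exactly onto the neighborhood mean, i.e. a mean-aggregation graph-convolution update:

```python
import numpy as np

# toy user/item embeddings and one user's interacted items
rng = np.random.default_rng(2)
items = rng.standard_normal((5, 4))
user = rng.standard_normal(4)

# alignment objective: L(u) = mean_i ||u - v_i||^2 over interacted items i
grad = 2 * (user - items.mean(axis=0))   # dL/du
lr = 0.5
user_new = user - lr * grad              # one gradient-descent step

# with lr = 0.5 this step lands exactly on the neighborhood mean,
# i.e. a single mean-aggregation (graph-convolution) update
```

Other learning rates interpolate between the old embedding and the neighborhood mean, which is still a (damped) aggregation step.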
arXiv Detail & Related papers (2025-04-14T11:22:41Z) - Diffusion-augmented Graph Contrastive Learning for Collaborative Filter [5.6604917723826365]
Graph-based collaborative filtering has been established as a prominent approach in recommendation systems. Recent advances in Graph Contrastive Learning have demonstrated promising potential to alleviate data sparsity issues. We propose Diffusion-augmented Contrastive Learning (DGCL) for enhanced collaborative filtering.
arXiv Detail & Related papers (2025-03-20T16:15:20Z) - Exemplar-condensed Federated Class-incremental Learning [9.970891140174658]
We propose Exemplar-Condensed federated class-incremental learning (ECoral) to distil the training characteristics of real images from streaming data into informative rehearsal exemplars.
arXiv Detail & Related papers (2024-12-25T15:13:40Z) - Dual Conditional Diffusion Models for Sequential Recommendation [63.82152785755723]
We propose Dual Conditional Diffusion Models for Sequential Recommendation (DCRec). DCRec integrates implicit and explicit information by embedding dual conditions into both the forward and reverse diffusion processes. This allows the model to retain valuable sequential and contextual information while leveraging explicit user-item interactions to guide the recommendation process.
arXiv Detail & Related papers (2024-10-29T11:51:06Z) - Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation [84.45144851024257]
We propose a novel framework that aims to enhance graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes. The core idea is to map users and items into discrete codes rich in collaborative information for reliable and informative contrastive view generation.
arXiv Detail & Related papers (2024-09-09T14:04:17Z) - Disentangled Noisy Correspondence Learning [56.06801962154915]
Cross-modal retrieval is crucial in understanding latent correspondences across modalities.
DisNCL is a novel information-theoretic framework for feature Disentanglement in Noisy Correspondence Learning.
arXiv Detail & Related papers (2024-08-10T09:49:55Z) - Intent Contrastive Learning for Sequential Recommendation [86.54439927038968]
We introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
We propose to leverage the learned intents into SR models via contrastive SSL, which maximizes the agreement between a view of a sequence and its corresponding intent.
Experiments conducted on four real-world datasets demonstrate the superiority of the proposed learning paradigm.
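The intent contrastive learning recipe above (cluster sequence representations to recover latent intents, then contrast each sequence against its intent) can be sketched with plain k-means and an InfoNCE-style loss over intent centroids. The clustering routine, cluster count, and toy data here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def kmeans(x, k, iters=10):
    """Plain k-means to recover latent 'intents' from sequence encodings."""
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((x[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.stack([x[assign == c].mean(0) if (assign == c).any()
                              else centroids[c] for c in range(k)])
    return centroids, assign

def intent_contrastive_loss(seq, centroids, assign, tau=0.2):
    """Pull each sequence toward its own intent centroid, away from the
    other intents (softmax over centroids)."""
    s = seq / np.linalg.norm(seq, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    logits = s @ c.T / tau
    m = logits.max(1, keepdims=True)
    log_prob = logits - m - np.log(np.exp(logits - m).sum(1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(seq)), assign])

# two well-separated toy "intents"
seq = np.concatenate([rng.normal(0, 0.1, (5, 4)) + np.array([3, 0, 0, 0]),
                      rng.normal(0, 0.1, (5, 4)) + np.array([0, 3, 0, 0])])
centroids, assign = kmeans(seq, 2)
loss = intent_contrastive_loss(seq, centroids, assign)
```

In a full system the clustering and the contrastive update would alternate, so the intent distribution is re-estimated as the sequence encoder improves.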
arXiv Detail & Related papers (2022-02-05T09:24:13Z) - Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
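The "cross-sample neural mutual information estimator" mentioned above typically rests on the Donsker-Varadhan lower bound, where marginal pairs are formed by shuffling one variable across the batch. A hedged sketch of that estimate with a fixed bilinear critic (CSAD would use a trained network; the critic `W` and toy data here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def dv_mi_lower_bound(x, y, critic):
    """Donsker-Varadhan lower bound on I(X;Y):
    E_p(x,y)[T(x,y)] - log E_p(x)p(y)[exp(T(x,y'))],
    with y' drawn by shuffling y across the batch (cross-sample trick)."""
    joint = critic(x, y).mean()
    y_shuf = y[rng.permutation(len(y))]          # break the pairing
    t = critic(x, y_shuf)
    marginal = np.log(np.mean(np.exp(t - t.max()))) + t.max()  # log-mean-exp
    return joint - marginal

# simple fixed bilinear critic T(a, b) = a^T W b
W = rng.standard_normal((4, 4)) * 0.1
critic = lambda a, b: np.einsum('nd,de,ne->n', a, W, b)

x = rng.standard_normal((64, 4))
y = x + 0.1 * rng.standard_normal((64, 4))       # correlated pair
mi_est = dv_mi_lower_bound(x, y, critic)
```

For debiasing, the encoder is trained to *minimize* this estimate between the task representation and the bias variable while the critic is trained to maximize it, giving the adversarial setup the entry describes.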
arXiv Detail & Related papers (2021-08-11T21:17:02Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.