Diffusion Recommender Model
- URL: http://arxiv.org/abs/2304.04971v3
- Date: Wed, 25 Jun 2025 13:38:45 GMT
- Title: Diffusion Recommender Model
- Authors: Wenjie Wang, Yiyan Xu, Fuli Feng, Xinyu Lin, Xiangnan He, Tat-Seng Chua
- Abstract summary: We propose a novel Diffusion Recommender Model (named DiffRec) to learn the generative process in a denoising manner. To retain personalized information in user interactions, DiffRec reduces the added noise and avoids corrupting users' interactions into pure noise as in image synthesis.
- Score: 85.9640416600725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are widely utilized to model the generative process of user interactions. However, these generative models suffer from intrinsic limitations such as the instability of GANs and the restricted representation ability of VAEs. Such limitations hinder the accurate modeling of the complex user interaction generation procedure, such as noisy interactions caused by various interference factors. In light of the impressive advantages of Diffusion Models (DMs) over traditional generative models in image synthesis, we propose a novel Diffusion Recommender Model (named DiffRec) to learn the generative process in a denoising manner. To retain personalized information in user interactions, DiffRec reduces the added noise and avoids corrupting users' interactions into pure noise as in image synthesis. In addition, we extend traditional DMs to tackle the unique challenges in practical recommender systems: high resource costs for large-scale item prediction and temporal shifts of user preference. To this end, we propose two extensions of DiffRec: L-DiffRec clusters items for dimension compression and conducts the diffusion processes in the latent space; and T-DiffRec reweights user interactions based on the interaction timestamps to encode temporal information. We conduct extensive experiments on three datasets under multiple settings (e.g., clean training, noisy training, and temporal training). The empirical results and in-depth analysis validate the superiority of DiffRec with its two extensions over competitive baselines.
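The reduced-noise forward process and the time-aware reweighting of T-DiffRec described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the schedule values, weight range, and function names are all assumptions.

```python
import numpy as np

# Hypothetical noise schedule: unlike image diffusion, the noise cap is kept
# deliberately small so that x_T never degrades into pure noise and the
# personalized signal in the interaction vector survives (values illustrative).
T = 50
betas = np.linspace(1e-4, 5e-3, T)
alpha_bars = np.cumprod(1.0 - betas)

def forward_diffuse(x0, t, rng):
    """Sample q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def time_reweight(num_items, item_ids, timestamps, w_min=0.1, w_max=1.0):
    """T-DiffRec-style input vector: more recent interactions get
    linearly larger weights instead of a uniform binary 1."""
    x = np.zeros(num_items)
    ranks = np.argsort(np.argsort(timestamps))      # 0 = earliest interaction
    m = len(item_ids)
    weights = w_min + (w_max - w_min) * ranks / max(m - 1, 1)
    x[np.asarray(item_ids)] = weights
    return x
```

With such a schedule, even the final step retains most of the original signal, which is the point of reducing the added noise for personalized recommendation.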
Related papers
- CoDiff: Conditional Diffusion Model for Collaborative 3D Object Detection [9.28605575548509]
Collaborative 3D object detection holds significant importance in the field of autonomous driving. Due to pose estimation errors and time delays, the fusion of information across agents often results in feature representations with spatial and temporal noise. We propose CoDiff, a novel robust collaborative perception framework.
arXiv Detail & Related papers (2025-02-17T03:20:52Z) - Collaborative Diffusion Model for Recommender System [52.56609747408617]
We present a Collaborative Diffusion model for Recommender System (CDiff4Rec). CDiff4Rec generates pseudo-users from item features and leverages collaborative signals from both real and pseudo personalized neighbors. Experimental results on three public datasets show that CDiff4Rec outperforms competitors by effectively mitigating the loss of personalized information.
arXiv Detail & Related papers (2025-01-31T10:05:01Z) - Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model [66.91323540178739]
Sequential recommendation (SR) aims to predict items that users may be interested in based on their historical behavior.
We revisit SR from a novel information-theoretic perspective and find that sequential modeling methods fail to adequately capture the randomness and unpredictability of user behavior.
Inspired by fuzzy information processing theory, this paper introduces the fuzzy sets of interaction sequences to overcome the limitations and better capture the evolution of users' real interests.
arXiv Detail & Related papers (2024-10-31T14:52:01Z) - Dual Conditional Diffusion Models for Sequential Recommendation [63.82152785755723]
We propose Dual Conditional Diffusion Models for Sequential Recommendation (DCRec). DCRec integrates implicit and explicit information by embedding dual conditions into both the forward and reverse diffusion processes. This allows the model to retain valuable sequential and contextual information while leveraging explicit user-item interactions to guide the recommendation process.
arXiv Detail & Related papers (2024-10-29T11:51:06Z) - Collaborative Filtering Based on Diffusion Models: Unveiling the Potential of High-Order Connectivity [10.683635786183894]
CF-Diff is a new diffusion model-based collaborative filtering method.
It is capable of making full use of collaborative signals along with multi-hop neighbors.
It achieves remarkable gains of up to 7.29% over the best competitor.
arXiv Detail & Related papers (2024-04-22T14:49:46Z) - Diffusion Augmentation for Sequential Recommendation [47.43402785097255]
We propose Diffusion Augmentation for Sequential Recommendation (DiffuASR) for higher-quality sequence generation.
The dataset augmented by DiffuASR can be used to train sequential recommendation models directly, free from complex training procedures.
We conduct extensive experiments on three real-world datasets with three sequential recommendation models.
arXiv Detail & Related papers (2023-09-22T13:31:34Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have well-known limitations: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
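As a rough illustration of the cross-attentive denoising idea in this last paper, a single attention step might look like the following. The shapes, names, and residual update are assumptions for illustration only, not the paper's actual decoder or step-wise diffuser.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attentive_denoise(noisy_target, seq_states, Wq, Wk, Wv):
    """The noisy target-item embedding queries the encoded interaction
    sequence; the attended context nudges it toward a clean target."""
    q = noisy_target @ Wq                           # (d,) query from noisy item
    k = seq_states @ Wk                             # (L, d) keys from history
    v = seq_states @ Wv                             # (L, d) values from history
    attn = softmax(k @ q / np.sqrt(q.shape[-1]))    # (L,) weights over history
    return noisy_target + attn @ v                  # residual denoising update
```

The design choice here is that conditioning enters through attention over the user's sequence rather than simple concatenation, letting each denoising step consult the history selectively.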
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.