Conditional Denoising Diffusion for Sequential Recommendation
- URL: http://arxiv.org/abs/2304.11433v1
- Date: Sat, 22 Apr 2023 15:32:59 GMT
- Title: Conditional Denoising Diffusion for Sequential Recommendation
- Authors: Yu Wang, Zhiwei Liu, Liangwei Yang, Philip S. Yu
- Abstract summary: Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), face challenges in sequential recommendation.
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
- Score: 62.127862728308045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models have attracted significant interest due to their ability to
handle uncertainty by learning the inherent data distributions. However, two
prominent generative models, namely Generative Adversarial Networks (GANs) and
Variational AutoEncoders (VAEs), exhibit challenges that impede achieving
optimal performance in sequential recommendation tasks. Specifically, GANs
suffer from unstable optimization, while VAEs are prone to posterior collapse
and over-smoothed generations. The sparse and noisy nature of sequential
recommendation further exacerbates these issues. In response to these
limitations, we present a conditional denoising diffusion model, which includes
a sequence encoder, a cross-attentive denoising decoder, and a step-wise
diffuser. This approach streamlines the optimization and generation process by
dividing it into simpler, tractable steps in a conditional autoregressive
manner. Furthermore, we introduce a novel optimization schema that incorporates
both cross-divergence loss and contrastive loss. This training schema
enables the model to generate high-quality sequence/item representations
while precluding collapse. We conducted comprehensive experiments on four
benchmark datasets, and the superior performance achieved by our model attests
to its efficacy.
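The conditioned forward/reverse diffusion the abstract describes can be sketched in a few lines. This is a generic illustration with made-up dimensions and a linear noise schedule, not the paper's actual architecture: the sequence encoder and cross-attentive denoising decoder are represented only by placeholder vectors, and the noise prediction is assumed perfect so the one-step reconstruction is exact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d-dimensional item embeddings, T diffusion steps.
d, T = 8, 100
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)     # cumulative signal fraction per step

def q_sample(x0, t, eps):
    """Forward diffusion: corrupt the target item embedding x0 at step t."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

# Stand-ins for the paper's learned modules (assumptions, not the real nets):
# the sequence encoder would yield the condition c, and the cross-attentive
# decoder would predict the noise (or x0) from (x_t, t, c).
x0 = rng.standard_normal(d)              # target item embedding
c = rng.standard_normal(d)               # sequence condition (placeholder)
t = 50
eps = rng.standard_normal(d)
x_t = q_sample(x0, t, eps)

# Given a perfect noise prediction, the denoised estimate recovers x0 exactly.
x0_hat = (x_t - np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alpha_bars[t])
assert np.allclose(x0_hat, x0)
```

In training, `eps` would be replaced by the decoder's prediction conditioned on `c`, and the loss would compare `x0_hat` (or the predicted noise) against the ground truth.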
Related papers
- Diffusion Augmentation for Sequential Recommendation [47.43402785097255]
We propose Diffusion Augmentation for Sequential Recommendation (DiffuASR) for higher-quality generation.
The augmented dataset by DiffuASR can be used to train the sequential recommendation models directly, free from complex training procedures.
We conduct extensive experiments on three real-world datasets with three sequential recommendation models.
arXiv Detail & Related papers (2023-09-22T13:31:34Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Imaging Inverse Problems [78.76955228709241]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the denoising network specifically to the available measured data.
We achieve substantial enhancements in OOD performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment for Markup-to-Image Generation [15.411325887412413]
This paper proposes a novel model named "Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment" (FSA-CDM)
FSA-CDM introduces contrastive positive/negative samples into the diffusion model to boost performance for markup-to-image generation.
Experiments are conducted on four benchmark datasets from different domains.
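The contrastive positive/negative samples mentioned above typically enter training through an InfoNCE-style objective. The sketch below is a generic version of that loss over cosine similarities, not FSA-CDM's exact formulation; all names and shapes are illustrative.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss: pull the positive toward the
    anchor, push negatives away. Generic sketch, not FSA-CDM's loss."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Logit 0 is the positive pair; the rest are negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    # Cross-entropy with the positive as the target class.
    return float(-logits[0] + np.log(np.exp(logits).sum()))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(16), rng.standard_normal(16)
# The loss is lower when the true positive (the anchor itself) is used
# than when a random vector plays the positive role.
assert info_nce(a, a, [b]) < info_nce(a, b, [a])
```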
arXiv Detail & Related papers (2023-08-02T13:43:03Z)
- Protein Design with Guided Discrete Diffusion [67.06148688398677]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
We propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models.
NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods.
arXiv Detail & Related papers (2023-05-31T16:31:24Z)
- DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation.
arXiv Detail & Related papers (2023-05-24T07:59:44Z)
- DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [89.49600182243306]
We reformulate the reconstruction process using a diffusion model into a noise-to-norm paradigm.
We propose a rapid one-step denoising paradigm, significantly faster than the traditional iterative denoising in diffusion models.
The segmentation sub-network predicts pixel-level anomaly scores using the input image and its anomaly-free restoration.
arXiv Detail & Related papers (2023-03-15T16:14:06Z)
- Differentiable Gaussianization Layers for Inverse Problems Regularized by Deep Generative Models [0.0]
We show that latent tensors of deep generative models can fall out of the desired high-dimensional standard Gaussian distribution during inversion.
Our approach achieves state-of-the-art performance in terms of accuracy and consistency.
arXiv Detail & Related papers (2021-12-07T17:53:09Z)
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques that model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec)
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
- Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation [25.37244686572865]
We propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.
We first introduce the adversarial training for sequence generation under the Adversarial Variational Bayes framework, which enables our model to generate high-quality latent variables.
Besides, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence.
arXiv Detail & Related papers (2021-03-19T09:01:14Z)
- An EM Approach to Non-autoregressive Conditional Sequence Generation [49.11858479436565]
Autoregressive (AR) models have been the dominating approach to conditional sequence generation.
Non-autoregressive (NAR) models have been recently proposed to reduce the latency by generating all output tokens in parallel.
This paper proposes a new approach that jointly optimizes both AR and NAR models in a unified Expectation-Maximization framework.
arXiv Detail & Related papers (2020-06-29T20:58:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.