Semantic Gaussian Mixture Variational Autoencoder for Sequential Recommendation
- URL: http://arxiv.org/abs/2502.16140v1
- Date: Sat, 22 Feb 2025 08:29:52 GMT
- Title: Semantic Gaussian Mixture Variational Autoencoder for Sequential Recommendation
- Authors: Beibei Li, Tao Xiang, Beihong Jin, Yiyuan Zheng, Rui Zhao,
- Abstract summary: We propose a novel VAE-based Sequential Recommendation model named SIGMA. For multi-interest elicitation, SIGMA includes a probabilistic multi-interest extraction module. Experiments on public datasets demonstrate the effectiveness of SIGMA.
- Score: 49.492451800322144
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Variational AutoEncoder (VAE) for Sequential Recommendation (SR), which learns a continuous distribution for each user-item interaction sequence rather than a deterministic embedding, is robust against data deficiency and achieves strong performance. However, existing VAE-based SR models assume a unimodal Gaussian distribution as the prior distribution of sequence representations, leading to restricted capability to capture complex user interests and limiting recommendation performance when users have more than one interest. Since it is common for users to have multiple disparate interests, we argue that it is more reasonable to establish a multimodal prior distribution in SR scenarios instead of a unimodal one. Therefore, in this paper, we propose a novel VAE-based SR model named SIGMA. SIGMA assumes that the prior of sequence representations conforms to a Gaussian mixture distribution, where each component of the distribution semantically corresponds to one of multiple interests. For multi-interest elicitation, SIGMA includes a probabilistic multi-interest extraction module that learns a unimodal Gaussian distribution for each interest according to implicit item hyper-categories. Additionally, to incorporate the multimodal interests into sequence representation learning, SIGMA constructs a multi-interest-aware ELBO, which is compatible with the Gaussian mixture prior. Extensive experiments on public datasets demonstrate the effectiveness of SIGMA. The code is available at https://github.com/libeibei95/SIGMA.
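To make the idea above concrete, below is a minimal, hypothetical PyTorch sketch of a sequence VAE whose posterior is a K-component Gaussian mixture, one component per interest, with a simple mixture-weighted KL term. All names (`MixturePriorSeqVAE`, `num_interests`, etc.) are illustrative assumptions; this is not SIGMA's actual architecture or ELBO, for which see the linked repository.

```python
# Minimal illustrative sketch (not SIGMA itself): a sequence VAE whose
# posterior is a K-component Gaussian mixture, one component per "interest".
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixturePriorSeqVAE(nn.Module):
    def __init__(self, num_items, dim=64, num_interests=4):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.mu_head = nn.Linear(dim, num_interests * dim)      # per-interest means
        self.logvar_head = nn.Linear(dim, num_interests * dim)  # per-interest log-variances
        self.weight_head = nn.Linear(dim, num_interests)        # mixture weights
        self.decoder = nn.Linear(dim, num_items)                 # next-item logits
        self.K, self.dim = num_interests, dim

    def forward(self, seq):                        # seq: (B, L) item ids
        h = self.item_emb(seq)
        _, h_last = self.encoder(h)                # (1, B, dim)
        h_last = h_last.squeeze(0)
        B = h_last.size(0)
        mu = self.mu_head(h_last).view(B, self.K, self.dim)
        logvar = self.logvar_head(h_last).view(B, self.K, self.dim)
        pi = F.softmax(self.weight_head(h_last), dim=-1)          # (B, K)

        # Reparameterized sample from each component, then mix by pi.
        eps = torch.randn_like(mu)
        z_k = mu + eps * torch.exp(0.5 * logvar)                  # (B, K, dim)
        z = (pi.unsqueeze(-1) * z_k).sum(dim=1)                   # (B, dim)

        logits = self.decoder(z)                                  # (B, num_items)

        # Mixture-weighted KL against a standard-normal prior per component;
        # only a tractable surrogate, not the paper's multi-interest-aware ELBO.
        kl_k = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)  # (B, K)
        kl = (pi * kl_k).sum(-1)                                   # (B,)
        return logits, kl
```

Training such a sketch would combine a next-item reconstruction loss with the KL term, e.g. `loss = F.cross_entropy(logits, next_item) + beta * kl.mean()`; the KL between a true Gaussian mixture posterior and a Gaussian mixture prior has no closed form, which is why the surrogate above is used here.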
Related papers
- Reviving Any-Subset Autoregressive Models with Principled Parallel Sampling and Speculative Decoding [55.2480439325792]
In arbitrary-order language models, it is an open question how to sample tokens in parallel from the correct joint distribution.
We find that a different class of models, any-subset autoregressive models (AS-ARMs), holds the solution.
We show that AS-ARMs achieve state-of-the-art performance among sub-200M parameter models on infilling benchmark tasks, and nearly match the performance of models 50X larger on code generation.
arXiv Detail & Related papers (2025-04-29T06:33:13Z)
- Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation [7.6572888950554905]
We propose Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation (DiQDiff).
DiQDiff aims to extract robust guidance to understand user interests and generate distinguished items for personalized user interests within diffusion models (DMs).
The superior recommendation performance of DiQDiff against leading approaches demonstrates its effectiveness in sequential recommendation tasks.
arXiv Detail & Related papers (2025-01-29T14:20:42Z)
- Spectrum-based Modality Representation Fusion Graph Convolutional Network for Multimodal Recommendation [7.627299398469962]
We propose a new Spectrum-based Modality Representation graph recommender.
It aims to capture both uni-modal and fusion preferences while simultaneously suppressing modality noise.
Experiments on three real-world datasets show the efficacy of our proposed model.
arXiv Detail & Related papers (2024-12-19T15:53:21Z)
- GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation [55.769720670731516]
GaVaMoE is a novel framework for explainable recommendation.
It generates tailored explanations for specific user types and preferences.
It exhibits robust performance in scenarios with sparse user-item interactions.
arXiv Detail & Related papers (2024-10-15T17:59:30Z)
- MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
arXiv Detail & Related papers (2023-08-22T04:06:56Z)
- Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization [43.19170120544387]
We propose to perform Exact Feature Distribution Matching (EFDM) by exactly matching the empirical Cumulative Distribution Functions (eCDFs) of image features.
A fast exact histogram matching (EHM) algorithm, named Sort-Matching, is employed to perform EFDM in a plug-and-play manner with minimal cost.
The effectiveness of our proposed EFDM method is verified on a variety of AST and DG tasks, demonstrating new state-of-the-art results.
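For readers unfamiliar with Sort-Matching, the following is a minimal sketch of the underlying idea, exact distribution matching by rank via sorting, on 1-D feature vectors. It illustrates the general recipe only; the paper's plug-and-play version additionally handles batched feature maps and gradient flow, which this sketch omits.

```python
# Minimal sketch of sort-based exact distribution matching: the position holding
# the i-th smallest content value receives the i-th smallest style value.
import torch

def sort_matching(content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    """content, style: 1-D feature vectors of equal length."""
    _, content_order = content.sort()      # positions of content values in ascending order
    style_sorted, _ = style.sort()         # style values in ascending order
    matched = torch.empty_like(content)
    matched[content_order] = style_sorted  # assign style values by rank
    return matched

x, y = torch.randn(8), torch.rand(8) * 10
z = sort_matching(x, y)                    # z has y's values, ordered by x's ranks
```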
arXiv Detail & Related papers (2022-03-15T09:18:14Z)
- Modeling Sequences as Distributions with Uncertainty for Sequential Recommendation [63.77513071533095]
Most existing sequential methods assume users are deterministic.
Item-item transitions might fluctuate significantly in several item aspects and exhibit randomness of user interests.
We propose a Distribution-based Transformer Sequential Recommendation (DT4SR) which injects uncertainties into sequential modeling.
arXiv Detail & Related papers (2021-06-11T04:35:21Z)
- Sparse-Interest Network for Sequential Recommendation [78.83064567614656]
We propose a novel Sparse Interest NEtwork (SINE) for sequential recommendation.
Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool.
SINE can achieve substantial improvement over state-of-the-art methods.
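As a rough, generic illustration of sparse concept selection from a shared pool (not SINE's exact formulation; all names here are hypothetical), one can score a sequence representation against every concept and keep only the top-K:

```python
# Generic sketch: pick a sparse top-K subset of concepts per user from a shared pool.
import torch
import torch.nn.functional as F

def select_sparse_concepts(user_repr, concept_pool, top_k=4):
    """user_repr: (B, d); concept_pool: (num_concepts, d).
    Returns selected concept vectors (B, top_k, d) and their weights (B, top_k)."""
    scores = user_repr @ concept_pool.t()             # similarity to every concept
    top_scores, top_idx = scores.topk(top_k, dim=-1)  # keep only the top-K concepts
    weights = F.softmax(top_scores, dim=-1)           # renormalize over the kept subset
    return concept_pool[top_idx], weights

pool = torch.randn(500, 64)    # large shared concept pool
users = torch.randn(32, 64)    # batch of sequence representations
concepts, w = select_sparse_concepts(users, pool)
```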
arXiv Detail & Related papers (2021-02-18T11:03:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.