Empowering Denoising Sequential Recommendation with Large Language Model Embeddings
- URL: http://arxiv.org/abs/2510.04239v1
- Date: Sun, 05 Oct 2025 15:10:51 GMT
- Title: Empowering Denoising Sequential Recommendation with Large Language Model Embeddings
- Authors: Tongzhou Wu, Yuhao Wang, Maolin Wang, Chi Zhang, Xiangyu Zhao
- Abstract summary: Sequential recommendation aims to capture user preferences by modeling sequential patterns in user-item interactions. To reduce the effect of noise, some works propose explicitly identifying and removing noisy items. We propose a novel framework, Interest Alignment for Denoising Sequential Recommendation (IADSR), which integrates both collaborative and semantic information.
- Score: 18.84444501128626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential recommendation aims to capture user preferences by modeling sequential patterns in user-item interactions. However, these models are often influenced by noise such as accidental interactions, leading to suboptimal performance. Therefore, to reduce the effect of noise, some works propose explicitly identifying and removing noisy items. However, we find that relying solely on collaborative information may result in an over-denoising problem, especially for cold items. To overcome these limitations, we propose a novel framework, Interest Alignment for Denoising Sequential Recommendation (IADSR), which integrates both collaborative and semantic information. Specifically, IADSR comprises two stages: in the first stage, we obtain the collaborative and semantic embeddings of each item from a traditional sequential recommendation model and an LLM, respectively. In the second stage, we align the collaborative and semantic embeddings and then identify noise in the interaction sequence based on long-term and short-term interests captured in the collaborative and semantic modalities. Our extensive experiments on four public datasets validate the effectiveness of the proposed framework and its compatibility with different sequential recommendation systems.
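The two-stage pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the least-squares linear map used for alignment, the three-item short-term window, and the cosine-similarity scoring rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def align(collab, sem):
    # Stage 1 (sketch): project semantic (LLM) embeddings into the
    # collaborative space with a least-squares linear map W, so that
    # sem @ W approximates collab.
    W, *_ = np.linalg.lstsq(sem, collab, rcond=None)
    return sem @ W

def denoise(seq_collab, seq_sem_aligned, keep_ratio=0.8):
    # Stage 2 (sketch): fuse long-term interest (mean over the whole
    # sequence) and short-term interest (mean over the last 3 items),
    # then flag the items least similar to that fused interest as noise.
    long_term = seq_collab.mean(axis=0)
    short_term = seq_collab[-3:].mean(axis=0)
    interest = (long_term + short_term) / 2

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    # Average each item's similarity to the interest vector across both
    # modalities; keep the top keep_ratio fraction, in original order.
    scores = np.array([
        (cos(c, interest) + cos(s, interest)) / 2
        for c, s in zip(seq_collab, seq_sem_aligned)
    ])
    k = max(1, int(len(scores) * keep_ratio))
    return np.sort(np.argsort(scores)[-k:])
```

The surviving indices would then feed a downstream sequential recommender, which is how the framework stays compatible with different backbone models.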
Related papers
- Test-time Adaptive Hierarchical Co-enhanced Denoising Network for Reliable Multimodal Classification [55.56234913868664]
We propose Test-time Adaptive Hierarchical Co-enhanced Denoising Network (TAHCD) for reliable learning on multimodal data. The proposed method achieves superior classification performance, robustness, and generalization compared with state-of-the-art reliable multimodal learning approaches.
arXiv Detail & Related papers (2026-01-12T03:14:12Z)
- On Efficiency-Effectiveness Trade-off of Diffusion-based Recommenders [21.07658297352006]
We propose TA-Rec, a two-stage framework that achieves one-step generation by smoothing the denoising function during pretraining. We also introduce Adaptive Preference Alignment (APA), which aligns the denoising process with user preference adaptively based on preference pair similarity and timesteps. Experiments show that TA-Rec's two-stage objective effectively mitigates the discretization-error-induced trade-off, enhancing both the efficiency and effectiveness of diffusion-based recommenders.
arXiv Detail & Related papers (2025-10-20T07:35:12Z)
- Multi-Granularity Sequence Denoising with Weakly Supervised Signal for Sequential Recommendation [11.795090187372773]
Sequential recommendation aims to predict the next item based on user interests in historical interaction sequences. Existing research employs unsupervised methods that indirectly identify item-granularity irrelevant noise. We propose Multi-Granularity Sequence Denoising with Weakly Supervised Signal for Sequential Recommendation.
arXiv Detail & Related papers (2025-10-12T12:10:27Z)
- Multi-Modal Multi-Behavior Sequential Recommendation with Conditional Diffusion-Based Feature Denoising [1.4207530018625354]
This paper focuses on the problem of multi-modal multi-behavior sequential recommendation. We propose a novel Multi-Modal Multi-Behavior Sequential Recommendation model (M$^3$BSR). Experimental results indicate that M$^3$BSR significantly outperforms existing state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2025-08-07T12:58:34Z)
- Enhance Vision-Language Alignment with Noise [59.2608298578913]
We investigate whether the frozen model can be fine-tuned with customized noise. We propose Positive-incentive Noise (PiNI), which can fine-tune CLIP by injecting noise into both the visual and text encoders.
arXiv Detail & Related papers (2024-12-14T12:58:15Z)
- Long-Sequence Recommendation Models Need Decoupled Embeddings [49.410906935283585]
We identify and characterize a neglected deficiency in existing long-sequence recommendation models. A single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes. We propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are learned separately to fully decouple attention and representation.
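DARE's core idea, as summarized above, can be sketched with two separate embedding tables: one consulted only for attention scoring, one only for building the output representation. The table names, dimensions, and the scaled softmax attention below are illustrative assumptions, not DARE's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 1000, 32

# Two decoupled tables: attention scoring and representation no longer
# share parameters, so the two objectives cannot interfere.
attn_table = rng.normal(0, 0.1, size=(n_items, d))  # used only to score relevance
repr_table = rng.normal(0, 0.1, size=(n_items, d))  # used only to build the output

def attend(history_ids, target_id):
    # Query and keys come from the attention table...
    q = attn_table[target_id]
    k = attn_table[history_ids]
    logits = k @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # ...while the values come from the separate representation table.
    v = repr_table[history_ids]
    return w @ v  # user interest vector with respect to the target item
```

In a coupled model, `attn_table` and `repr_table` would be the same array, forcing one set of embeddings to serve both roles.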
arXiv Detail & Related papers (2024-10-03T15:45:15Z)
- When SparseMoE Meets Noisy Interactions: An Ensemble View on Denoising Recommendation [3.050721435894337]
We propose a novel Adaptive Ensemble Learning (AEL) method for denoising recommendation. AEL employs a sparse gating network as a brain, selecting suitable experts to synthesize appropriate denoising capacities. To address the model-complexity shortcoming of ensemble learning, we also propose a novel method that stacks components to create sub-recommenders.
arXiv Detail & Related papers (2024-09-19T12:55:34Z)
- Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z)
- Multi-Level Sequence Denoising with Cross-Signal Contrastive Learning for Sequential Recommendation [13.355017204983973]
Sequential recommender systems (SRSs) aim to suggest the next item for a user based on her historical interaction sequences.
We propose a novel model named Multi-level Sequence Denoising with Cross-signal Contrastive Learning (MSDCCL) for sequential recommendation.
arXiv Detail & Related papers (2024-04-22T04:57:33Z)
- Inference and Denoise: Causal Inference-based Neural Speech Enhancement [83.4641575757706]
This study addresses the speech enhancement (SE) task within the causal inference paradigm by modeling the noise presence as an intervention.
The proposed causal inference-based speech enhancement (CISE) separates clean and noisy frames in an intervened noisy speech using a noise detector and assigns both sets of frames to two mask-based enhancement modules (EMs) to perform noise-conditional SE.
arXiv Detail & Related papers (2022-11-02T15:03:50Z)
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data-sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.