Multi-Granularity Sequence Denoising with Weakly Supervised Signal for Sequential Recommendation
- URL: http://arxiv.org/abs/2510.10564v1
- Date: Sun, 12 Oct 2025 12:10:27 GMT
- Title: Multi-Granularity Sequence Denoising with Weakly Supervised Signal for Sequential Recommendation
- Authors: Liang Li, Zhou Yang, Xiaofei Zhu
- Abstract summary: Sequential recommendation aims to predict the next item based on user interests in historical interaction sequences. Existing research employs unsupervised methods that indirectly identify item-granularity irrelevant noise. We propose Multi-Granularity Sequence Denoising with Weakly Supervised Signal for Sequential Recommendation.
- Score: 11.795090187372773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential recommendation aims to predict the next item based on user interests in historical interaction sequences. Historical interaction sequences often contain irrelevant noisy items, which significantly hinder the performance of recommendation systems. Existing research employs unsupervised methods that indirectly identify item-granularity irrelevant noise by predicting the ground truth item. Since these methods lack explicit noise labels, they are prone to misidentifying items users are interested in as noise. Additionally, while these methods focus on removing item-granularity noise driven by the ground truth item, they overlook interest-granularity noise, limiting their ability to perform broader denoising based on user interests. To address these issues, we propose Multi-Granularity Sequence Denoising with Weakly Supervised Signal for Sequential Recommendation (MGSD-WSS). MGSD-WSS first introduces a Multiple Gaussian Kernel Perceptron module to map the original and enhanced sequences into a common representation space, and utilizes weakly supervised signals to accurately identify noisy items in the historical interaction sequence. Subsequently, it employs an item-granularity denoising module with noise-weighted contrastive learning to obtain denoised item representations. It then extracts target interest representations from the ground truth item and applies noise-weighted contrastive learning to obtain denoised interest representations. Finally, based on the denoised item and interest representations, MGSD-WSS predicts the next item. Extensive experiments on five datasets demonstrate that the proposed method significantly outperforms state-of-the-art sequential recommendation and denoising models. Our code is available at https://github.com/lalunex/MGSD-WSS.
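The abstract does not spell out the loss, but the "noise-weighted contrastive learning" it describes can be sketched as an InfoNCE-style objective whose per-item terms are down-weighted by an estimated noise probability from the weakly supervised detector. All names below (`noise_weighted_contrastive_loss`, `noise_probs`, the `1 - p` weighting) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def noise_weighted_contrastive_loss(anchors, positives, noise_probs, temperature=0.1):
    """Sketch of a noise-weighted InfoNCE loss.

    anchors, positives: (n, d) L2-normalized item representations, where
        positives[i] is the positive example for anchors[i].
    noise_probs: (n,) estimated probability that each item is noise
        (e.g. from a weakly supervised noise detector); likely-noisy
        items contribute less to the loss.
    """
    sims = anchors @ positives.T / temperature            # (n, n) similarity logits
    # row-wise log-softmax; the positive pair sits on the diagonal
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    per_item = -np.diag(log_probs)                        # standard InfoNCE per anchor
    weights = 1.0 - noise_probs                           # down-weight likely noise
    return float((weights * per_item).sum() / weights.sum())
```

With identical, well-separated anchor/positive pairs the loss is near zero, and raising an item's noise probability shrinks its influence on the average.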
Related papers
- FAIR: Focused Attention Is All You Need for Generative Recommendation [43.65370600297507]
We propose the first generative recommendation framework with focused attention, which enhances attention scores to relevant context while suppressing those to irrelevant ones. Specifically, we propose (1) a focused attention mechanism integrated into the standard Transformer, which learns two separate sets of Q and K attention weights and computes their difference as the final attention scores. We validate the effectiveness of FAIR on four public benchmarks, demonstrating its superior performance compared to existing methods.
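The "two separate sets of Q and K weights whose difference forms the final scores" idea can be sketched for a single head as follows; the weight names and the subtraction coefficient `lam` are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def focused_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Single-head sketch of focused attention: two Q/K projections
    produce two attention maps, and their (scaled) difference is used
    as the final attention scores, amplifying relevant context while
    suppressing irrelevant context."""
    d = Wq1.shape[1]
    s1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))   # "focus" map
    s2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))   # "noise" map
    scores = s1 - lam * s2                               # difference of the two maps
    return scores @ (x @ Wv)
```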
arXiv Detail & Related papers (2025-12-12T03:25:12Z) - Empowering Denoising Sequential Recommendation with Large Language Model Embeddings [18.84444501128626]
Sequential recommendation aims to capture user preferences by modeling sequential patterns in user-item interactions. To reduce the effect of noise, some works propose explicitly identifying and removing noisy items. We propose a novel framework, Interest Alignment for Denoising Sequential Recommendation (IADSR), which integrates both collaborative and semantic information.
arXiv Detail & Related papers (2025-10-05T15:10:51Z) - Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation [7.6572888950554905]
We propose Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation (DiQDiff). DiQDiff aims to extract robust guidance to understand user interests and generate distinguished items for personalized user interests within DMs. The superior recommendation performance of DiQDiff against leading approaches demonstrates its effectiveness in sequential recommendation tasks.
arXiv Detail & Related papers (2025-01-29T14:20:42Z) - Multi-granularity Interest Retrieval and Refinement Network for Long-Term User Behavior Modeling in CTR Prediction [68.90783662117936]
Click-through Rate (CTR) prediction is crucial for online personalization platforms. Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction. We propose the Multi-granularity Interest Retrieval and Refinement Network (MIRRN).
arXiv Detail & Related papers (2024-11-22T15:29:05Z) - Enhancing Recommendation with Denoising Auxiliary Task [2.819369786209738]
Due to the arbitrariness of user behavior, the presence of noise poses a challenge to predicting their next actions in recommender systems.
We propose a novel self-supervised Auxiliary Task Joint Training (ATJT) method aimed at more accurately reweighting noisy sequences in recommender systems.
arXiv Detail & Related papers (2024-09-25T22:26:29Z) - Multi-Level Sequence Denoising with Cross-Signal Contrastive Learning for Sequential Recommendation [13.355017204983973]
Sequential recommender systems (SRSs) aim to suggest next item for a user based on her historical interaction sequences.
We propose a novel model named Multi-level Sequence Denoising with Cross-signal Contrastive Learning (MSDCCL) for sequential recommendation.
arXiv Detail & Related papers (2024-04-22T04:57:33Z) - NLIP: Noise-robust Language-Image Pre-training [95.13287735264937]
We propose a principled Noise-robust Language-Image Pre-training framework (NLIP) to stabilize pre-training via two schemes: noise-harmonization and noise-completion.
Our NLIP can alleviate the common noise effects during image-text pre-training in a more efficient way.
arXiv Detail & Related papers (2022-12-14T08:19:30Z) - Inference and Denoise: Causal Inference-based Neural Speech Enhancement [83.4641575757706]
This study addresses the speech enhancement (SE) task within the causal inference paradigm by modeling the noise presence as an intervention.
The proposed causal inference-based speech enhancement (CISE) separates clean and noisy frames in an intervened noisy speech using a noise detector and assigns both sets of frames to two mask-based enhancement modules (EMs) to perform noise-conditional SE.
arXiv Detail & Related papers (2022-11-02T15:03:50Z) - Robust Semantic Communications with Masked VQ-VAE Enabled Codebook [56.63571713657059]
We propose a framework for the robust end-to-end semantic communication systems to combat the semantic noise.
To combat the semantic noise, the adversarial training with weight is developed to incorporate the samples with semantic noise in the training dataset.
We develop a feature importance module (FIM) to suppress the noise-related and task-unrelated features.
arXiv Detail & Related papers (2022-06-08T16:58:47Z) - Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile [78.1212767880785]
The meta-learner is prone to overfitting since only a few samples are available.
When handling the data with noisy labels, the meta-learner could be extremely sensitive to label noise.
We present Eigen-Reptile (ER), which updates the meta-parameters with the main direction of historical task-specific parameters.
arXiv Detail & Related papers (2022-06-04T08:48:02Z) - Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
arXiv Detail & Related papers (2022-01-16T12:38:45Z)
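The Wasserstein Self-Attention idea above can be sketched by treating each item as a diagonal Gaussian and using the negative squared 2-Wasserstein distance between item distributions as the attention logit, so distributionally close items attend to each other more. The diagonal-Gaussian closed form below is standard; the actual STOSA module additionally involves learned query/key projections and positional handling, which this illustration omits:

```python
import numpy as np

def wasserstein_attention_scores(means, stds):
    """Sketch of Wasserstein self-attention scores.

    means, stds: (n, d) parameters of n diagonal Gaussian item
    embeddings. For diagonal Gaussians, the squared 2-Wasserstein
    distance is ||mu_i - mu_j||^2 + ||sigma_i - sigma_j||^2; its
    negative serves as the attention logit, softmaxed per row.
    """
    n = means.shape[0]
    logits = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            w2_sq = np.sum((means[i] - means[j]) ** 2) \
                  + np.sum((stds[i] - stds[j]) ** 2)
            logits[i, j] = -w2_sq
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Since the distance from an item to itself is zero, each row's attention peaks on the diagonal for distinct items, and each row sums to one.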
This list is automatically generated from the titles and abstracts of the papers in this site.