Enhancing Recommendation with Denoising Auxiliary Task
- URL: http://arxiv.org/abs/2409.17402v1
- Date: Wed, 25 Sep 2024 22:26:29 GMT
- Title: Enhancing Recommendation with Denoising Auxiliary Task
- Authors: Pengsheng Liu, Linan Zheng, Jiale Chen, Guangfa Zhang, Yang Xu, Jinyun
Fang
- Abstract summary: Due to the arbitrariness of user behavior, the presence of noise poses a challenge to predicting their next actions in recommender systems.
We propose a novel self-supervised Auxiliary Task Joint Training (ATJT) method aimed at more accurately reweighting noisy sequences in recommender systems.
- Score: 2.819369786209738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The historical interaction sequences of users play a crucial role in
training recommender systems that can accurately predict user preferences.
However, due to the arbitrariness of user behavior, the presence of noise in
these sequences poses a challenge to predicting their next actions in
recommender systems. To address this issue, our motivation is based on the
observation that training noisy sequences and clean sequences (sequences
without noise) with equal weights can degrade the model's performance. We
propose a novel self-supervised Auxiliary Task Joint Training (ATJT) method
aimed at more accurately reweighting noisy sequences in recommender systems.
Specifically, we strategically select subsets from users' original sequences
and perform random replacements to generate artificially replaced noisy
sequences. Subsequently, we perform joint training on these artificially
replaced noisy sequences and the original sequences. Through effective
reweighting, we incorporate the training results of the noise recognition model
into the recommender model. We evaluate our method on three datasets using a
consistent base model. Experimental results demonstrate the effectiveness of
introducing a self-supervised auxiliary task to enhance the base model's
performance.
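The abstract above outlines the mechanics of ATJT: corrupt a subset of each original sequence by random item replacement, jointly train a noise recognition model on original versus replaced sequences, and use its output to reweight the recommender's loss. The sketch below is a minimal PyTorch illustration of that idea under our own assumptions; the names (make_replaced_noisy, atjt_step, replace_ratio) and the specific reweighting rule are hypothetical, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the ATJT training loop described above.
# Assumed shapes: `seqs` is (batch, max_len) of item ids with 0 as padding,
# `targets` is (batch,) of next-item ids; models are placeholders.
import torch
import torch.nn.functional as F

def make_replaced_noisy(seqs, num_items, replace_ratio=0.2):
    """Randomly replace a fraction of non-padding items with random items."""
    noisy = seqs.clone()
    mask = (torch.rand(seqs.shape, device=seqs.device) < replace_ratio) & (seqs > 0)
    random_items = torch.randint(1, num_items + 1, seqs.shape, device=seqs.device)
    noisy[mask] = random_items[mask]
    return noisy

def atjt_step(rec_model, noise_model, seqs, targets, num_items, optimizer):
    noisy_seqs = make_replaced_noisy(seqs, num_items)

    # Auxiliary task: distinguish original sequences (label 0) from
    # artificially replaced noisy sequences (label 1).
    aux_logits = noise_model(torch.cat([seqs, noisy_seqs], dim=0)).squeeze(-1)
    aux_labels = torch.cat([torch.zeros(len(seqs)), torch.ones(len(seqs))]).to(seqs.device)
    aux_loss = F.binary_cross_entropy_with_logits(aux_logits, aux_labels)

    # Main task: next-item prediction, with each sequence's loss reweighted
    # by how clean the noise model believes the original sequence is.
    clean_score = 1.0 - torch.sigmoid(noise_model(seqs)).squeeze(-1).detach()
    rec_logits = rec_model(seqs)                          # (batch, num_items)
    per_seq_loss = F.cross_entropy(rec_logits, targets, reduction="none")
    rec_loss = (clean_score * per_seq_loss).mean()

    loss = rec_loss + aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The exact way the auxiliary scores enter the recommender loss would follow the paper's formulation; the sketch only shows the generate-noisy, joint-train, and reweight loop.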
Related papers
- Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z)
- Behavior-Dependent Linear Recurrent Units for Efficient Sequential Recommendation [18.75561256311228]
RecBLR is an efficient sequential recommendation model based on behavior-dependent linear recurrent units.
Our model significantly enhances user behavior modeling and recommendation performance.
arXiv Detail & Related papers (2024-06-18T13:06:58Z)
- Multi-Level Sequence Denoising with Cross-Signal Contrastive Learning for Sequential Recommendation [13.355017204983973]
Sequential recommender systems (SRSs) aim to suggest the next item for a user based on their historical interaction sequence.
We propose a novel model named Multi-level Sequence Denoising with Cross-signal Contrastive Learning (MSDCCL) for sequential recommendation.
arXiv Detail & Related papers (2024-04-22T04:57:33Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have notable drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential Recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data-sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec)
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
- Sequence Adaptation via Reinforcement Learning in Recommender Systems [8.909115457491522]
We propose the SAR model, which learns the sequential patterns and adjusts the sequence length of user-item interactions in a personalized manner.
In addition, we optimize a joint loss function to align the accuracy of the sequential recommendations with the expected cumulative rewards of the critic network.
Our experimental evaluation on four real-world datasets demonstrates the superiority of our proposed model over several baseline approaches.
arXiv Detail & Related papers (2021-07-31T13:56:46Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model can automatically learn the rewards from users' actions based on a discriminative actor-critic network and Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- NAT: Noise-Aware Training for Robust Neural Sequence Labeling [30.91638109413785]
We propose two Noise-Aware Training (NAT) objectives that improve the robustness of sequence labeling performed on noisy input.
Our data augmentation method trains a neural model using a mixture of clean and noisy samples, whereas our stability training algorithm encourages the model to create a noise-invariant latent representation (see the sketch after this list).
Experiments on English and German named entity recognition benchmarks confirmed that NAT consistently improved robustness of popular sequence labeling models.
arXiv Detail & Related papers (2020-05-14T17:30:06Z)
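As referenced in the NAT entry above, here is a minimal sketch of the stability-training idea, again in PyTorch and again under our own assumptions: the encoder interface, the source of the perturbed batch, the MSE distance, and the mixing weight `alpha` are all hypothetical, not the NAT authors' code.

```python
# Hypothetical sketch of stability training: supervise on a mixture of clean
# and noisy samples, and penalize the distance between their latent codes.
import torch
import torch.nn.functional as F

def stability_loss(encoder, clean_batch, noisy_batch):
    """Pull latent representations of clean and perturbed inputs together."""
    z_clean = encoder(clean_batch)            # (batch, hidden)
    z_noisy = encoder(noisy_batch)            # (batch, hidden)
    return F.mse_loss(z_noisy, z_clean.detach())

def nat_style_step(encoder, task_head, clean_batch, noisy_batch, labels, alpha=0.5):
    # Data augmentation view: train the task loss on clean and noisy samples.
    logits = task_head(encoder(torch.cat([clean_batch, noisy_batch], dim=0)))
    task_loss = F.cross_entropy(logits, torch.cat([labels, labels], dim=0))
    # Stability view: add the representation-matching penalty.
    return task_loss + alpha * stability_loss(encoder, clean_batch, noisy_batch)
```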