Recency Dropout for Recurrent Recommender Systems
- URL: http://arxiv.org/abs/2201.11016v1
- Date: Wed, 26 Jan 2022 15:50:20 GMT
- Title: Recency Dropout for Recurrent Recommender Systems
- Authors: Bo Chang, Can Xu, Matthieu Lê, Jingchen Feng, Ya Le, Sriraj Badam,
Ed Chi, Minmin Chen
- Abstract summary: We introduce recency dropout, a simple yet effective data augmentation technique that alleviates the recency bias in recommender systems.
We demonstrate the effectiveness of recency dropout in various experimental settings, including a simulation study, offline experiments, and live experiments on a large-scale industrial recommendation platform.
- Score: 23.210278548403185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent recommender systems have been successful in capturing the temporal
dynamics in users' activity trajectories. However, recurrent neural networks
(RNNs) are known to have difficulty learning long-term dependencies. As a
consequence, RNN-based recommender systems tend to overly focus on short-term
user interests. This is referred to as the recency bias, which could negatively
affect the long-term user experience as well as the health of the ecosystem. In
this paper, we introduce the recency dropout technique, a simple yet effective
data augmentation technique to alleviate the recency bias in recurrent
recommender systems. We demonstrate the effectiveness of recency dropout in
various experimental settings including a simulation study, offline
experiments, as well as live experiments on a large-scale industrial
recommendation platform.
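The abstract describes recency dropout as a data augmentation step that counteracts an RNN's tendency to overweight the most recent user actions. A minimal sketch of one way such an augmentation could work is below; the function name, parameters, and the specific scheme of randomly truncating the tail of the history are illustrative assumptions, not the paper's exact procedure:

```python
import random

def recency_dropout(history, max_drop=5, p=0.5):
    """Illustrative data augmentation sketch: with probability p, drop up to
    max_drop of the *most recent* items from a user's activity sequence,
    so the model cannot rely solely on short-term signals."""
    if len(history) <= 1 or random.random() >= p:
        return list(history)  # leave the sequence unchanged
    # Drop between 1 and max_drop trailing items, keeping at least one.
    k = random.randint(1, min(max_drop, len(history) - 1))
    return list(history[:-k])
```

Applied per training example, this forces the recurrent model to predict from an older prefix of the trajectory, which is the intuition behind mitigating recency bias.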
Related papers
- Measuring Recency Bias In Sequential Recommendation Systems [4.797371814812293]
Recency bias in a sequential recommendation system refers to the overly high emphasis placed on recent items within a user session.
This bias can diminish the serendipity of recommendations and hinder the system's ability to capture users' long-term interests.
We propose a simple yet effective novel metric specifically designed to quantify recency bias.
arXiv Detail & Related papers (2024-09-15T13:02:50Z)
- Source Echo Chamber: Exploring the Escalation of Source Bias in User, Data, and Recommender System Feedback Loop [65.23044868332693]
We investigate the impact of source bias on the realm of recommender systems.
We show the prevalence of source bias and reveal a potential digital echo chamber with source bias amplification.
We introduce a black-box debiasing method that maintains model impartiality towards both human-generated content (HGC) and AI-generated content (AIGC).
arXiv Detail & Related papers (2024-05-28T09:34:50Z)
- Ensuring User-side Fairness in Dynamic Recommender Systems [37.20838165555877]
This paper presents the first principled study on ensuring user-side fairness in dynamic recommender systems.
We propose FAir Dynamic rEcommender (FADE), an end-to-end fine-tuning framework to dynamically ensure user-side fairness over time.
We show that FADE effectively and efficiently reduces performance disparities with little sacrifice in the overall recommendation performance.
arXiv Detail & Related papers (2023-08-29T22:03:17Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in deep feedback control (DFC) allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
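The SURF summary describes inferring pseudo-labels for unlabeled preference pairs from the confidence of the preference predictor. A hedged sketch of that filtering step (the function name, the threshold value, and the convention that `probs[i]` is the predicted probability of preferring the first segment are all assumptions for illustration):

```python
def pseudo_label(probs, threshold=0.9):
    """Confidence-based pseudo-labeling sketch for unlabeled preference pairs.
    probs[i] = predictor's probability that segment A is preferred over B.
    Keep only pairs the predictor is confident about; discard the rest."""
    labeled = []
    for i, p in enumerate(probs):
        if p >= threshold:
            labeled.append((i, 1))      # confidently prefer A
        elif p <= 1 - threshold:
            labeled.append((i, 0))      # confidently prefer B
        # otherwise: low confidence, pair is discarded
    return labeled
```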
arXiv Detail & Related papers (2022-03-18T16:50:38Z)
- Learning from a Learning User for Optimal Recommendations [43.2268992294178]
We formalize a model to capture "learning users" and design an efficient system-side learning solution.
We prove that the regret of RAES deteriorates gracefully as the convergence rate of user learning becomes worse.
Our study provides a novel perspective on modeling the feedback loop in recommendation problems.
arXiv Detail & Related papers (2022-02-03T22:45:12Z)
- Context Uncertainty in Contextual Bandits with Applications to Recommender Systems [16.597836265345634]
We propose a new type of recurrent neural networks, dubbed recurrent exploration networks (REN), to jointly perform representation learning and effective exploration in the latent space.
Our theoretical analysis shows that REN can preserve the rate-optimal sublinear regret even when there exists uncertainty in the learned representations.
Our empirical study demonstrates that REN can achieve satisfactory long-term rewards on both synthetic and real-world recommendation datasets, outperforming state-of-the-art models.
arXiv Detail & Related papers (2022-02-01T23:23:50Z)
- Supervised Advantage Actor-Critic for Recommender Systems [76.7066594130961]
We propose a negative sampling strategy for training the RL component and combine it with supervised sequential learning.
Based on sampled (negative) actions (items), we can calculate the "advantage" of a positive action over the average case.
We instantiate SNQN and SA2C with four state-of-the-art sequential recommendation models and conduct experiments on two real-world datasets.
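The summary above states that the "advantage" of a positive action is computed relative to the average over sampled negative actions. A minimal sketch of that computation (the function name and use of a plain mean as the baseline are illustrative assumptions; the paper's exact normalization may differ):

```python
def advantage(q_positive, q_negatives):
    """Sketch: advantage of the positive (observed) action's Q-value
    over the average Q-value of sampled negative actions (items)."""
    baseline = sum(q_negatives) / len(q_negatives)  # average case
    return q_positive - baseline
```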
arXiv Detail & Related papers (2021-11-05T12:51:15Z)
- Deep Exploration for Recommendation Systems [14.937000494745861]
We develop deep exploration methods for recommendation systems.
In particular, we formulate recommendation as a sequential decision problem.
Our experiments are carried out with high-fidelity industrial-grade simulators.
arXiv Detail & Related papers (2021-09-26T06:54:26Z)
- Contrastive Learning for Debiased Candidate Generation in Large-Scale Recommender Systems [84.3996727203154]
We show that a popular choice of contrastive loss is equivalent to reducing the exposure bias via inverse propensity weighting.
We further improve upon CLRec and propose Multi-CLRec, for accurate multi-intention aware bias reduction.
Our methods have been successfully deployed in Taobao, where at least four months of online A/B tests and offline analyses demonstrate substantial improvements.
arXiv Detail & Related papers (2020-05-20T08:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.