Dynamic Embedding Size Search with Minimum Regret for Streaming
Recommender System
- URL: http://arxiv.org/abs/2308.07760v1
- Date: Tue, 15 Aug 2023 13:27:18 GMT
- Title: Dynamic Embedding Size Search with Minimum Regret for Streaming
Recommender System
- Authors: Bowei He, Xu He, Renrui Zhang, Yingxue Zhang, Ruiming Tang, Chen Ma
- Abstract summary: We show that setting an identical and static embedding size is sub-optimal in terms of recommendation performance and memory cost.
We propose a method to minimize the embedding size selection regret on both user and item sides in a non-stationary manner.
- Score: 39.78277554870799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the continuous increase of users and items, conventional recommender
systems trained on static datasets can hardly adapt to changing environments.
The high-throughput data requires the model to be updated in a timely manner
for capturing the user interest dynamics, which leads to the emergence of
streaming recommender systems. Due to the prevalence of deep learning-based
recommender systems, the embedding layer is widely adopted to represent the
characteristics of users, items, and other features in low-dimensional vectors.
However, it has been proved that setting an identical and static embedding size
is sub-optimal in terms of recommendation performance and memory cost,
especially for streaming recommendations. To tackle this problem, we first
rethink the streaming model update process and model the dynamic embedding size
search as a bandit problem. Then, we analyze and quantify the factors that
influence the optimal embedding sizes from the statistics perspective. Based on
this, we propose the \textbf{D}ynamic \textbf{E}mbedding \textbf{S}ize
\textbf{S}earch (\textbf{DESS}) method to minimize the embedding size selection
regret on both user and item sides in a non-stationary manner. Theoretically,
we obtain a sublinear regret upper bound superior to previous methods.
Empirical results across two recommendation tasks on four public datasets also
demonstrate that our approach can achieve better streaming recommendation
performance with lower memory cost and higher time efficiency.
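The abstract casts embedding size selection as a non-stationary bandit problem: each candidate size is an arm, and the policy must keep adapting as the data stream drifts. As a rough illustration of that framing only (not the paper's DESS algorithm), the sketch below uses a sliding-window UCB policy, a standard non-stationary bandit technique; the candidate sizes, reward values, and drift point are all invented for the example.

```python
import math
import random
from collections import deque

class SlidingWindowUCB:
    """Choose among candidate embedding sizes (bandit arms) with a
    sliding-window UCB policy. The window discards stale rewards so the
    choice can track a drifting data stream. Generic non-stationary-bandit
    sketch, not the paper's DESS algorithm."""

    def __init__(self, arms, window=100, c=0.5):
        self.arms = list(arms)   # candidate embedding sizes
        self.window = window     # only the last `window` pulls count
        self.c = c               # exploration strength
        self.history = deque()   # recent (arm_index, reward) pairs

    def select(self):
        counts = [0] * len(self.arms)
        sums = [0.0] * len(self.arms)
        for i, r in self.history:
            counts[i] += 1
            sums[i] += r
        # Play any arm absent from the window before applying the UCB rule.
        for i, n in enumerate(counts):
            if n == 0:
                return i
        t = len(self.history)
        return max(
            range(len(self.arms)),
            key=lambda i: sums[i] / counts[i]
                          + self.c * math.sqrt(math.log(t) / counts[i]),
        )

    def update(self, arm_index, reward):
        self.history.append((arm_index, reward))
        if len(self.history) > self.window:
            self.history.popleft()

random.seed(0)
bandit = SlidingWindowUCB(arms=[8, 16, 32, 64])
# Hypothetical stream: the reward of each size (e.g. held-out accuracy)
# drifts midway, so the best embedding size changes during the stream.
phase1 = {8: 0.55, 16: 0.75, 32: 0.65, 64: 0.50}
phase2 = {8: 0.50, 16: 0.55, 32: 0.60, 64: 0.85}
for step in range(600):
    means = phase1 if step < 300 else phase2
    i = bandit.select()
    bandit.update(i, means[bandit.arms[i]] + random.gauss(0, 0.03))
print("embedding size favored at the end of the stream:",
      bandit.arms[bandit.select()])
```

Because old pulls fall out of the window, the policy re-explores after the drift and shifts its pulls toward the newly best size, which is the behavior a streaming recommender needs from its size-selection policy.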
Related papers
- Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
The second strategy leverages a novel Re-Ranking technique, which has a lower time complexity upper bound and reduces the memory complexity from O(n^2) to O(kn) with k ≪ n.
arXiv Detail & Related papers (2023-07-26T16:19:19Z) - Mem-Rec: Memory Efficient Recommendation System using Alternative
Representation [6.542635536704625]
MEM-REC is a novel alternative representation approach for embedding tables.
We show that MEM-REC can not only maintain the recommendation quality but can also improve the embedding latency.
arXiv Detail & Related papers (2023-05-12T02:36:07Z) - Modeling Dynamic User Preference via Dictionary Learning for Sequential
Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
arXiv Detail & Related papers (2022-04-02T03:23:46Z) - Bayesian Non-stationary Linear Bandits for Large-Scale Recommender
Systems [6.009759445555003]
We build upon the linear contextual multi-armed bandit framework to address this problem.
We develop a decision-making policy for a linear bandit problem with high-dimensional feature vectors.
Our proposed recommender system employs this policy to learn the users' item preferences online while minimizing runtime.
arXiv Detail & Related papers (2022-02-07T13:51:19Z) - Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
The cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR can learn the common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z) - PURS: Personalized Unexpected Recommender System for Improving User
Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z) - Differentiable Neural Input Search for Recommender Systems [26.88124270897381]
Differentiable Neural Input Search (DNIS) is a method that searches for mixed feature embedding dimensions in a more flexible space.
DNIS is model-agnostic and can be seamlessly incorporated with existing latent factor models for recommendation.
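Mixed feature embedding dimensions of the kind DNIS searches over are commonly made compatible with downstream layers by projecting each field's narrow embedding into one shared width. The sketch below illustrates that general pattern under made-up field sizes and widths; it is not DNIS's actual search procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class MixedDimEmbedding:
    """Mixed-dimension embedding sketch: each feature field stores vectors
    at its own width, and a per-field linear projection maps every lookup
    into one shared output dimension, so downstream layers always see the
    same size. Field sizes and widths here are illustrative only."""

    def __init__(self, field_sizes, field_dims, out_dim):
        # One embedding table and one projection matrix per feature field.
        self.tables = [rng.normal(0.0, 0.1, size=(n, d))
                       for n, d in zip(field_sizes, field_dims)]
        self.projs = [rng.normal(0.0, 0.1, size=(d, out_dim))
                      for d in field_dims]

    def lookup(self, field, index):
        # Narrow per-field vector, projected up to the shared width.
        return self.tables[field][index] @ self.projs[field]

# A frequent field (e.g. user id) gets a wide embedding, a rare field a
# narrow one, yet every lookup comes out at the shared dimension of 16.
emb = MixedDimEmbedding(field_sizes=[10000, 200], field_dims=[32, 4], out_dim=16)
print(emb.lookup(0, 42).shape, emb.lookup(1, 7).shape)
```

The memory saving comes from the tables themselves: the rare field stores 200 x 4 values instead of 200 x 16, while the small projection matrices are shared across all rows of a field.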
arXiv Detail & Related papers (2020-06-08T10:43:59Z) - A Generic Network Compression Framework for Sequential Recommender
Systems [71.81962915192022]
Sequential recommender systems (SRS) have become the key technology in capturing user's dynamic interests and generating high-quality recommendations.
We propose a compressed sequential recommendation framework, termed as CpRec, where two generic model shrinking techniques are employed.
Through extensive ablation studies, we demonstrate that the proposed CpRec can achieve up to 4 to 8 times compression rates on real-world SRS datasets.
arXiv Detail & Related papers (2020-04-21T08:40:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.