Dynamic Embedding Size Search with Minimum Regret for Streaming
Recommender System
- URL: http://arxiv.org/abs/2308.07760v1
- Date: Tue, 15 Aug 2023 13:27:18 GMT
- Title: Dynamic Embedding Size Search with Minimum Regret for Streaming
Recommender System
- Authors: Bowei He, Xu He, Renrui Zhang, Yingxue Zhang, Ruiming Tang, Chen Ma
- Abstract summary: We show that setting an identical and static embedding size is sub-optimal in terms of recommendation performance and memory cost.
We propose a method to minimize the embedding size selection regret on both user and item sides in a non-stationary manner.
- Score: 39.78277554870799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the continuous increase of users and items, conventional recommender
systems trained on static datasets can hardly adapt to changing environments.
The high-throughput data requires the model to be updated in a timely manner
for capturing the user interest dynamics, which leads to the emergence of
streaming recommender systems. Due to the prevalence of deep learning-based
recommender systems, the embedding layer is widely adopted to represent the
characteristics of users, items, and other features in low-dimensional vectors.
However, it has been proved that setting an identical and static embedding size
is sub-optimal in terms of recommendation performance and memory cost,
especially for streaming recommendations. To tackle this problem, we first
rethink the streaming model update process and model the dynamic embedding size
search as a bandit problem. Then, we analyze and quantify the factors that
influence the optimal embedding sizes from the statistics perspective. Based on
this, we propose the \textbf{D}ynamic \textbf{E}mbedding \textbf{S}ize
\textbf{S}earch (\textbf{DESS}) method to minimize the embedding size selection
regret on both user and item sides in a non-stationary manner. Theoretically,
we obtain a sublinear regret upper bound superior to previous methods.
Empirical results across two recommendation tasks on four public datasets also
demonstrate that our approach can achieve better streaming recommendation
performance with lower memory cost and higher time efficiency.
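The abstract frames embedding size selection as a non-stationary bandit problem. The sketch below is a minimal, hypothetical illustration of that framing only — a sliding-window UCB policy choosing among candidate embedding sizes — and is not the authors' DESS algorithm; the candidate sizes, window length, and reward signal (e.g., a validation metric minus a memory penalty) are all assumptions for illustration.

```python
import math
from collections import deque

class SlidingWindowUCB:
    """Sliding-window UCB over candidate embedding sizes.

    Illustrative only: the actual DESS policy and its regret analysis
    differ; this just shows the non-stationary bandit framing, where a
    bounded window lets the policy track drifting optimal sizes.
    """

    def __init__(self, arms, window=50, c=1.0):
        self.arms = list(arms)               # candidate embedding sizes
        self.window = window                 # history length (handles non-stationarity)
        self.c = c                           # exploration strength
        self.history = deque(maxlen=window)  # recent (arm, reward) pairs

    def select(self):
        counts = {a: 0 for a in self.arms}
        sums = {a: 0.0 for a in self.arms}
        for arm, reward in self.history:
            counts[arm] += 1
            sums[arm] += reward
        t = max(1, len(self.history))
        best, best_ucb = None, -float("inf")
        for a in self.arms:
            if counts[a] == 0:
                return a                     # try every size at least once
            mean = sums[a] / counts[a]
            bonus = self.c * math.sqrt(math.log(t) / counts[a])
            if mean + bonus > best_ucb:
                best, best_ucb = a, mean + bonus
        return best

    def update(self, arm, reward):
        self.history.append((arm, reward))

# Usage sketch: at each streaming update, pick a size, train/evaluate,
# and feed back a reward such as recall minus a memory-cost penalty.
bandit = SlidingWindowUCB(arms=[8, 16, 32, 64])
size = bandit.select()
bandit.update(size, reward=0.42)  # stand-in reward from streaming feedback
```

Because old observations fall out of the fixed-length window, the policy can abandon an embedding size that was once optimal but has degraded as user interests drift, which is the behavior the non-stationary setting requires.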
Related papers
- A Recommendation Model Utilizing Separation Embedding and Self-Attention for Feature Mining [7.523158123940574]
Recommendation systems provide users with content that meets their needs.
Traditional click-through rate prediction and top-K recommendation mechanisms are unable to meet users' recommendation needs.
This paper proposes a recommendation system model based on a separation embedding cross-network.
arXiv Detail & Related papers (2024-10-19T07:49:21Z) - Bridging User Dynamics: Transforming Sequential Recommendations with Schrödinger Bridge and Diffusion Models [49.458914600467324]
We introduce the Schrödinger Bridge into diffusion-based sequential recommendation models, creating the SdifRec model.
We also propose an extended version of SdifRec called con-SdifRec, which utilizes user clustering information as a guiding condition.
arXiv Detail & Related papers (2024-08-30T09:10:38Z) - Embedding Compression in Recommender Systems: A Survey [44.949824174769]
We introduce deep learning recommendation models and the basic concept of embedding compression in recommender systems.
We systematically organize existing approaches into three categories, namely low-precision, mixed-dimension, and weight-sharing.
arXiv Detail & Related papers (2024-08-05T08:30:16Z) - Scalable Dynamic Embedding Size Search for Streaming Recommendation [54.28404337601801]
Real-world recommender systems often operate in streaming recommendation scenarios.
The number of users and items continues to grow, leading to substantial storage resource consumption.
We learn Lightweight Embeddings for streaming recommendation, called SCALL, which can adaptively adjust the embedding sizes of users/items.
arXiv Detail & Related papers (2024-07-22T06:37:24Z) - Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
The second strategy leverages a novel re-ranking technique with a lower time complexity upper bound, reducing the memory complexity from O(n^2) to O(kn) with k << n.
arXiv Detail & Related papers (2023-07-26T16:19:19Z) - Modeling Dynamic User Preference via Dictionary Learning for Sequential
Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
arXiv Detail & Related papers (2022-04-02T03:23:46Z) - Bayesian Non-stationary Linear Bandits for Large-Scale Recommender
Systems [6.009759445555003]
We build upon the linear contextual multi-armed bandit framework to address this problem.
We develop a decision-making policy for a linear bandit problem with high-dimensional feature vectors.
Our proposed recommender system employs this policy to learn the users' item preferences online while minimizing runtime.
arXiv Detail & Related papers (2022-02-07T13:51:19Z) - A Generic Network Compression Framework for Sequential Recommender
Systems [71.81962915192022]
Sequential recommender systems (SRS) have become the key technology in capturing user's dynamic interests and generating high-quality recommendations.
We propose a compressed sequential recommendation framework, termed as CpRec, where two generic model shrinking techniques are employed.
Through extensive ablation studies, we demonstrate that the proposed CpRec can achieve up to 4 to 8 times compression rates on real-world SRS datasets.
arXiv Detail & Related papers (2020-04-21T08:40:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.