Let It Go? Not Quite: Addressing Item Cold Start in Sequential Recommendations with Content-Based Initialization
- URL: http://arxiv.org/abs/2507.19473v1
- Date: Fri, 25 Jul 2025 17:57:31 GMT
- Title: Let It Go? Not Quite: Addressing Item Cold Start in Sequential Recommendations with Content-Based Initialization
- Authors: Anton Pembek, Artem Fatkulin, Anton Klenitskiy, Alexey Vasilev
- Abstract summary: We introduce a small trainable delta to frozen embeddings that enables the model to adapt item representations without letting them go too far from their original semantic structure. This approach demonstrates consistent improvements across multiple datasets and modalities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many sequential recommender systems suffer from the cold start problem, where items with few or no interactions cannot be effectively used by the model due to the absence of a trained embedding. Content-based approaches, which leverage item metadata, are commonly used in such scenarios. One possible way is to use embeddings derived from content features such as textual descriptions as initialization for the model embeddings. However, directly using frozen content embeddings often results in suboptimal performance, as they may not fully adapt to the recommendation task. On the other hand, fine-tuning these embeddings can degrade performance for cold-start items, as item representations may drift far from their original structure after training. We propose a novel approach to address this limitation. Instead of entirely freezing the content embeddings or fine-tuning them extensively, we introduce a small trainable delta to frozen embeddings that enables the model to adapt item representations without letting them go too far from their original semantic structure. This approach demonstrates consistent improvements across multiple datasets and modalities, including e-commerce datasets with textual descriptions and a music dataset with audio-based representation.
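A minimal PyTorch sketch of the core idea, assuming a standard embedding-lookup setup; the class name `ContentDeltaEmbedding`, the zero initialization, and the `delta_scale` factor are illustrative assumptions, since the abstract does not specify how the delta is kept small:

```python
import torch
import torch.nn as nn

class ContentDeltaEmbedding(nn.Module):
    """Frozen content embeddings plus a small trainable delta (sketch).

    The content matrix (e.g., text or audio embeddings) stays fixed;
    only the delta is learned, so item representations can adapt to the
    recommendation task without drifting far from their original
    semantic structure.
    """

    def __init__(self, content_emb: torch.Tensor, delta_scale: float = 0.1):
        super().__init__()
        # Frozen semantic representations derived from item content.
        self.register_buffer("content", content_emb)
        # Trainable correction, initialized at zero so training starts
        # exactly from the content embeddings.
        self.delta = nn.Parameter(torch.zeros_like(content_emb))
        self.delta_scale = delta_scale

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        return self.content[item_ids] + self.delta_scale * self.delta[item_ids]
```

Under this sketch, items never seen in training keep a zero delta, so cold-start items fall back to their frozen content embeddings by construction.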
Related papers
- Language-Model Prior Overcomes Cold-Start Items
The growth of recommender systems (RecSys) is driven by digitization and the need for personalized content in areas such as e-commerce and video streaming.
Existing solutions for the cold-start problem, such as content-based recommenders and hybrid methods, leverage item metadata to determine item similarities.
This paper introduces a novel approach for cold-start item recommendation that utilizes the language model (LM) to estimate item similarities.
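The summary leaves the pipeline unspecified; one plausible reading, sketched below with the sentence-transformers library (an assumption, not necessarily the paper's setup), is to embed item descriptions with a language model and rank catalog items by cosine similarity to the cold item:

```python
# Hypothetical sketch: score a cold-start item against the catalog via
# cosine similarity of language-model embeddings of item descriptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text encoder would do

catalog_texts = ["wireless noise-cancelling headphones", "acoustic guitar strings"]
cold_text = "bluetooth over-ear headphones"

catalog_emb = model.encode(catalog_texts, normalize_embeddings=True)
cold_emb = model.encode([cold_text], normalize_embeddings=True)

scores = catalog_emb @ cold_emb.T  # cosine similarity (unit-norm embeddings)
print(catalog_texts[int(np.argmax(scores))])  # most similar warm item
```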
arXiv Detail & Related papers (2024-11-13T22:45:52Z)
- Firzen: Firing Strict Cold-Start Items with Frozen Heterogeneous and Homogeneous Graphs for Recommendation
We propose a unified framework incorporating multi-modal content of items and knowledge graphs (KGs) to solve both strict cold-start and warm-start recommendation.
Our model yields significant improvements for strict cold-start recommendation and outperforms or matches the state-of-the-art performance in the warm-start scenario.
arXiv Detail & Related papers (2024-10-10T06:48:27Z)
- Learning Multi-Aspect Item Palette: A Semantic Tokenization Framework for Generative Recommendation
We introduce LAMIA, a novel approach for multi-aspect semantic tokenization.
Unlike RQ-VAE, which uses a single embedding, LAMIA learns an "item palette": a collection of independent and semantically parallel embeddings.
Our results demonstrate significant improvements in recommendation accuracy over existing methods.
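A toy sketch of the palette idea, under the assumption that "semantically parallel embeddings" means K independently learned vectors per item; LAMIA's actual tokenization scheme is more involved:

```python
import torch
import torch.nn as nn

class ItemPalette(nn.Module):
    """Each item is represented by K independent aspect embeddings
    rather than a single vector (sketch; contrast with one RQ-VAE
    embedding per item)."""

    def __init__(self, num_items: int, num_aspects: int, dim: int):
        super().__init__()
        self.aspects = nn.ModuleList(
            nn.Embedding(num_items, dim) for _ in range(num_aspects)
        )

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # Shape (batch, K, dim): one row per aspect, learned independently.
        return torch.stack([emb(item_ids) for emb in self.aspects], dim=1)
```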
arXiv Detail & Related papers (2024-09-11T13:49:48Z)
- Text Matching Improves Sequential Recommendation by Reducing Popularity Biases
TASTE verbalizes items and user-item interactions using identifiers and attributes of items.
Our experiments show that TASTE outperforms the state-of-the-art methods on widely used sequential recommendation datasets.
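A rough illustration of what such verbalization could look like; the template and attribute names below are assumptions, not TASTE's actual prompts:

```python
def verbalize_item(item_id: int, attrs: dict) -> str:
    """Render an item as text so a text-matching model can score it."""
    attr_text = ", ".join(f"{k}: {v}" for k, v in attrs.items())
    return f"item_{item_id} ({attr_text})"

def verbalize_history(history: list) -> str:
    """Render a user's interaction sequence as a single text string."""
    return " ; ".join(verbalize_item(i, a) for i, a in history)

print(verbalize_history([
    (42, {"title": "running shoes", "brand": "Acme"}),
    (7, {"title": "sports socks", "brand": "Acme"}),
]))
# item_42 (title: running shoes, brand: Acme) ; item_7 (title: sports socks, brand: Acme)
```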
arXiv Detail & Related papers (2023-08-27T07:44:33Z)
- Multi-task Item-attribute Graph Pre-training for Strict Cold-start Item Recommendation
ColdGPT models item-attribute correlations into an item-attribute graph by extracting fine-grained attributes from item contents.
ColdGPT transfers knowledge into the item-attribute graph from various available data sources, i.e., item contents, historical purchase sequences, and review texts of the existing items.
Extensive experiments show that ColdGPT consistently outperforms the existing SCS recommenders by large margins.
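A toy sketch of the underlying graph structure, assuming attributes act as bridges between cold and warm items; ColdGPT's multi-task pre-training over this graph is far richer than shown here:

```python
from collections import defaultdict

# Items link to fine-grained attributes extracted from their contents,
# so a strict cold-start item reaches warm items through shared attributes.
items = {
    "item_1": ["wireless", "headphones", "bluetooth"],
    "item_2": ["wired", "headphones"],
    "cold_item": ["bluetooth", "headphones"],  # no interactions yet
}

item_to_attr = defaultdict(set)
attr_to_item = defaultdict(set)
for item, attrs in items.items():
    for attr in attrs:
        item_to_attr[item].add(attr)
        attr_to_item[attr].add(item)

# Two-hop neighbors of the cold item: warm items sharing an attribute.
neighbors = {
    other
    for attr in item_to_attr["cold_item"]
    for other in attr_to_item[attr]
} - {"cold_item"}
print(sorted(neighbors))  # ['item_1', 'item_2']
```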
arXiv Detail & Related papers (2023-06-26T07:04:47Z)
- FELRec: Efficient Handling of Item Cold-Start With Dynamic Representation Learning in Recommender Systems
We present FELRec, a large embedding network that refines the existing representations of users and items.
In contrast to similar approaches, our model represents new users and items without side information and time-consuming fine-tuning.
Our proposed model generalizes well to previously unseen datasets in zero-shot settings.
arXiv Detail & Related papers (2022-10-30T19:08:38Z)
- Prompt-Matched Semantic Segmentation
The objective of this work is to explore how to effectively adapt pre-trained foundation models to various downstream tasks of image semantic segmentation.
We propose a novel Inter-Stage Prompt-Matched Framework, which maintains the original structure of the foundation model while generating visual prompts adaptively for task-oriented tuning.
A lightweight module termed Semantic-aware Prompt Matcher is then introduced to hierarchically interpolate between two stages to learn reasonable prompts for each specific task.
arXiv Detail & Related papers (2022-08-22T09:12:53Z)
- Efficient Few-Shot Fine-Tuning for Opinion Summarization
Abstractive summarization models are typically pre-trained on large amounts of generic texts, then fine-tuned on tens or hundreds of thousands of annotated samples.
We show that a few-shot method based on adapters can easily store in-domain knowledge.
We show that this self-supervised adapter pre-training improves summary quality over standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp datasets.
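A minimal bottleneck-adapter sketch in PyTorch, of the general kind such methods use; the bottleneck width and placement inside the transformer are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: a small down-/up-projection pair with a
    residual connection. Only these weights are trained; the
    pre-trained summarization model stays frozen."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(self.act(self.down(hidden)))
```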
arXiv Detail & Related papers (2022-05-04T16:38:37Z)
- Sequential Recommendation via Stochastic Self-Attention
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
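A simplified sketch of distance-based attention with diagonal-Gaussian item embeddings; the closed-form squared 2-Wasserstein distance below is standard for Gaussians, but its exact role inside STOSA's attention is an assumption here:

```python
import torch

def w2_squared(m1, s1, m2, s2):
    """Squared 2-Wasserstein distance between diagonal Gaussians:
    W2^2 = ||m1 - m2||^2 + ||s1 - s2||^2."""
    return ((m1 - m2) ** 2).sum(-1) + ((s1 - s2) ** 2).sum(-1)

# Attention logits from negative distances instead of dot products, so
# larger distributional distance means lower attention weight.
m_q, s_q = torch.randn(4, 8), torch.rand(4, 8)  # query means / stddevs
m_k, s_k = torch.randn(4, 8), torch.rand(4, 8)  # key means / stddevs
logits = -w2_squared(m_q.unsqueeze(1), s_q.unsqueeze(1),
                     m_k.unsqueeze(0), s_k.unsqueeze(0))
attn = torch.softmax(logits, dim=-1)  # (4, 4) row-stochastic weights
print(attn.shape)
```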
arXiv Detail & Related papers (2022-01-16T12:38:45Z)
- Privileged Graph Distillation for Cold Start Recommendation
The cold start problem in recommender systems requires recommending to new users (items) based on attributes without any historical interaction records.
We propose a privileged graph distillation model (PGD).
Our proposed model is generally applicable to different cold start scenarios with new user, new item, or new user-new item.
arXiv Detail & Related papers (2021-05-31T14:05:27Z)
- Cold-start Sequential Recommendation via Meta Learner
We propose a Meta-learning-based Cold-Start Sequential Recommendation Framework, namely Mecos, to mitigate the item cold-start problem in sequential recommendation.
Mecos effectively extracts user preference from limited interactions and learns to match the target cold-start item with the potential user.
arXiv Detail & Related papers (2020-12-10T05:23:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.