Item Cold Start Recommendation via Adversarial Variational Auto-encoder Warm-up
- URL: http://arxiv.org/abs/2302.14395v1
- Date: Tue, 28 Feb 2023 08:23:15 GMT
- Title: Item Cold Start Recommendation via Adversarial Variational Auto-encoder Warm-up
- Authors: Shenzheng Zhang, Qi Tan, Xinzhi Zheng, Yi Ren, Xu Zhao
- Abstract summary: We propose an Adversarial Variational Auto-encoder Warm-up model (AVAEW) to generate warm-up item ID embedding for cold items.
We demonstrate the effectiveness and compatibility of the proposed method by extensive offline experiments on public datasets and online A/B tests on a real-world news recommendation platform.
- Score: 18.923299235862974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The gap between the randomly initialized item ID embedding and the
well-trained warm item ID embedding makes it hard for cold items to fit the
recommendation system, which is trained on the data of historical warm items.
To alleviate the performance decline in new-item recommendation, the
distribution of the new item ID embedding should be close to that of the
historical warm items. To achieve this goal, we propose an Adversarial
Variational Auto-encoder Warm-up model (AVAEW) to generate warm-up item ID
embedding for cold items. Specifically, we develop a conditional variational
auto-encoder model to leverage the side information of items for generating the
warm-up item ID embedding. In particular, we introduce an adversarial module to
enforce alignment between the warm-up item ID embedding distribution and the
historical item ID embedding distribution. We demonstrate the effectiveness and
compatibility of the proposed method by extensive offline experiments on public
datasets and online A/B tests on a real-world large-scale news recommendation
platform.
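For concreteness, the sketch below shows one way the components described in the abstract could fit together in PyTorch: a conditional VAE maps item side information to a warm-up ID embedding, and a discriminator adversarially pushes the generated embeddings toward the distribution of well-trained warm item ID embeddings. All module names, layer sizes, and loss weights (`beta`, `lam`) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the AVAEW idea (assumed architecture and hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F


class WarmUpCVAE(nn.Module):
    """Conditional VAE: item side features -> warm-up item ID embedding."""

    def __init__(self, side_dim: int, emb_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(side_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        # The decoder is conditioned on the side features as well.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + side_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, side):
        h = self.encoder(side)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        warmup_emb = self.decoder(torch.cat([z, side], dim=-1))
        return warmup_emb, mu, logvar


class Discriminator(nn.Module):
    """Distinguishes warm (historical) ID embeddings from generated warm-up ones."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, emb):
        return self.net(emb)


def training_step(cvae, disc, side, warm_emb, opt_g, opt_d, beta=0.1, lam=0.1):
    """One alternating update on a batch of warm items.

    `side` holds the items' side-information features and `warm_emb` their
    well-trained ID embeddings (reconstruction target and "real" samples).
    """
    warmup_emb, mu, logvar = cvae(side)

    # Discriminator update: real = warm embeddings, fake = generated warm-up embeddings.
    d_real = disc(warm_emb.detach())
    d_fake = disc(warmup_emb.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator (CVAE) update: reconstruction + KL + adversarial alignment term.
    recon = F.mse_loss(warmup_emb, warm_emb)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    adv_logits = disc(warmup_emb)
    adv = F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits))
    g_loss = recon + beta * kl + lam * adv
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


# Assumed wiring: opt_g optimizes cvae.parameters(), opt_d optimizes disc.parameters().
```

At serving time, the generated warm-up embedding would stand in for a cold item's randomly initialized ID embedding until enough interactions accumulate, which is the warm-up behavior the abstract describes.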
Related papers
- FEASE: Shallow AutoEncoding Recommender with Cold Start Handling via Side Features [2.8680286413498903]
User and item cold starts present significant challenges in industrial applications of recommendation systems.
We introduce an augmented EASE model, i.e. FEASE, that seamlessly integrates both user and item side information.
Our method strikes a balance by effectively recommending cold start items and handling cold start users without incurring extra bias.
arXiv Detail & Related papers (2025-04-03T05:27:55Z) - Prompt Tuning for Item Cold-start Recommendation [21.073232866618554]
The item cold-start problem is crucial for online recommender systems, as the success of the cold-start phase determines whether items can transition into popular ones.
Prompt learning, a powerful technique used in natural language processing (NLP) to address zero- or few-shot problems, has been adapted for recommender systems to tackle similar challenges.
We propose to leverage high-value positive feedback, termed pinnacle feedback, as prompt information to simultaneously resolve the above two problems.
arXiv Detail & Related papers (2024-12-24T01:38:19Z) - Language-Model Prior Overcomes Cold-Start Items [14.370472820496802]
The growth of recommender systems (RecSys) is driven by digitization and the need for personalized content in areas such as e-commerce and video streaming.
Existing solutions for the cold-start problem, such as content-based recommenders and hybrid methods, leverage item metadata to determine item similarities.
This paper introduces a novel approach for cold-start item recommendation that utilizes the language model (LM) to estimate item similarities.
arXiv Detail & Related papers (2024-11-13T22:45:52Z) - Firzen: Firing Strict Cold-Start Items with Frozen Heterogeneous and Homogeneous Graphs for Recommendation [34.414081170244955]
We propose a unified framework incorporating multi-modal content of items and knowledge graphs (KGs) to solve both strict cold-start and warm-start recommendation.
Our model yields significant improvements for strict cold-start recommendation and outperforms or matches the state-of-the-art performance in the warm-start scenario.
arXiv Detail & Related papers (2024-10-10T06:48:27Z) - ID-centric Pre-training for Recommendation [51.72177873832969]
ID embeddings are challenging to transfer to new domains.
Behavioral information in ID embeddings is still verified to dominate in PLM-based recommendation models.
We propose a novel ID-centric recommendation pre-training paradigm (IDP), which directly transfers informative ID embeddings learned in pre-training domains to item representations in new domains.
arXiv Detail & Related papers (2024-05-06T15:34:31Z) - MMGRec: Multimodal Generative Recommendation with Transformer Model [81.61896141495144]
MMGRec aims to introduce a generative paradigm into multimodal recommendation.
We first devise a hierarchical quantization method, Graph CF-RQVAE, to assign a Rec-ID to each item from its multimodal information.
We then train a Transformer-based recommender to generate the Rec-IDs of user-preferred items based on historical interaction sequences.
arXiv Detail & Related papers (2024-04-25T12:11:27Z) - MISSRec: Pre-training and Transferring Multi-modal Interest-aware
Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
arXiv Detail & Related papers (2023-08-22T04:06:56Z) - Multi-task Item-attribute Graph Pre-training for Strict Cold-start Item
Recommendation [71.5871100348448]
ColdGPT models item-attribute correlations into an item-attribute graph by extracting fine-grained attributes from item contents.
ColdGPT transfers knowledge into the item-attribute graph from various available data sources, i.e., item contents, historical purchase sequences, and review texts of the existing items.
Extensive experiments show that ColdGPT consistently outperforms the existing SCS recommenders by large margins.
arXiv Detail & Related papers (2023-06-26T07:04:47Z) - Recommender Systems with Generative Retrieval [58.454606442670034]
We propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates.
To that end, we create a semantically meaningful tuple of codewords to serve as a Semantic ID for each item (see the residual-quantization sketch after this list).
We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets.
arXiv Detail & Related papers (2023-05-08T21:48:17Z) - FELRec: Efficient Handling of Item Cold-Start With Dynamic Representation Learning in Recommender Systems [0.0]
We present FELRec, a large embedding network that refines the existing representations of users and items.
In contrast to similar approaches, our model represents new users and items without side information or time-consuming fine-tuning.
Our proposed model generalizes well to previously unseen datasets in zero-shot settings.
arXiv Detail & Related papers (2022-10-30T19:08:38Z) - Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
arXiv Detail & Related papers (2022-01-16T12:38:45Z) - Cold Item Integration in Deep Hybrid Recommenders via Tunable Stochastic
Gates [19.69804455785047]
A major challenge in collaborative filtering methods is how to produce recommendations for cold items.
We propose a novel hybrid recommendation algorithm that bridges these two conflicting objectives.
We demonstrate the effectiveness of the proposed algorithm on movies, apps, and articles recommendations.
arXiv Detail & Related papers (2021-12-12T11:37:24Z)