General Item Representation Learning for Cold-start Content Recommendations
- URL: http://arxiv.org/abs/2404.13808v1
- Date: Mon, 22 Apr 2024 00:48:56 GMT
- Title: General Item Representation Learning for Cold-start Content Recommendations
- Authors: Jooeun Kim, Jinri Kim, Kwangeun Yeo, Eungi Kim, Kyoung-Woon On, Jonghwan Mun, Joonseok Lee
- Abstract summary: We propose a domain/data-agnostic item representation learning framework for cold-start recommendations.
Our proposed model is end-to-end trainable and completely free from classification labels.
- Score: 12.729624639270405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cold-start item recommendation is a long-standing challenge in recommendation systems. A common remedy is to use a content-based approach, but rich information from raw contents in various forms has not been fully utilized. In this paper, we propose a domain/data-agnostic item representation learning framework for cold-start recommendations, naturally equipped with multimodal alignment among various features by adopting a Transformer-based architecture. Our proposed model is end-to-end trainable and completely free from classification labels, which are not only costly to collect but also suboptimal for recommendation-purpose representation learning. From extensive experiments on real-world movie and news recommendation benchmarks, we verify that our approach better preserves fine-grained user taste than state-of-the-art baselines and is universally applicable to multiple domains at large scale.
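The abstract names the main ingredients (multimodal item features, Transformer-based alignment, no classification labels) without giving implementation details. The following is only a minimal sketch of how such an item encoder and a label-free training signal might look; all module names, dimensions, and the contrastive target are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalItemEncoder(nn.Module):
    """Illustrative sketch: project per-modality item features (e.g. text,
    image, metadata) into a shared space and fuse them with a Transformer
    encoder into a single item embedding."""

    def __init__(self, modality_dims, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # One linear projection per modality into the shared model space.
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in modality_dims])
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # pooled "item" token

    def forward(self, features):  # features: list of (batch, dim_m) tensors
        tokens = [p(f).unsqueeze(1) for p, f in zip(self.proj, features)]
        cls = self.cls.expand(features[0].size(0), -1, -1)
        x = torch.cat([cls] + tokens, dim=1)   # (batch, 1 + n_modalities, d_model)
        return self.encoder(x)[:, 0]           # (batch, d_model) item representation


def alignment_loss(item_emb, target_emb, temperature=0.07):
    """One label-free training signal (an assumption): contrastively align the
    content-based item embedding with another embedding of the same item,
    e.g. one coming from collaborative filtering."""
    a = F.normalize(item_emb, dim=-1)
    b = F.normalize(target_emb, dim=-1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.size(0))
    return F.cross_entropy(logits, labels)


# Toy usage: a batch of 8 items with text (768-d) and image (512-d) features.
encoder = MultimodalItemEncoder([768, 512])
item_emb = encoder([torch.randn(8, 768), torch.randn(8, 512)])
loss = alignment_loss(item_emb, torch.randn(8, 256))
```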
Related papers
- Language-Model Prior Overcomes Cold-Start Items [14.370472820496802]
The growth of recommender systems (RecSys) is driven by digitization and the need for personalized content in areas such as e-commerce and video streaming.
Existing solutions for the cold-start problem, such as content-based recommenders and hybrid methods, leverage item metadata to determine item similarities.
This paper introduces a novel approach for cold-start item recommendation that utilizes the language model (LM) to estimate item similarities.
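The summary above only says that item similarities are estimated with a language model from item metadata. One hedged way to realize that idea (not necessarily the paper's) is to embed item descriptions with an off-the-shelf sentence encoder and score cold items against warm ones by cosine similarity; the library and model name below are illustrative choices.

```python
# Hedged sketch: estimate cold-item neighbors from metadata text with a
# pre-trained language model (sentence-transformers is a stand-in choice,
# not necessarily what the paper uses).
from sentence_transformers import SentenceTransformer

warm_items = {
    "i1": "A space-opera adventure about a smuggler and a rebellion.",
    "i2": "A courtroom drama examining a wrongful conviction.",
}
cold_item = "A new sci-fi film following rebels fighting a galactic empire."

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
warm_emb = model.encode(list(warm_items.values()), normalize_embeddings=True)
cold_emb = model.encode([cold_item], normalize_embeddings=True)[0]

# Cosine similarity (embeddings are L2-normalized, so a dot product suffices);
# recommendations for the cold item can then be borrowed from its neighbors.
scores = warm_emb @ cold_emb
ranking = sorted(zip(warm_items, scores), key=lambda t: -t[1])
print(ranking)  # e.g. [('i1', high score), ('i2', low score)]
```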
arXiv Detail & Related papers (2024-11-13T22:45:52Z) - How to Diversify any Personalized Recommender? A User-centric Pre-processing approach [0.0]
We introduce a novel approach to improve the diversity of Top-N recommendations while maintaining recommendation performance.
Our approach employs a user-centric pre-processing strategy aimed at exposing users to a wide array of content categories and topics.
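The entry above states only the goal of the pre-processing (exposing users to a wide array of categories and topics). The snippet below is one assumed, minimal reading of such a step: re-balance each user's training history so no single category dominates. The cap parameter and the truncation policy are illustrative, not the paper's method.

```python
# Hedged sketch: re-balance each user's training interactions across item
# categories before fitting any recommender (one possible reading of a
# "user-centric pre-processing" step; the actual method may differ).
from collections import defaultdict

def rebalance_history(interactions, item_category, max_share=0.4):
    """Keep at most `max_share` of a user's interactions in any one category,
    dropping the surplus so under-represented categories gain relative weight."""
    by_cat = defaultdict(list)
    for item in interactions:
        by_cat[item_category[item]].append(item)
    cap = max(1, int(max_share * len(interactions)))
    kept = []
    for cat, items in by_cat.items():
        kept.extend(items[:cap])  # simplest policy: truncate; could sample instead
    return kept

item_category = {"a": "news", "b": "news", "c": "news", "d": "sports", "e": "arts"}
history = ["a", "b", "c", "d", "e"]
print(rebalance_history(history, item_category))  # e.g. ['a', 'b', 'd', 'e']
```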
arXiv Detail & Related papers (2024-05-03T15:02:55Z) - Ada-Retrieval: An Adaptive Multi-Round Retrieval Paradigm for Sequential Recommendations [50.03560306423678]
We propose Ada-Retrieval, an adaptive multi-round retrieval paradigm for recommender systems.
Ada-Retrieval iteratively refines user representations to better capture potential candidates in the full item space.
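As a rough illustration of the multi-round idea described above, the loop below retrieves candidates, refines the user vector from what was retrieved, and retrieves again. The refinement rule is a toy stand-in for the paper's learned per-round modules.

```python
# Hedged sketch of a multi-round retrieval loop: after each round the user
# vector is refined using the items already retrieved, so later rounds can
# reach different regions of the item space. All functions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(1000, 32))          # catalogue embeddings
user_vec = rng.normal(size=32)                  # initial user representation

def retrieve(user_vec, item_emb, k, exclude):
    scores = item_emb @ user_vec
    scores[list(exclude)] = -np.inf             # don't re-retrieve earlier items
    return np.argsort(-scores)[:k]

def refine(user_vec, retrieved_emb, alpha=0.3):
    # Toy refinement: move away from the centroid of what was already found,
    # standing in for a learned per-round update module.
    return user_vec - alpha * retrieved_emb.mean(axis=0)

candidates, seen = [], set()
for round_idx in range(3):                      # a few retrieval rounds
    top = retrieve(user_vec, item_emb, k=20, exclude=seen)
    candidates.extend(top.tolist())
    seen.update(top.tolist())
    user_vec = refine(user_vec, item_emb[top])

print(len(candidates), "candidates from 3 rounds")  # 60 unique candidates
```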
arXiv Detail & Related papers (2024-01-12T15:26:40Z) - Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling [18.297332953450514]
We propose LANCER, which leverages the semantic understanding capabilities of pre-trained language models to generate personalized recommendations.
Our approach bridges the gap between language models and recommender systems, resulting in more human-like recommendations.
arXiv Detail & Related papers (2023-09-19T08:54:47Z) - RecRec: Algorithmic Recourse for Recommender Systems [41.97186998947909]
It is crucial for all stakeholders to understand the model's rationale behind making certain predictions and recommendations.
This is especially true for the content providers whose livelihoods depend on the recommender system.
We propose a recourse framework for recommender systems, targeted towards the content providers.
arXiv Detail & Related papers (2023-08-28T22:26:50Z) - Towards Universal Sequence Representation Learning for Recommender Systems [98.02154164251846]
We present a novel universal sequence representation learning approach, named UniSRec.
The proposed approach utilizes the associated description text of items to learn transferable representations across different recommendation scenarios.
Our approach can be effectively transferred to new recommendation domains or platforms in a parameter-efficient way.
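A hedged sketch of the general recipe hinted at above: item description text is encoded once (here as precomputed frozen embeddings), a small trainable adapter maps it into the recommender's space, and only the adapter plus the sequence model would need to transfer to a new domain. Module names and sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextItemAdapter(nn.Module):
    """Illustrative: map frozen text embeddings of item descriptions into the
    recommender's space with a small trainable adapter, so the same item
    encoder can be reused across domains (the paper's exact adapter differs)."""

    def __init__(self, text_dim=768, d_model=64):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Linear(text_dim, d_model), nn.GELU(), nn.Linear(d_model, d_model)
        )

    def forward(self, text_emb):            # (num_items, text_dim), precomputed
        return self.adapter(text_emb)       # (num_items, d_model)

class SequenceRecommender(nn.Module):
    """Score the next item from a user's interaction sequence of item vectors."""

    def __init__(self, d_model=64, n_heads=2, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, seq_item_vecs, all_item_vecs):
        user_state = self.encoder(seq_item_vecs)[:, -1]   # last position as user state
        return user_state @ all_item_vecs.t()             # scores over the catalogue

# Toy usage: 100 items with precomputed 768-d text embeddings, one user sequence.
text_emb = torch.randn(100, 768)
adapter, recsys = TextItemAdapter(), SequenceRecommender()
item_vecs = adapter(text_emb)
seq = item_vecs[torch.tensor([[3, 17, 42]])]              # (1, 3, 64)
scores = recsys(seq, item_vecs)                           # (1, 100)
```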
arXiv Detail & Related papers (2022-06-13T07:21:56Z) - Diverse Preference Augmentation with Multiple Domains for Cold-start Recommendations [92.47380209981348]
We propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning.
We generate diverse ratings in a new domain of interest to mitigate overfitting when interactions are sparse.
These ratings are introduced into the meta-training procedure to learn a preference meta-learner, which produces good generalization ability.
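The entry sketches a pipeline of generating extra ratings and feeding them into meta-training. The toy loop below follows that shape with a jitter-based augmentation and a Reptile-style meta-update, both of which are stand-ins rather than the paper's actual components.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch: meta-train a tiny preference model over many users ("tasks"),
# where each user's few real ratings are padded with synthetically generated
# diverse ratings. A Reptile-style update stands in for the paper's
# preference meta-learner; the augmentation is likewise an illustrative stand-in.

meta_model = nn.Linear(16, 1)                 # maps item features to a rating

def augment(x, y, n_extra=8, noise=0.1):
    """Toy 'diverse rating' generation: jitter existing support examples."""
    idx = torch.randint(0, x.size(0), (n_extra,))
    return (torch.cat([x, x[idx] + noise * torch.randn(n_extra, x.size(1))]),
            torch.cat([y, y[idx]]))

for step in range(200):
    # One synthetic task: a user with only 4 observed (features, rating) pairs.
    x, y = torch.randn(4, 16), torch.randn(4, 1)
    x, y = augment(x, y)

    # Inner loop: adapt a copy of the meta-parameters on this user's data.
    task_model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(task_model.parameters(), lr=0.05)
    for _ in range(5):
        opt.zero_grad()
        F.mse_loss(task_model(x), y).backward()
        opt.step()

    # Outer (Reptile-style) step: nudge meta-parameters toward the adapted ones.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), task_model.parameters()):
            p_meta += 0.1 * (p_task - p_meta)
```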
arXiv Detail & Related papers (2022-04-01T10:10:50Z) - Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
The cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR learns the common patterns from regular users' behaviors.
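To illustrate only the cold-start adaptation side described above, the sketch below fine-tunes a copy of a small sequential model (randomly initialised here; meta-learned in the paper) on a new user's first few interactions before scoring the next item. The GRU architecture and the update rule are assumptions, not metaCSR's actual components.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch: per-user adaptation of a sequential recommender at serving
# time, the kind of fast cold-start adaptation a meta-learned initialisation
# is meant to enable. Everything below is illustrative.

n_items, d = 500, 32
item_emb = nn.Embedding(n_items, d)
meta_init = nn.GRU(d, d, batch_first=True)     # stands in for meta-learned weights

def adapt_to_cold_user(history, steps=5, lr=0.05):
    """Fine-tune a copy of the initial model on one short interaction history."""
    model = copy.deepcopy(meta_init)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    seq = item_emb(history[:-1]).unsqueeze(0).detach()  # inputs: all but last item
    target = history[-1]                                # predict the last item
    for _ in range(steps):
        opt.zero_grad()
        out, _ = model(seq)
        logits = out[:, -1] @ item_emb.weight.t()       # scores over all items
        F.cross_entropy(logits, target.view(1)).backward()
        opt.step()
    return model

cold_history = torch.tensor([5, 42, 17, 301])           # four observed interactions
user_model = adapt_to_cold_user(cold_history)
with torch.no_grad():
    out, _ = user_model(item_emb(cold_history).unsqueeze(0))
    top5 = torch.topk(out[:, -1] @ item_emb.weight.t(), 5).indices
```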
arXiv Detail & Related papers (2021-10-18T08:11:24Z) - PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest [54.56236567783225]
PinnerSage is an end-to-end recommender system that represents each user via multi-modal embeddings.
We conduct several offline and online A/B experiments to show that our method significantly outperforms single embedding methods.
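A hedged sketch of the multi-embedding idea contrasted above with single-embedding baselines: cluster the embeddings of a user's interacted items, keep one representative vector per cluster, then retrieve candidates per interest and merge. The clustering algorithm, cluster count, and medoid choice are illustrative assumptions, not PinnerSage's exact procedure.

```python
# Hedged sketch: represent one user by several embeddings, obtained by
# clustering the embeddings of items the user interacted with and keeping a
# representative (here the medoid) per cluster.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
interacted = rng.normal(size=(40, 64))            # embeddings of the user's items
catalogue = rng.normal(size=(1000, 64))           # candidate item embeddings

labels = AgglomerativeClustering(n_clusters=3).fit_predict(interacted)

user_embs = []
for c in range(3):
    members = interacted[labels == c]
    dists = ((members[:, None] - members[None, :]) ** 2).sum(-1)
    user_embs.append(members[dists.sum(1).argmin()])  # medoid of the cluster

# Retrieval: take top candidates per user embedding, then merge the lists.
recs = set()
for u in user_embs:
    recs.update(np.argsort(-(catalogue @ u))[:10].tolist())
print(len(recs), "candidates from 3 interest clusters")
```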
arXiv Detail & Related papers (2020-07-07T17:13:20Z) - Controllable Multi-Interest Framework for Recommendation [64.30030600415654]
We formalize the recommender system as a sequential recommendation problem.
We propose a novel controllable multi-interest framework for the sequential recommendation, called ComiRec.
Our framework has been successfully deployed on the offline Alibaba distributed cloud platform.
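A hedged sketch of the two pieces named above: extracting several interest vectors from the behaviour sequence, then aggregating per-interest candidates with a tunable factor that trades predicted score against diversity. The attention-based extractor and the `lam` parameter are assumptions consistent with the summary, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch: (1) extract several "interest" vectors from a user's behaviour
# sequence via soft attention over item embeddings; (2) aggregate per-interest
# candidates with a tunable factor `lam` trading off score against category
# diversity. Details are illustrative, not the paper's.
rng = np.random.default_rng(0)
seq = rng.normal(size=(20, 32))                     # user's item-embedding sequence
catalogue = rng.normal(size=(500, 32))
category = rng.integers(0, 8, size=500)             # item categories, for diversity

# (1) K interests: each interest attends differently over the sequence.
K = 4
queries = rng.normal(size=(K, 32))
attn = np.exp(seq @ queries.T)                      # (20, K) unnormalised weights
attn /= attn.sum(axis=0, keepdims=True)
interests = attn.T @ seq                            # (K, 32) interest vectors

# (2) Controllable aggregation: greedily pick items maximising score + lam * novelty,
# where novelty rewards categories not yet present in the final list.
scores = (catalogue @ interests.T).max(axis=1)      # best score over interests

def aggregate(lam, n=10):
    chosen, used_cats = [], set()
    for _ in range(n):
        gains = scores + lam * np.array([c not in used_cats for c in category])
        gains[chosen] = -np.inf
        pick = int(gains.argmax())
        chosen.append(pick)
        used_cats.add(category[pick])
    return chosen

print(aggregate(lam=0.0)[:5], aggregate(lam=2.0)[:5])  # lam controls diversity
```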
arXiv Detail & Related papers (2020-05-19T10:18:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.