Multi-Tower Multi-Interest Recommendation with User Representation Repel
- URL: http://arxiv.org/abs/2403.05122v2
- Date: Wed, 31 Jul 2024 04:58:56 GMT
- Title: Multi-Tower Multi-Interest Recommendation with User Representation Repel
- Authors: Tianyu Xiong, Xiaohan Yu
- Abstract summary: We propose a novel multi-tower multi-interest framework with user representation repel.
Experimental results across multiple large-scale industrial datasets demonstrate the effectiveness and generalizability of the proposed framework.
- Score: 0.9867914513513453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the era of information overload, the value of recommender systems has been profoundly recognized in academia and industry alike. Multi-interest sequential recommendation, in particular, is a subfield that has received increasing attention in recent years. By generating multiple user representations, multi-interest learning models demonstrate greater expressiveness than single-user-representation models, both theoretically and empirically. Despite major advancements in the field, three major issues continue to plague the performance and adoptability of multi-interest learning methods: the mismatch between training and deployment objectives, the inability to access item information, and the difficulty of industrial adoption due to the single-tower architecture. We address these challenges by proposing a novel multi-tower multi-interest framework with user representation repel. Experimental results across multiple large-scale industrial datasets demonstrate the effectiveness and generalizability of our proposed framework.
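The abstract describes the architecture only at a high level, so the following is a minimal illustrative sketch rather than the authors' implementation: a user tower that extracts K interest vectors from the behavior sequence, a separate item tower, a cosine-based "repel" term that pushes a user's interest vectors apart, and argmax routing of the positive item to its closest interest. All module names, shapes, the routing rule, and the specific loss form are assumptions made for this example.

```python
# Minimal sketch (not the authors' code): multi-tower retrieval with multiple
# user interest vectors and a repel regularizer. Names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiInterestUserTower(nn.Module):
    def __init__(self, num_items: int, dim: int = 64, num_interests: int = 4):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)              # input embeddings for history items
        self.interest_queries = nn.Parameter(torch.randn(num_interests, dim))

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len) item ids from the user's behavior sequence
        h = self.item_emb(history)                                 # (batch, seq, dim)
        # Each interest query attends over the sequence (a soft clustering of behaviors).
        scores = torch.einsum("kd,bsd->bks", self.interest_queries, h)
        attn = scores.softmax(dim=-1)
        return torch.einsum("bks,bsd->bkd", attn, h)               # (batch, K, dim)

class ItemTower(nn.Module):
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)

    def forward(self, items: torch.Tensor) -> torch.Tensor:
        return self.emb(items)                                     # (batch, dim)

def repel_loss(user_interests: torch.Tensor) -> torch.Tensor:
    # Penalize pairwise cosine similarity among a user's K interest vectors,
    # so the representations spread apart ("repel" each other).
    u = F.normalize(user_interests, dim=-1)                        # (batch, K, dim)
    sim = torch.einsum("bkd,bld->bkl", u, u)                       # (batch, K, K)
    off_diag = sim - torch.eye(sim.size(-1), device=sim.device)    # self-similarity terms become zero
    return off_diag.pow(2).mean()

def training_loss(user_interests, pos_item_vec, neg_item_vec, repel_weight=0.1):
    # Route the positive item to its most similar interest (argmax routing),
    # then apply a BPR-style pairwise ranking loss against a sampled negative.
    pos_score = torch.einsum("bkd,bd->bk", user_interests, pos_item_vec).max(dim=1).values
    neg_score = torch.einsum("bkd,bd->bk", user_interests, neg_item_vec).max(dim=1).values
    rank_loss = F.softplus(neg_score - pos_score).mean()           # -log sigmoid(pos - neg)
    return rank_loss + repel_weight * repel_loss(user_interests)
```

As a design note, keeping the item tower separate lets item embeddings be precomputed and served from an approximate-nearest-neighbor index, the usual motivation for multi-tower retrieval; the repel weight above is a hypothetical hyperparameter that would trade off ranking accuracy against interest diversity.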
Related papers
- Enhancing Taobao Display Advertising with Multimodal Representations: Challenges, Approaches and Insights [38.59216578324812]
We explore approaches to leverage multimodal data to enhance recommendation accuracy.
We introduce a two-phase framework, including the pre-training of multimodal representations and the integration of these representations with existing ID-based models.
Since the integration of multimodal representations in mid-2023, we have observed significant performance improvements in Taobao display advertising system.
arXiv Detail & Related papers (2024-07-28T11:36:47Z) - BiVRec: Bidirectional View-based Multimodal Sequential Recommendation [55.87443627659778]
We propose an innovative framework, BivRec, that jointly trains the recommendation tasks in both ID and multimodal views.
BivRec achieves state-of-the-art performance on five datasets and showcases various practical advantages.
arXiv Detail & Related papers (2024-02-27T09:10:41Z) - Generative Multimodal Models are In-Context Learners [60.50927925426832]
We introduce Emu2, a generative multimodal model with 37 billion parameters, trained on large-scale multimodal sequences.
Emu2 exhibits strong multimodal in-context learning abilities, even showing emergent ability to solve tasks that require on-the-fly reasoning.
arXiv Detail & Related papers (2023-12-20T18:59:58Z) - Robust Representation Learning for Unified Online Top-K Recommendation [39.12191494863331]
We propose a robust representation learning framework for unified online top-K recommendation.
Our approach constructs unified modeling in entity space to ensure data fairness.
The proposed method has been successfully deployed online to serve real business scenarios.
arXiv Detail & Related papers (2023-10-24T03:42:20Z) - Decoupling Common and Unique Representations for Multimodal Self-supervised Learning [22.12729786091061]
We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective method for multimodal self-supervised learning.
By distinguishing inter- and intra-modal embeddings through multimodal redundancy reduction, DeCUR can integrate complementary information across different modalities.
arXiv Detail & Related papers (2023-09-11T08:35:23Z) - Diversity Regularized Interests Modeling for Recommender Systems [25.339169652217844]
We propose a novel method of Diversity Regularized Interests Modeling (DRIM) for Recommender Systems.
Each of a user's interests should be distinct from the others, so we introduce three strategies as a diversity-regularized separator to separate the multiple user interest vectors.
arXiv Detail & Related papers (2021-03-23T09:10:37Z) - Personalized Multimodal Feedback Generation in Education [50.95346877192268]
Automatic evaluation of school assignments is an important application of AI in education.
We propose a novel Personalized Multimodal Feedback Generation Network (PMFGN) armed with a modality gate mechanism and a personalized bias mechanism.
Our model significantly outperforms several baselines by generating more accurate and diverse feedback.
arXiv Detail & Related papers (2020-10-31T05:26:49Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z) - Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning [70.67092105994598]
We propose a novel multi-view learning framework to improve multi-view classification with respect to the two above-mentioned aspects.
In particular, we train different deep neural networks to learn various intra-view representations.
Experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-13T01:13:23Z) - Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately identify related samples from unrelated ones, enabling the use of plentiful unlabeled, unpaired multimodal data.
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.