MultiHead MultiModal Deep Interest Recommendation Network
- URL: http://arxiv.org/abs/2110.10205v1
- Date: Tue, 19 Oct 2021 18:59:02 GMT
- Title: MultiHead MultiModal Deep Interest Recommendation Network
- Authors: Mingbao Yang, ShaoBo Li, Zhou Peng, Ansi Zhang, Yuanmeng Zhang
- Abstract summary: This paper adds multi-head and multi-modal modules to the DIN \cite{Authors01} model.
Experiments show that the multi-head multi-modal DIN improves the recommendation prediction effect, and outperforms current state-of-the-art methods on various comprehensive indicators.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of information technology, people constantly
produce enormous amounts of information. How to surface the information that
users are interested in from this flood has become a pressing concern for
users and business managers alike. To address this problem, researchers have
continually refined and optimized recommendation models, moving from
traditional machine learning to deep learning systems. However, because most
optimization effort has gone into the network structure of recommendation
models, comparatively little work has focused on enriching their feature
sets, leaving room for further model optimization. Building on the DIN
\cite{Authors01} model, this paper adds multi-head and multi-modal modules,
which enrich the feature sets the model can use while strengthening its
feature cross-combination and fitting capabilities. Experiments show that the
multi-head multi-modal DIN improves recommendation prediction and outperforms
current state-of-the-art methods on a range of comprehensive metrics.
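The abstract's core mechanism, attention-weighting a user's behavior sequence against a candidate item with multiple heads, can be sketched as follows. This is a minimal numpy illustration of DIN-style multi-head attention pooling, not the authors' implementation; the function name, dimensions, and the random stand-ins for learned projection weights are all assumptions for illustration.

```python
import numpy as np

def multi_head_attention_pool(behaviors, candidate, num_heads=4, seed=0):
    """DIN-style attention pooling: weight each historical behavior
    embedding by its relevance to the candidate item, per head.

    behaviors: (T, d) embeddings of the user's behavior sequence
    candidate: (d,)  embedding of the candidate item
    Returns a (num_heads * d_head,) pooled user-interest vector.
    """
    rng = np.random.default_rng(seed)
    T, d = behaviors.shape
    assert d % num_heads == 0
    d_head = d // num_heads
    pooled = []
    for h in range(num_heads):
        # Per-head projections (random here; learned in a real model).
        Wq = rng.normal(scale=d ** -0.5, size=(d, d_head))
        Wk = rng.normal(scale=d ** -0.5, size=(d, d_head))
        Wv = rng.normal(scale=d ** -0.5, size=(d, d_head))
        q = candidate @ Wq                   # (d_head,)
        K = behaviors @ Wk                   # (T, d_head)
        V = behaviors @ Wv                   # (T, d_head)
        scores = K @ q / np.sqrt(d_head)     # relevance of each behavior
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()             # softmax over the T behaviors
        pooled.append(weights @ V)           # (d_head,) interest per head
    return np.concatenate(pooled)            # (num_heads * d_head,)

# Toy usage: 5 past behaviors, embedding dim 8, 4 heads.
behaviors = np.random.default_rng(1).normal(size=(5, 8))
candidate = np.random.default_rng(2).normal(size=8)
interest = multi_head_attention_pool(behaviors, candidate)
print(interest.shape)  # (8,)
```

Each head attends to the behavior sequence independently, so different heads can latch onto different facets of user interest; the concatenated vector then feeds the downstream prediction layers.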
Related papers
- Generative Large Recommendation Models: Emerging Trends in LLMs for Recommendation [85.52251362906418]
This tutorial explores two primary approaches for integrating large language models (LLMs) into recommendation systems.
It provides a comprehensive overview of generative large recommendation models, including their recent advancements, challenges, and potential research directions.
Key topics include data quality, scaling laws, user behavior mining, and efficiency in training and inference.
arXiv Detail & Related papers (2025-02-19T14:48:25Z)
- Enhancing Healthcare Recommendation Systems with a Multimodal LLMs-based MOE Architecture [4.429093762434193]
We build a small dataset for recommending healthy food based on patient descriptions.
We evaluate the model's performance on several key metrics, including Precision, Recall, NDCG, and MAP@5.
The paper finds image data provided relatively limited improvement in the performance of the personalized recommendation system.
arXiv Detail & Related papers (2024-12-16T08:42:43Z)
- Scaling New Frontiers: Insights into Large Recommendation Models [74.77410470984168]
Meta's generative recommendation model HSTU illustrates the scaling laws of recommendation systems by scaling parameters into the trillions.
We conduct comprehensive ablation studies to explore the origins of these scaling laws.
We offer insights into future directions for large recommendation models.
arXiv Detail & Related papers (2024-12-01T07:27:20Z)
- A Collaborative Ensemble Framework for CTR Prediction [73.59868761656317]
We propose a novel framework, Collaborative Ensemble Training Network (CETNet), to leverage multiple distinct models.
Unlike naive model scaling, our approach emphasizes diversity and collaboration through collaborative learning.
We validate our framework on three public datasets and a large-scale industrial dataset from Meta.
arXiv Detail & Related papers (2024-11-20T20:38:56Z)
- MMREC: LLM Based Multi-Modal Recommender System [2.3113916776957635]
This paper presents a novel approach to enhancing recommender systems by leveraging Large Language Models (LLMs) and deep learning techniques.
The proposed framework aims to improve the accuracy and relevance of recommendations by incorporating multi-modal information processing and a unified latent-space representation.
arXiv Detail & Related papers (2024-08-08T04:31:29Z)
- Data-Juicer Sandbox: A Feedback-Driven Suite for Multimodal Data-Model Co-development [67.55944651679864]
We present a new sandbox suite tailored for integrated data-model co-development.
This sandbox provides a feedback-driven experimental platform, enabling cost-effective and guided refinement of both data and models.
arXiv Detail & Related papers (2024-07-16T14:40:07Z)
- DiffMM: Multi-Modal Diffusion Model for Recommendation [19.43775593283657]
We propose a novel multi-modal graph diffusion model for recommendation called DiffMM.
Our framework integrates a modality-aware graph diffusion model with a cross-modal contrastive learning paradigm to improve modality-aware user representation learning.
arXiv Detail & Related papers (2024-06-17T17:35:54Z)
- ISR-DPO: Aligning Large Multimodal Models for Videos by Iterative Self-Retrospective DPO [36.69910114305134]
We propose Iterative Self-Retrospective Direct Preference Optimization (ISR-DPO) to enhance preference modeling.
ISR-DPO enhances the self-judge's focus on informative video regions, resulting in more visually grounded preferences.
In extensive empirical evaluations, the ISR-DPO significantly outperforms the state of the art.
arXiv Detail & Related papers (2024-06-17T07:33:30Z)
- Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima [54.06000767038741]
We analyze multimodal recommender systems from the novel perspective of flat local minima.
We propose a concise yet effective gradient strategy called Mirror Gradient (MG).
We find that the proposed MG can complement existing robust training methods and be easily extended to diverse advanced recommendation models.
arXiv Detail & Related papers (2024-02-17T12:27:30Z)
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.