Predict Click-Through Rates with Deep Interest Network Model in E-commerce Advertising
- URL: http://arxiv.org/abs/2406.10239v1
- Date: Tue, 4 Jun 2024 05:52:14 GMT
- Title: Predict Click-Through Rates with Deep Interest Network Model in E-commerce Advertising
- Authors: Chang Zhou, Yang Zhao, Yuelin Zou, Jin Cao, Wenhan Fan, Yi Zhao, Chiyu Cheng
- Abstract summary: This paper proposes new methods to enhance click-through rate (CTR) prediction models using the Deep Interest Network (DIN) model. This research focuses on localized user behavior activation for tailored ad targeting by leveraging extensive user behavior data.
- Score: 36.61520168259678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes new methods to enhance click-through rate (CTR) prediction models using the Deep Interest Network (DIN) model, specifically applied to the advertising system of Alibaba's Taobao platform. Unlike traditional deep learning approaches, this research focuses on localized user behavior activation for tailored ad targeting by leveraging extensive user behavior data. Compared to traditional models, this method demonstrates superior ability to handle diverse and dynamic user data, thereby improving the efficiency of ad systems and increasing revenue.
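The local activation idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration: the real DIN activation unit is a small feed-forward network over behavior and ad embeddings, and here a softmax-weighted dot product stands in for it. Function and variable names are invented for illustration.

```python
import math

def din_activation(behavior_embs, ad_emb):
    """Weight each historical behavior by its relevance to the candidate ad
    (a simplified stand-in for DIN's local activation unit)."""
    # Relevance score per behavior: dot product with the ad embedding.
    scores = [sum(b * a for b, a in zip(beh, ad_emb)) for beh in behavior_embs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # softmax over past behaviors
    # Weighted sum pools the behaviors into one ad-aware user representation.
    dim = len(ad_emb)
    return [sum(w * beh[i] for w, beh in zip(weights, behavior_embs))
            for i in range(dim)]

behaviors = [[0.2, 1.0, -0.5], [1.5, 0.1, 0.3], [-0.4, 0.8, 0.9]]
ad = [1.0, 0.0, 0.5]
user_vec = din_activation(behaviors, ad)
print(len(user_vec))  # 3
```

The key design point is that the pooled user vector depends on the candidate ad: the same behavior history produces a different representation for each ad, which is what "localized activation" refers to.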
Related papers
- Knowledge Editing in Language Models via Adapted Direct Preference Optimization [50.616875565173274]
Large Language Models (LLMs) can become outdated over time.
Knowledge Editing aims to overcome this challenge using weight updates that do not require expensive retraining.
arXiv Detail & Related papers (2024-06-14T11:02:21Z) - Optimizing Search Advertising Strategies: Integrating Reinforcement Learning with Generalized Second-Price Auctions for Enhanced Ad Ranking and Bidding [36.74368014856906]
We propose a model that adjusts to varying user interactions and optimizes the balance between advertiser cost, user relevance, and platform revenue.
Our results suggest significant improvements in ad placement accuracy and cost efficiency, demonstrating the model's applicability in real-world scenarios.
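For context, the generalized second-price (GSP) rule that this work builds on can be sketched as follows. This is a simplified version that ignores quality scores and uses hypothetical names, not the paper's RL-augmented auction model.

```python
def gsp_allocate(bids, num_slots):
    """Rank bidders by bid and charge each winner the next-highest bid
    (the classic GSP rule, without quality-score adjustment)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(num_slots, len(ranked))):
        bidder, _ = ranked[i]
        # Pay the bid of the advertiser one position below (0 if none).
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((bidder, price))
    return results

print(gsp_allocate({"a": 3.0, "b": 5.0, "c": 2.0}, 2))
# [('b', 3.0), ('a', 2.0)]
```

Each winner pays just enough to hold its slot rather than its own bid, which is the property the reinforcement-learning bidding policy in the paper must optimize against.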
arXiv Detail & Related papers (2024-05-22T06:30:55Z) - Dynamic collaborative filtering Thompson Sampling for cross-domain advertisements recommendation [1.6859861406758752]
We propose dynamic collaborative filtering Thompson Sampling (DCTS) to transfer knowledge among bandit models.
DCTS exploits similarities between users and between ads to estimate a prior distribution of Thompson sampling.
We show that DCTS improves click-through rate by 9.7% compared to state-of-the-art models.
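The underlying Thompson Sampling step can be sketched as follows. Here a fixed shared Beta prior stands in for the prior that DCTS estimates from user and ad similarities; function and arm names are hypothetical.

```python
import random

def thompson_select(arms, prior=(1.0, 1.0)):
    """Pick the ad whose sampled CTR is highest. Each arm keeps
    (clicks, impressions) counts; `prior` is a shared Beta(alpha, beta)
    starting point -- the slot DCTS would fill with transferred knowledge."""
    best_ad, best_sample = None, -1.0
    for ad, (clicks, impressions) in arms.items():
        alpha = prior[0] + clicks
        beta = prior[1] + impressions - clicks
        s = random.betavariate(alpha, beta)  # sample a plausible CTR
        if s > best_sample:
            best_ad, best_sample = ad, s
    return best_ad

random.seed(0)
arms = {"ad1": (30, 100), "ad2": (5, 100)}  # (clicks, impressions)
picks = [thompson_select(arms) for _ in range(200)]
print(picks.count("ad1") > picks.count("ad2"))  # True
```

Sampling from the posterior (rather than taking its mean) is what balances exploration and exploitation; a better-informed prior, as in DCTS, shortens the exploration phase for new users or ads.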
arXiv Detail & Related papers (2022-08-25T08:13:24Z) - Preference Enhanced Social Influence Modeling for Network-Aware Cascade Prediction [59.221668173521884]
We propose a novel framework to promote cascade size prediction by enhancing the user preference modeling.
Our end-to-end method makes the user activation process of information diffusion more adaptive and accurate.
arXiv Detail & Related papers (2022-04-18T09:25:06Z) - Deep Page-Level Interest Network in Reinforcement Learning for Ads Allocation [14.9065245548275]
We propose Deep Page-level Interest Network (DPIN) to model the page-level user preference and exploit multiple types of feedback.
Specifically, we introduce four different types of page-level feedback as input, and capture user preference for item arrangement under different receptive fields.
arXiv Detail & Related papers (2022-04-01T11:58:00Z) - Reinforcement Learning based Path Exploration for Sequential Explainable Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL)
TMER-RL utilizes reinforced item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolutions on a dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z) - Dynamic Parameterized Network for CTR Prediction [6.749659219776502]
We propose a novel plug-in operation, Dynamic Parameterized Operation (DPO), to learn both explicit and implicit interactions instance-wise.
We show that introducing DPO into DNN modules and Attention modules can respectively benefit two main tasks in click-through rate (CTR) prediction.
Our Dynamic Parameterized Networks significantly outperform state-of-the-art methods in offline experiments on a public dataset and a real-world production dataset.
arXiv Detail & Related papers (2021-11-09T08:15:03Z) - Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model can automatically learn rewards from users' actions based on a discriminative actor-critic network and a Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z) - Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning [137.39196753245105]
We present a new model-based reinforcement learning algorithm that learns a multi-headed dynamics model for dynamics generalization.
We incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector.
Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods.
arXiv Detail & Related papers (2020-10-26T03:20:42Z) - TPG-DNN: A Method for User Intent Prediction Based on Total Probability Formula and GRU Loss with Multi-task Learning [36.38658213969406]
We propose a novel user intent prediction model, TPG-DNN, for this challenging task.
The proposed model has been widely used for coupon allocation, advertising, and recommendation on the Taobao platform.
arXiv Detail & Related papers (2020-08-05T13:25:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.