ADSNet: Cross-Domain LTV Prediction with an Adaptive Siamese Network in Advertising
- URL: http://arxiv.org/abs/2406.10517v1
- Date: Sat, 15 Jun 2024 06:04:46 GMT
- Title: ADSNet: Cross-Domain LTV Prediction with an Adaptive Siamese Network in Advertising
- Authors: Ruize Wang, Hui Xu, Ying Cheng, Qi He, Xing Zhou, Rui Feng, Wei Xu, Lei Huang, Jie Jiang
- Abstract summary: Advertising platforms have evolved in estimating Lifetime Value (LTV) to better align with advertisers' true performance metric.
The sparsity of real-world LTV data presents a significant challenge to LTV predictive models.
We propose to utilize external data to expand the size of purchase samples and enhance the LTV prediction model.
- Score: 28.894933598821527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advertising platforms have evolved in estimating Lifetime Value (LTV) to better align with advertisers' true performance metric. However, the sparsity of real-world LTV data presents a significant challenge to LTV predictive models (i.e., pLTV), severely limiting their capabilities. Therefore, we propose to utilize external data, in addition to the internal data of the advertising platform, to expand the size of purchase samples and enhance the LTV prediction model of the advertising platform. To tackle the issue of data distribution shift between internal and external platforms, we introduce an Adaptive Difference Siamese Network (ADSNet), which employs cross-domain transfer learning to prevent negative transfer. Specifically, ADSNet is designed to learn information that is beneficial to the target domain. We introduce a gain evaluation strategy to calculate information gain, aiding the model in learning helpful information for the target domain and providing the ability to reject noisy samples, thus avoiding negative transfer. Additionally, we design a Domain Adaptation Module as a bridge to connect different domains, reduce the distribution distance between them, and enhance the consistency of representation space distribution. We conduct extensive offline experiments and online A/B tests on a real advertising platform. Our proposed ADSNet method outperforms other methods, improving GINI by 2$\%$. The ablation study highlights the importance of the gain evaluation strategy in rejecting negative-gain samples and improving model performance. Additionally, ADSNet significantly improves long-tail prediction. The online A/B tests confirm ADSNet's efficacy, increasing online LTV by 3.47$\%$ and GMV by 3.89$\%$.
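The reported 2$\%$ improvement is on the GINI metric, a standard ranking measure for pLTV models. For context, a minimal sketch of the normalized Gini coefficient as it is commonly computed for ranking quality (this is the standard formulation, not code from the paper):

```python
import numpy as np

def normalized_gini(y_true, y_pred):
    """Normalized Gini coefficient, a common ranking metric for pLTV models.

    Orders samples by predicted value and measures how well that ordering
    concentrates the true LTV mass, normalized by the Gini of the perfect
    ordering (sorting by y_true itself), so a perfect ranking scores 1.0.
    """
    def gini(actual, pred):
        order = np.argsort(-pred, kind="mergesort")  # descending by prediction
        cum = np.cumsum(actual[order]) / actual.sum()
        # area between the cumulative-gain curve and the diagonal
        return cum.sum() / len(actual) - (len(actual) + 1) / (2.0 * len(actual))

    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return gini(y_true, y_pred) / gini(y_true, y_true)
```

A model that ranks users in the exact order of their true LTV scores 1.0; a fully inverted ranking scores -1.0.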
Related papers
- Connecting Domains and Contrasting Samples: A Ladder for Domain Generalization [52.52838658375592]
We propose a new paradigm, domain-connecting contrastive learning (DCCL), to enhance conceptual connectivity across domains.
On the data side, more aggressive data augmentation and cross-domain positive samples are introduced to improve intra-class connectivity.
The results verify that DCCL outperforms state-of-the-art baselines even without domain supervision.
arXiv Detail & Related papers (2025-10-19T04:13:29Z)
- Mobile Gamer Lifetime Value Prediction via Objective Decomposition and Reconstruction [9.270686163643672]
In this paper, we propose a novel LTV prediction method to address distribution challenges through an objective decomposition and reconstruction framework.
Based on the in-app purchase characteristics of mobile gamers, our model was designed to first predict the number of transactions at specific prices and then calculate the total payment amount from these intermediate predictions.
Our proposed model was evaluated through experiments on a real-world industrial dataset, and deployed on the TapTap RTB advertising system for online A/B testing along with the state-of-the-art ZILN model.
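The decomposition-and-reconstruction idea amounts to a two-stage computation: predict per-price transaction counts, then take the price-weighted sum. The price tiers below are hypothetical, chosen only to illustrate the structure:

```python
import numpy as np

# Hypothetical in-app purchase price tiers (dollars); illustrative only.
PRICE_TIERS = np.array([0.99, 4.99, 9.99, 49.99])

def reconstruct_ltv(predicted_counts):
    """Reconstruct total LTV from per-tier transaction-count predictions.

    predicted_counts[i] is the model's expected number of purchases at
    PRICE_TIERS[i]; the total payment is the price-weighted sum over tiers.
    """
    counts = np.asarray(predicted_counts, dtype=float)
    return float(counts @ PRICE_TIERS)
```

For example, expected counts of [2.0, 1.0, 0.5, 0.1] across these tiers reconstruct to a total of about $16.96.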
arXiv Detail & Related papers (2025-10-09T14:33:12Z)
- Graph Neural Network Enhanced Sequential Recommendation Method for Cross-Platform Ad Campaign [7.527164593769052]
A graph neural network (GNN)-based advertisement recommendation method is analyzed.
User behavior data (e.g., click frequency, active duration) reveal temporal patterns of interest evolution.
Platform features (e.g., device type, usage context) shape the environment where interest transitions occur.
arXiv Detail & Related papers (2025-07-11T18:34:02Z)
- External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery [47.418845064441605]
We show that the Variance Reduction System (VRS) implemented by Meta does not meaningfully improve access to opportunities for individuals.
We then conduct experiments to evaluate VRS with real-world ads, and show that while VRS does reduce variance, it also raises advertiser costs.
We show our approach outperforms VRS by both increasing ad exposure for users from all groups and reducing cost to advertisers.
arXiv Detail & Related papers (2025-06-19T19:29:26Z)
- DIDS: Domain Impact-aware Data Sampling for Large Language Model Training [61.10643823069603]
We present Domain Impact-aware Data Sampling (DIDS) for large language models.
DIDS groups training data based on learning effects, where a proxy language model and dimensionality reduction are employed.
It achieves 3.4% higher average performance while maintaining comparable training efficiency.
arXiv Detail & Related papers (2025-04-17T13:09:38Z)
- External Large Foundation Model: How to Efficiently Serve Trillions of Parameters for Online Ads Recommendation [58.194356020695906]
Ads recommendation is a prominent service of online advertising systems and has been actively studied.
Recent studies indicate that scaling-up and advanced design of the recommendation model can bring significant performance improvement.
However, at larger model scales, such prior studies diverge increasingly from industrial practice, as they often neglect two fundamental challenges in industrial-scale applications.
arXiv Detail & Related papers (2025-02-20T22:35:52Z)
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- Contrastive Multi-view Framework for Customer Lifetime Value Prediction [48.24479287526052]
Many existing LTV prediction methods directly train a single-view LTV predictor on consumption samples.
We propose a contrastive multi-view framework for LTV prediction, which is a plug-and-play solution compatible with various backbone models.
We conduct extensive experiments on a real-world game LTV prediction dataset and the results validate the effectiveness of our method.
arXiv Detail & Related papers (2023-06-26T03:23:53Z)
- Interpretable Deep Learning for Forecasting Online Advertising Costs: Insights from the Competitive Bidding Landscape [1.0923877073891446]
This paper presents a comprehensive study that employs various time-series forecasting methods to predict daily average CPC in the online advertising market.
We evaluate the performance of statistical models, machine learning techniques, and deep learning approaches, including the Temporal Fusion Transformer (TFT).
arXiv Detail & Related papers (2023-02-11T19:26:17Z)
- VFed-SSD: Towards Practical Vertical Federated Advertising [53.08038962443853]
We propose a semi-supervised split distillation framework VFed-SSD to alleviate the two limitations.
Specifically, we develop a self-supervised task, MatchedPair Detection (MPD), to exploit the vertically partitioned unlabeled data.
Our framework provides an efficient federation-enhanced solution for real-time display advertising with minimal deploying cost and significant performance lift.
arXiv Detail & Related papers (2022-05-31T17:45:30Z)
- Arbitrary Distribution Modeling with Censorship in Real-Time Bidding Advertising [2.562910030418378]
The purpose of Inventory Pricing is to bid the right prices for online ad opportunities, which is crucial for a Demand-Side Platform (DSP) to win auctions in Real-Time Bidding (RTB).
Most of the previous works made strong assumptions on the distribution form of the winning price, which reduced their accuracy and weakened their ability to make generalizations.
We propose a novel loss function, Neighborhood Likelihood Loss (NLL), collaborating with a proposed framework, Arbitrary Distribution Modeling (ADM) to predict the winning price distribution under censorship.
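Censorship is the core difficulty here: when the DSP loses an auction it never observes the winning price, only that it exceeded the bid. A minimal sketch of a censored log-likelihood, using an illustrative Gaussian (the paper's ADM/NLL is specifically designed to avoid such a parametric assumption):

```python
import math

def censored_loglik(mu, sigma, price, won):
    """Log-likelihood of one RTB observation under right-censorship.

    If the DSP won the auction, the winning price is observed exactly and
    contributes a density term; if it lost, only 'winning price > our bid'
    is known, so the survival function is used instead (price = our bid).
    The Gaussian form here is purely illustrative.
    """
    z = (price - mu) / sigma
    if won:  # uncensored: log density of the observed winning price
        return -math.log(sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
    # censored: log P(winning price > bid) = log(1 - Phi(z))
    return math.log(0.5 * math.erfc(z / math.sqrt(2.0)))
```

Maximizing this over (mu, sigma) is the classic Tobit-style approach that ADM generalizes away from a fixed distribution form.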
arXiv Detail & Related papers (2021-10-26T11:40:00Z)
- Mid-flight Forecasting for CPA Lines in Online Advertising [6.766999405722559]
This paper investigates the forecasting problem for CPA lines in the middle of the flight.
The proposed methodology generates relationships between various key performance metrics and optimization signals.
The relationship between advertiser spends and effective Cost Per Action (eCPA) is also characterized.
arXiv Detail & Related papers (2021-07-15T17:48:15Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
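A sketch of the NDA discriminator objective: out-of-support augmentations of real data (e.g. jigsaw-shuffled images) are treated as an additional batch of fakes. The function name and the non-saturating loss form are illustrative, not the paper's exact code:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, d_nda):
    """Standard GAN discriminator loss with Negative Data Augmentation.

    NDA samples lie outside the data support by construction, so they are
    fed to the discriminator as an extra source of 'fake' data alongside
    generator samples. Inputs are discriminator probabilities in (0, 1).
    """
    d_real = np.asarray(d_real, dtype=float)
    negatives = np.concatenate([np.asarray(d_fake, dtype=float),
                                np.asarray(d_nda, dtype=float)])
    real_term = -np.log(d_real).mean()           # push D(x_real) toward 1
    fake_term = -np.log(1.0 - negatives).mean()  # push D(fake/NDA) toward 0
    return float(real_term + fake_term)
```

Because NDA samples are guaranteed negatives, they sharpen the discriminator's estimate of the data-distribution support without requiring extra labels.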
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Learning to Infer User Hidden States for Online Sequential Advertising [52.169666997331724]
We propose our Deep Intents Sequential Advertising (DISA) method to address these issues.
The key part of interpretability is to understand a consumer's purchase intent, which is, however, unobservable (called hidden states).
arXiv Detail & Related papers (2020-09-03T05:12:26Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.