Interpretable Deep Learning for Forecasting Online Advertising Costs: Insights from the Competitive Bidding Landscape
- URL: http://arxiv.org/abs/2302.05762v2
- Date: Wed, 21 Aug 2024 14:18:34 GMT
- Title: Interpretable Deep Learning for Forecasting Online Advertising Costs: Insights from the Competitive Bidding Landscape
- Authors: Fynn Oldenburg, Qiwei Han, Maximilian Kaiser
- Abstract summary: This paper presents a comprehensive study that employs various time-series forecasting methods to predict daily average CPC in the online advertising market.
We evaluate the performance of statistical models, machine learning techniques, and deep learning approaches, including the Temporal Fusion Transformer (TFT).
- Score: 1.0923877073891446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As advertisers increasingly shift their budgets toward digital advertising, accurately forecasting advertising costs becomes essential for optimizing marketing campaign returns. This paper presents a comprehensive study that employs various time-series forecasting methods to predict daily average CPC in the online advertising market. We evaluate the performance of statistical models, machine learning techniques, and deep learning approaches, including the Temporal Fusion Transformer (TFT). Our findings reveal that incorporating multivariate models, enriched with covariates derived from competitors' CPC patterns through time-series clustering, significantly improves forecasting accuracy. We interpret the results by analyzing feature importance and temporal attention, demonstrating how the models leverage both the advertiser's data and insights from the competitive landscape. Additionally, our method proves robust during major market shifts, such as the COVID-19 pandemic, consistently outperforming models that rely solely on individual advertisers' data. This study introduces a scalable technique for selecting relevant covariates from a broad pool of advertisers, offering more accurate long-term forecasts and strategic insights into budget allocation and competitive dynamics in digital advertising.
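The abstract does not come with an implementation, so the following is only a rough sketch of the covariate-construction idea it describes, under several assumptions of mine: the `tslearn` library, k-means with five clusters, and cluster-mean aggregation are illustrative choices, not details taken from the paper. The idea is that competitors' daily CPC series are clustered, and each cluster's mean series is attached as an extra covariate for a multivariate forecaster such as the TFT.
```python
# Illustrative sketch only (assumed tooling, not the authors' implementation):
# cluster competitors' daily CPC series and turn each cluster's mean series
# into a covariate for a multivariate forecasting model.
import numpy as np
from tslearn.preprocessing import TimeSeriesScalerMeanVariance
from tslearn.clustering import TimeSeriesKMeans

def cluster_mean_covariates(competitor_cpc: np.ndarray,
                            n_clusters: int = 5,
                            seed: int = 0) -> np.ndarray:
    """competitor_cpc: (n_advertisers, n_days) daily average CPC.
    Returns (n_days, n_clusters) cluster-mean CPC covariates."""
    # z-normalise each series so clusters capture shape, not absolute price level
    scaled = TimeSeriesScalerMeanVariance().fit_transform(competitor_cpc)
    km = TimeSeriesKMeans(n_clusters=n_clusters, metric="euclidean",
                          random_state=seed)  # "dtw" is a common alternative
    labels = km.fit_predict(scaled)
    # one covariate per cluster: the average raw CPC of its member advertisers
    return np.stack([competitor_cpc[labels == c].mean(axis=0)
                     for c in range(n_clusters)], axis=1)

# Toy example: 100 competitors, one year of synthetic daily CPC values
rng = np.random.default_rng(0)
cpc = rng.gamma(shape=2.0, scale=0.5, size=(100, 365))
covariates = cluster_mean_covariates(cpc)  # shape (365, 5)
# `covariates` would then be fed, together with the advertiser's own history,
# into a multivariate model such as the Temporal Fusion Transformer.
```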
Related papers
- Trading through Earnings Seasons using Self-Supervised Contrastive Representation Learning [1.6574413179773761]
Contrastive Earnings Transformer (CET) is a self-supervised learning approach rooted in Contrastive Predictive Coding (CPC).
Our research delves deep into the intricacies of stock data, evaluating how various models handle the rapidly changing relevance of earnings data over time and over different sectors.
CET's foundation on CPC allows for a nuanced understanding, facilitating consistent stock predictions even as the earnings data ages.
arXiv Detail & Related papers (2024-09-25T22:09:59Z) - Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z) - F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improves generalization, with the excess risk decreasing as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z) - A Comparative Study on Enhancing Prediction in Social Network Advertisement through Data Augmentation [0.6707149143800017]
This study presents a generative augmentation framework for social network advertising data.
Our framework explores three generative models for data augmentation: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Gaussian Mixture Models (GMMs).
arXiv Detail & Related papers (2024-04-22T01:16:11Z) - Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model [50.06663781566795]
We consider a dynamic model in which both consumers' preferences and their price sensitivity vary over time.
We measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance.
Our regret analysis results not only demonstrate optimality of the proposed policy but also show that for policy planning it is essential to incorporate available structural information.
arXiv Detail & Related papers (2023-03-28T00:23:23Z) - Estimating defection in subscription-type markets: empirical analysis from the scholarly publishing industry [0.0]
We present the first empirical study on customer churn prediction in the scholarly publishing industry.
The study evaluates our proposed prediction method on customer subscription data spanning a period of 6.5 years.
We show that this approach can be both accurate and uniquely useful in the business-to-business context.
arXiv Detail & Related papers (2022-11-18T01:29:51Z) - An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation [91.62129090006745]
This paper studies the distribution shift problem from the perspective of pre-training and data augmentation.
We provide the first comprehensive empirical study focusing on pre-training and data augmentation.
arXiv Detail & Related papers (2022-05-25T13:04:53Z) - Approaching sales forecasting using recurrent neural networks and transformers [57.43518732385863]
We develop three alternatives to tackle the problem of forecasting customer sales at the day/store/item level using deep learning techniques.
Our empirical results show how good performance can be achieved with a simple sequence-to-sequence architecture and minimal data preprocessing effort.
The proposed solution achieves an RMSLE (root mean squared logarithmic error; see the sketch after this list) of around 0.54, which is competitive with other, more problem-specific solutions proposed in the Kaggle competition.
arXiv Detail & Related papers (2022-04-16T12:03:52Z) - A Unified Framework for Campaign Performance Forecasting in Online Display Advertising [9.005665883444902]
Interpretable and accurate results could enable advertisers to manage and optimize their campaign criteria.
The proposed framework reproduces campaign performance on historical logs under various bidding types with a unified replay algorithm.
The method then captures mixture calibration patterns among related forecast indicators to map the estimated results to the true ones.
arXiv Detail & Related papers (2022-02-24T03:04:29Z) - Test-time Collective Prediction [73.74982509510961]
In machine learning, multiple parties may want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Profit-oriented sales forecasting: a comparison of forecasting techniques from a business perspective [3.613072342189595]
This paper compares a large array of techniques on 35 time series that consist of both industry data from the Coca-Cola Company and publicly available datasets.
It introduces a novel and completely automated profit-driven approach that takes into account the expected profit that a technique can create during both the model-building and evaluation processes.
arXiv Detail & Related papers (2020-02-03T14:50:24Z)
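As an aside, the RMSLE quoted in the sales-forecasting entry above is the root mean squared logarithmic error; the snippet below is my own minimal illustration of the metric, not code from any of the listed papers.
```python
# Minimal illustration of RMSLE (root mean squared logarithmic error);
# not taken from any of the papers listed above.
import numpy as np

def rmsle(y_true, y_pred) -> float:
    """sqrt(mean((log1p(y_pred) - log1p(y_true))^2)).
    The log1p transform makes errors relative, so mistakes on low-volume
    days count roughly as much as mistakes on high-volume days."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2)))

# Example: forecasted vs. actual daily unit sales
print(rmsle([10, 120, 0], [12, 100, 1]))  # ≈ 0.42
```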