Do not Waste Money on Advertising Spend: Bid Recommendation via
Concavity Changes
- URL: http://arxiv.org/abs/2212.13923v1
- Date: Mon, 26 Dec 2022 08:32:41 GMT
- Title: Do not Waste Money on Advertising Spend: Bid Recommendation via
Concavity Changes
- Authors: Deguang Kong, Konstantin Shmakov and Jian Yang
- Abstract summary: In computational advertising, a challenging problem is how to recommend a bid for advertisers to achieve the best return on investment.
This paper presents a bid recommendation scenario, under a budget constraint, that exploits concavity changes in the click revenue curve.
- Score: 19.857681941728597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In computational advertising, a challenging problem is how to recommend a
bid for advertisers to achieve the best return on investment (ROI) given a budget
constraint. This paper presents a bid recommendation scenario that discovers
concavity changes in click prediction curves. The recommended bid is derived
from the turning point at which the curve shifts from significant increase (i.e.,
concave downward) to slow increase (convex upward). A parametric learning based
method is applied by solving the corresponding constrained optimization problem.
Empirical studies on real-world advertising scenarios clearly demonstrate
performance gains on business metrics, including revenue increase, click
increase and advertiser ROI increase.
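To make the turning-point idea concrete, here is a minimal sketch (not the authors' implementation): it fits a hypothetical parametric sigmoid click curve to observed (bid, clicks) pairs and recommends the bid where the second derivative of the fitted curve changes sign, capped by an assumed budget-derived maximum bid; all function and variable names are illustrative.

```python
# Minimal sketch (not the paper's implementation): recommend a bid at the
# concavity change (inflection point) of a fitted click-prediction curve.
# The sigmoid form, parameter names, and budget handling are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def click_curve(bid, L, k, b0):
    """Hypothetical parametric click curve: a saturating sigmoid in the bid."""
    return L / (1.0 + np.exp(-k * (bid - b0)))

def recommend_bid(bids, clicks, max_affordable_bid):
    """Fit the curve and return the bid at the concavity-change point.

    For this sigmoid the second derivative changes sign at bid = b0, which
    serves as the turning point named in the abstract.
    """
    (L, k, b0), _ = curve_fit(click_curve, bids, clicks,
                              p0=[clicks.max(), 1.0, np.median(bids)])
    # Respect the budget constraint by capping at the highest affordable bid.
    return min(b0, max_affordable_bid)

if __name__ == "__main__":
    bids = np.linspace(0.1, 5.0, 50)
    clicks = 100 / (1 + np.exp(-2.0 * (bids - 1.5))) + np.random.normal(0, 2, 50)
    print(recommend_bid(bids, clicks, max_affordable_bid=3.0))
```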
Related papers
- Optimizing Search Advertising Strategies: Integrating Reinforcement Learning with Generalized Second-Price Auctions for Enhanced Ad Ranking and Bidding [36.74368014856906]
We propose a model that adjusts to varying user interactions and optimizes the balance between advertiser cost, user relevance, and platform revenue.
Our results suggest significant improvements in ad placement accuracy and cost efficiency, demonstrating the model's applicability in real-world scenarios.
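For readers unfamiliar with the generalized second-price (GSP) auction named in this paper's title, the following is a generic, textbook-style sketch of quality-weighted GSP allocation and pricing, not this paper's model; the ad IDs and quality scores are illustrative.

```python
# Generic sketch of a quality-weighted generalized second-price (GSP) auction,
# for context only; not this paper's model.
def gsp_allocate_and_price(bids, quality, n_slots):
    """bids/quality: dicts ad_id -> value. Returns [(ad_id, price)] per slot.

    Ads are ranked by bid * quality; each winner pays the minimum bid that would
    keep its rank, i.e. the next ad's rank score divided by its own quality.
    """
    ranked = sorted(bids, key=lambda ad: bids[ad] * quality[ad], reverse=True)
    results = []
    for pos, ad in enumerate(ranked[:n_slots]):
        if pos + 1 < len(ranked):
            runner_up = ranked[pos + 1]
            price = bids[runner_up] * quality[runner_up] / quality[ad]
        else:
            price = 0.0  # no competitor below: pay the reserve (assumed 0 here)
        results.append((ad, round(price, 4)))
    return results

print(gsp_allocate_and_price({"a": 2.0, "b": 1.5, "c": 1.0},
                             {"a": 0.9, "b": 1.0, "c": 0.8}, n_slots=2))
```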
arXiv Detail & Related papers (2024-05-22T06:30:55Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Rate-Optimal Policy Optimization for Linear Markov Decision Processes [65.5958446762678]
We obtain rate-optimal $\widetilde{O}(\sqrt{K})$ regret, where $K$ denotes the number of episodes.
Our work is the first to establish the optimal (w.r.t. $K$) rate of convergence in the setting with bandit feedback.
Previously, no algorithm with an optimal rate guarantee was known for this setting.
arXiv Detail & Related papers (2023-08-28T15:16:09Z)
- Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation [69.0695698566235]
We study reinforcement learning with linear function approximation and adversarially changing cost functions.
We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback.
arXiv Detail & Related papers (2023-01-30T17:26:39Z)
- Demystifying Advertising Campaign Bid Recommendation: A Constraint target CPA Goal Optimization [19.857681941728597]
This paper presents a bid optimization scenario for achieving advertisers' target cost-per-acquisition (tCPA) goals.
We build an optimization engine that makes decisions by solving a rigorously formalized constrained optimization problem.
The proposed model can naturally recommend a bid that meets the advertisers' expectations by making inference over advertisers' historical auction behaviors.
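As an illustration of the kind of constrained formulation this summary refers to (a generic sketch, not the paper's engine), one can select the bid that maximizes expected conversions while keeping the implied CPA at or below the target; the response curves and names below are assumptions.

```python
# Generic sketch of a tCPA-constrained bid choice (not the paper's engine):
# maximize expected conversions subject to cost per acquisition <= target.
import numpy as np

def recommend_bid_for_tcpa(candidate_bids, win_rate, conv_rate, target_cpa):
    """Pick the bid with the most expected conversions whose implied CPA <= target.

    win_rate and conv_rate are callables assumed to be estimated from the
    advertiser's historical auction behavior.
    """
    best_bid, best_conv = None, -1.0
    for b in candidate_bids:
        expected_cost = b * win_rate(b)              # expected spend per auction
        expected_conv = win_rate(b) * conv_rate(b)   # expected conversions per auction
        if expected_conv > 0 and expected_cost / expected_conv <= target_cpa:
            if expected_conv > best_conv:
                best_bid, best_conv = b, expected_conv
    return best_bid

# Example with toy monotone response curves.
bid = recommend_bid_for_tcpa(
    candidate_bids=np.linspace(0.1, 5.0, 50),
    win_rate=lambda b: 1 - np.exp(-b),   # higher bid -> higher win rate
    conv_rate=lambda b: 0.02,            # flat conversion rate, for illustration
    target_cpa=30.0,
)
print(bid)
```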
arXiv Detail & Related papers (2022-12-26T07:43:26Z)
- Adaptive Risk-Aware Bidding with Budget Constraint in Display Advertising [47.14651340748015]
We propose a novel adaptive risk-aware bidding algorithm with budget constraint via reinforcement learning.
We theoretically unveil the intrinsic relation between the uncertainty and the risk tendency based on value at risk (VaR).
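Value at risk is a standard quantile-based risk measure; the sketch below (generic, not the paper's derivation) computes the VaR of a campaign's simulated return distribution, with an illustrative sample and confidence level.

```python
# Generic sketch of value at risk (VaR); not the paper's derivation.
import numpy as np

def value_at_risk(returns, alpha=0.95):
    """VaR at confidence alpha: loss threshold exceeded with probability <= 1 - alpha."""
    losses = -np.asarray(returns)
    return float(np.quantile(losses, alpha))

# Toy sample of campaign returns (invented numbers).
simulated_returns = np.random.normal(loc=5.0, scale=20.0, size=10_000)
print(value_at_risk(simulated_returns, alpha=0.95))
```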
arXiv Detail & Related papers (2022-12-06T18:50:09Z)
- Functional Optimization Reinforcement Learning for Real-Time Bidding [14.5826735379053]
Real-time bidding is the new paradigm of programmatic advertising.
Existing approaches are struggling to provide a satisfactory solution for bidding optimization.
This paper proposes a multi-agent reinforcement learning architecture for RTB with functional optimization.
arXiv Detail & Related papers (2022-06-25T06:12:17Z)
- Bid Optimization using Maximum Entropy Reinforcement Learning [0.3149883354098941]
This paper focuses on optimizing a single advertiser's bidding strategy using reinforcement learning (RL) in real-time bidding (RTB).
We first utilize a widely accepted linear bidding function to compute every impression's base price and optimize it by a mutable adjustment factor derived from the RTB auction environment.
Finally, the empirical study on a public dataset demonstrates that the proposed bidding strategy has superior performance compared with the baselines.
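The widely accepted linear bidding function is commonly written as bid = adjustment × base_bid × (predicted CTR / average CTR); the minimal sketch below assumes that form, with the adjustment factor standing in for the RL-derived multiplier and all names illustrative.

```python
# Minimal sketch of the linear bidding function mentioned above:
# bid = adjustment * base_bid * (predicted CTR / average CTR).
def linear_bid(predicted_ctr, avg_ctr, base_bid, adjustment=1.0):
    return adjustment * base_bid * (predicted_ctr / avg_ctr)

# An impression predicted to be twice as clicky as average gets twice the base
# price, scaled by the current adjustment factor.
print(linear_bid(predicted_ctr=0.04, avg_ctr=0.02, base_bid=1.5, adjustment=0.8))
```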
arXiv Detail & Related papers (2021-10-11T06:53:53Z)
- Dynamic Knapsack Optimization Towards Efficient Multi-Channel Sequential Advertising [52.3825928886714]
We formulate the sequential advertising strategy optimization as a dynamic knapsack problem.
We propose a theoretically guaranteed bilevel optimization framework, which significantly reduces the solution space of the original optimization problem.
To improve the exploration efficiency of reinforcement learning, we also devise an effective action space reduction approach.
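To make the knapsack framing concrete, here is a toy 0/1 knapsack dynamic program with the budget as capacity and candidate ad exposures as items (a generic sketch, not the paper's bilevel framework); costs are assumed to be discretized into integer budget units.

```python
# Toy 0/1 knapsack dynamic program illustrating the budget-constrained framing
# (not the paper's bilevel framework). Costs are in integer budget units.
def knapsack(values, costs, budget):
    """Return the maximum total value achievable within the budget."""
    best = [0] * (budget + 1)
    for value, cost in zip(values, costs):
        # Iterate the budget downward so each item is used at most once.
        for b in range(budget, cost - 1, -1):
            best[b] = max(best[b], best[b - cost] + value)
    return best[budget]

# Expected revenue per ad exposure vs. its cost, with a budget of 10 units.
print(knapsack(values=[6, 10, 12, 7], costs=[1, 3, 4, 5], budget=10))
```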
arXiv Detail & Related papers (2020-06-29T18:50:35Z)
- Online Joint Bid/Daily Budget Optimization of Internet Advertising Campaigns [115.96295568115251]
We study the problem of automating the online joint bid/daily budget optimization of pay-per-click advertising campaigns over multiple channels.
For every campaign, we capture the dependency of the number of clicks on the bid and daily budget by Gaussian Processes.
We design four algorithms and show that they suffer from a regret that is upper bounded with high probability by $O(\sqrt{T})$.
We present the results of adopting our algorithms in a real-world application with a daily average spend of 1,000 Euros for more than one year.
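As a rough illustration of capturing the click-versus-bid dependency with a Gaussian Process (a generic scikit-learn sketch, not the authors' algorithms), the posterior mean and standard deviation over candidate bids could feed a bandit-style exploration rule; the data below are invented.

```python
# Generic sketch: model clicks as a function of the bid with a Gaussian Process
# (scikit-learn), as a stand-in for the paper's click-dependency model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy observations of (bid, observed clicks) for one campaign/channel.
bids = np.array([[0.2], [0.5], [1.0], [1.5], [2.0], [3.0]])
clicks = np.array([5.0, 14.0, 31.0, 42.0, 49.0, 55.0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(bids, clicks)

# Posterior mean and uncertainty over a grid of candidate bids, which a
# bandit-style algorithm could use to trade off exploration and exploitation.
grid = np.linspace(0.1, 3.5, 20).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
print(mean[:3], std[:3])
```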
arXiv Detail & Related papers (2020-03-03T11:07:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.