Safe Online Bid Optimization with Return-On-Investment and Budget
Constraints subject to Uncertainty
- URL: http://arxiv.org/abs/2201.07139v1
- Date: Tue, 18 Jan 2022 17:24:20 GMT
- Title: Safe Online Bid Optimization with Return-On-Investment and Budget
Constraints subject to Uncertainty
- Authors: Matteo Castiglioni, Alessandro Nuara, Giulia Romano, Giorgio Spadaro,
Francesco Trovò, Nicola Gatti
- Abstract summary: We study the nature of both the optimization and learning problems.
We provide an algorithm, namely GCB, guaranteeing sublinear regret at the cost of a potentially linear number of constraint violations.
More interestingly, we provide an algorithm, namely GCB_safe(psi,phi), guaranteeing both sublinear pseudo-regret and safety w.h.p. at the cost of accepting tolerances psi and phi.
- Score: 87.81197574939355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In online marketing, the advertisers' goal is usually a tradeoff between
achieving high volumes and high profitability. The companies' business units
customarily address this tradeoff by maximizing the volumes while guaranteeing
a lower bound to the Return On Investment (ROI). This paper investigates
combinatorial bandit algorithms for the bid optimization of advertising
campaigns subject to uncertain budget and ROI constraints. We study the nature
of both the optimization and learning problems. In particular, when focusing on
the optimization problem without uncertainty, we show that it is inapproximable
within any factor unless P=NP, and we provide a pseudo-polynomial-time
algorithm that achieves an optimal solution. When considering uncertainty, we
prove that no online learning algorithm can simultaneously guarantee a sublinear
pseudo-regret and a sublinear number of violations of the (ROI or budget)
constraints during the learning process. Thus, we provide an algorithm, namely
GCB, guaranteeing sublinear regret at the cost of a potentially linear number
of constraint violations. We also design its safe version, namely GCB_{safe},
guaranteeing w.h.p. a constant upper bound on the number of constraint
violations at the cost of a linear pseudo-regret. More interestingly, we
provide an algorithm, namely GCB_{safe}(\psi,\phi), guaranteeing both sublinear
pseudo-regret and safety w.h.p. at the cost of accepting tolerances \psi and
\phi in the satisfaction of the ROI and budget constraints, respectively. This
algorithm mitigates the risks due to constraint violations without precluding
convergence to the optimal solution. Finally, we
experimentally compare our algorithms in terms of
pseudo-regret/constraint-violation tradeoff in settings generated from
real-world data, showing the importance of adopting safety constraints in
practice and the effectiveness of our algorithms.
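To make the offline problem concrete, below is a minimal sketch of a pseudo-polynomial-time dynamic program of the kind the abstract alludes to: it picks one bid per campaign so as to maximize total expected value while keeping total expected cost within the budget and the ROI (total value over total cost) above a threshold. The function name, its parameters, and the cost-discretization scheme are illustrative assumptions, not the paper's actual implementation.

```python
def optimize_bids(campaigns, budget, roi_min, cost_step):
    """Illustrative pseudo-polynomial DP for the offline bid-optimization problem.

    campaigns: one list per campaign of (expected_value, expected_cost) pairs,
               one pair per admissible bid (a (0.0, 0.0) pair models not bidding).
    budget:    maximum total expected cost.
    roi_min:   lower bound on ROI, i.e. total value / total cost.
    cost_step: discretization step; costs are rounded to multiples of it.
    """
    n_levels = int(budget // cost_step) + 1
    NEG = float("-inf")
    # best[y] = maximum total value attainable with discretized total cost y
    best = [NEG] * n_levels
    best[0] = 0.0
    for bids in campaigns:
        nxt = [NEG] * n_levels
        for y, val in enumerate(best):
            if val == NEG:
                continue
            for value, cost in bids:
                y2 = y + int(round(cost / cost_step))
                if y2 < n_levels and val + value > nxt[y2]:
                    nxt[y2] = val + value
        best = nxt
    # keep only spend levels where the ROI constraint holds, return the best value
    candidates = [(v, y * cost_step) for y, v in enumerate(best)
                  if v > NEG and v >= roi_min * y * cost_step]
    return max(candidates, default=None)  # (total value, total cost) or None


# Toy usage with made-up numbers: two campaigns, three admissible bids each.
if __name__ == "__main__":
    campaigns = [
        [(0.0, 0.0), (10.0, 4.0), (16.0, 9.0)],
        [(0.0, 0.0), (7.0, 3.0), (12.0, 8.0)],
    ]
    print(optimize_bids(campaigns, budget=10.0, roi_min=2.0, cost_step=1.0))
    # -> (17.0, 7.0)
```

Roughly speaking, the online algorithms above (GCB and its safe variants) would repeatedly solve an optimization of this form on optimistic or conservative confidence-bound estimates of the unknown value and cost curves; the exact confidence-bound machinery is described in the paper.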
Related papers
- Best-of-Both-Worlds Policy Optimization for CMDPs with Bandit Feedback [34.7178680288326]
Stradi et al. (2024) proposed the first best-of-both-worlds algorithm for constrained Markov decision processes (CMDPs).
In this paper, we provide the first best-of-both-worlds algorithm for CMDPs with bandit feedback.
Our algorithm is based on a policy optimization approach, which is much more efficient than occupancy-measure-based methods.
arXiv Detail & Related papers (2024-10-03T07:44:40Z) - Learning Constrained Markov Decision Processes With Non-stationary Rewards and Constraints [34.7178680288326]
In constrained Markov decision processes (CMDPs) with adversarial rewards and constraints, a well-known impossibility result prevents any algorithm from attaining sublinear regret and sublinear constraint violation.
We show that this negative result can be eased in CMDPs with non-stationary rewards and constraints, by providing algorithms whose performances smoothly degrade as non-stationarity increases.
arXiv Detail & Related papers (2024-05-23T09:48:48Z) - LinearAPT: An Adaptive Algorithm for the Fixed-Budget Thresholding
Linear Bandit Problem [4.666048091337632]
We present LinearAPT, a novel algorithm designed for the fixed budget setting of the Thresholding Linear Bandit (TLB) problem.
Our contributions highlight the adaptability, simplicity, and computational efficiency of LinearAPT, making it a valuable addition to the toolkit for addressing complex sequential decision-making challenges.
arXiv Detail & Related papers (2024-03-10T15:01:50Z) - Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement
Learning: Adaptivity and Computational Efficiency [90.40062452292091]
We present the first computationally efficient algorithm for linear bandits with heteroscedastic noise.
Our algorithm is adaptive to the unknown variance of noise and achieves an $\tilde{O}(d\sqrt{\sum_{k=1}^{K}\sigma_k^2}+d)$ regret.
We also propose a variance-adaptive algorithm for linear mixture Markov decision processes (MDPs) in reinforcement learning.
arXiv Detail & Related papers (2023-02-21T00:17:24Z) - Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear
Contextual Bandits and Markov Decision Processes [59.61248760134937]
We propose an efficient algorithm to achieve a regret of $\tilde{O}(\sqrt{T}+\zeta)$.
The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandit.
We generalize our algorithm to the episodic MDP setting and are the first to achieve an additive dependence on the corruption level $\zeta$.
arXiv Detail & Related papers (2022-12-12T15:04:56Z) - A Unifying Framework for Online Optimization with Long-Term Constraints [62.35194099438855]
We study online learning problems in which a decision maker has to take a sequence of decisions subject to $m$ long-term constraints.
The goal is to maximize their total reward, while at the same time achieving small cumulative violation across the $T$ rounds.
We present the first best-of-both-worlds algorithm for this general class of problems, with no-regret guarantees both when rewards and constraints are selected according to an unknown model and when they are selected at each round by an adversary.
arXiv Detail & Related papers (2022-09-15T16:59:19Z) - Log Barriers for Safe Black-box Optimization with Application to Safe
Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy optimization tasks in safe reinforcement learning (an illustrative sketch of a log-barrier step is given after this list).
arXiv Detail & Related papers (2022-07-21T11:14:47Z) - Safe Online Convex Optimization with Unknown Linear Safety Constraints [0.0]
We study the problem of safe online convex optimization, where the action at each time step must satisfy a set of linear safety constraints.
The parameters that specify the linear safety constraints are unknown to the algorithm.
We show that, assuming the availability of a safe baseline action, the SO-PGD algorithm achieves a regret of $O(T^{2/3})$.
arXiv Detail & Related papers (2021-11-14T19:49:19Z) - Concave Utility Reinforcement Learning with Zero-Constraint Violations [43.29210413964558]
We consider the problem of concave utility reinforcement learning (CURL) with convex constraints.
We propose a model-based learning algorithm that also achieves zero constraint violations.
arXiv Detail & Related papers (2021-09-12T06:13:33Z) - Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
arXiv Detail & Related papers (2020-10-12T17:41:44Z)