Learning in Repeated Multi-Unit Pay-As-Bid Auctions
- URL: http://arxiv.org/abs/2307.15193v2
- Date: Mon, 15 Jul 2024 19:58:56 GMT
- Title: Learning in Repeated Multi-Unit Pay-As-Bid Auctions
- Authors: Rigel Galgana, Negin Golrezaei
- Abstract summary: We consider the problem of learning how to bid in repeated multi-unit pay-as-bid auctions.
The problem of learning how to bid in pay-as-bid auctions is challenging due to the combinatorial nature of the action space.
We show that the optimal solution to the offline problem can be obtained using a polynomial time dynamic programming (DP) scheme.
- Score: 3.6294895527930504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by Carbon Emissions Trading Schemes, Treasury Auctions, and Procurement Auctions, which all involve the auctioning of homogeneous multiple units, we consider the problem of learning how to bid in repeated multi-unit pay-as-bid auctions. In each of these auctions, a large number of (identical) items are to be allocated to the largest submitted bids, where the price of each of the winning bids is equal to the bid itself. The problem of learning how to bid in pay-as-bid auctions is challenging due to the combinatorial nature of the action space. We overcome this challenge by focusing on the offline setting, where the bidder optimizes their vector of bids while only having access to the past submitted bids by other bidders. We show that the optimal solution to the offline problem can be obtained using a polynomial time dynamic programming (DP) scheme. We leverage the structure of the DP scheme to design online learning algorithms with polynomial time and space complexity under full information and bandit feedback settings. We achieve an upper bound on regret of $O(M\sqrt{T\log |\mathcal{B}|})$ and $O(M\sqrt{|\mathcal{B}|T\log |\mathcal{B}|})$ respectively, where $M$ is the number of units demanded by the bidder, $T$ is the total number of auctions, and $|\mathcal{B}|$ is the size of the discretized bid space. We accompany these results with a regret lower bound, which matches the linear dependence on $M$. Our numerical results suggest that when all agents behave according to our proposed no-regret learning algorithms, the resulting market dynamics mainly converge to a welfare-maximizing equilibrium where bidders submit uniform bids. Lastly, our experiments demonstrate that the pay-as-bid auction consistently generates significantly higher revenue compared to its popular alternative, the uniform price auction.
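The offline DP described in the abstract can be sketched as follows. This is a hedged, illustrative reconstruction, not the paper's implementation: it assumes the learner's $i$-th highest bid wins iff it strictly exceeds the $(K-i+1)$-th highest competing bid, and that bids must be non-increasing; all function and variable names are hypothetical.

```python
def optimal_offline_bids(values, competing_bids, K, bid_grid):
    """values: marginal values v_1 >= ... >= v_M for the M units demanded.
    competing_bids: past opponents' bids (any order).
    K: total number of identical units auctioned.
    bid_grid: discretized bid space B.
    Returns (bids, utility): an optimal non-increasing bid vector and its
    utility. Runs in O(M * |B|) time after sorting."""
    M = len(values)
    c = sorted(competing_bids, reverse=True)
    B = sorted(bid_grid)
    n = len(B)

    def threshold(i):
        # 0-indexed: own bid i wins iff it exceeds the (K - i)-th highest
        # competing bid, i.e. index K - i - 1 in the sorted list c.
        j = K - i - 1
        if j < 0:
            return float("inf")    # more own units than supply: cannot win
        if j >= len(c):
            return float("-inf")   # too few competitors: always wins
        return c[j]

    NEG = float("-inf")
    # f[i][j]: best utility from units i..M-1 when bid i is capped at B[j]
    f = [[0.0] * n for _ in range(M + 1)]
    choice = [[0] * n for _ in range(M)]
    for i in range(M - 1, -1, -1):
        best, best_j = NEG, 0
        for j in range(n):         # running max: raising the cap only helps
            b = B[j]
            gain = (values[i] - b) if b > threshold(i) else 0.0
            cand = gain + f[i + 1][j]
            if cand >= best:       # prefer the higher bid on ties
                best, best_j = cand, j
            f[i][j] = best
            choice[i][j] = best_j
    bids, j = [], n - 1            # recover the non-increasing bid vector
    for i in range(M):
        j = choice[i][j]
        bids.append(B[j])
    return bids, f[0][n - 1]
```

For instance, with $K = 2$ units, values $(10, 8)$, competing bids $(5, 3)$, and an integer bid grid $\{0, \dots, 10\}$, the DP returns the uniform bid vector $(6, 6)$ with utility $6$, consistent with the paper's observation that learned bids tend toward uniformity.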
Related papers
- Learning to Coordinate Bidders in Non-Truthful Auctions [6.3923058661534276]
We study the complexity of learning Bayes correlated equilibria in non-truthful auctions. We prove that the BCEs can be learned with $\tilde O(\frac{n}{\varepsilon^2})$ samples from bidders' values.
arXiv Detail & Related papers (2025-07-03T17:03:14Z) - Nash Equilibrium Constrained Auto-bidding With Bi-level Reinforcement Learning [64.2367385090879]
We propose a new formulation of the auto-bidding problem from the platform's perspective.
It aims to maximize the social welfare of all advertisers under the $epsilon$-NE constraint.
The NCB problem presents significant challenges due to its constrained bi-level structure and the typically large number of advertisers involved.
arXiv Detail & Related papers (2025-03-13T12:25:36Z) - Improved learning rates in multi-unit uniform price auctions [20.8319469276025]
We study the problem of online learning in repeated multi-unit uniform price auctions focusing on the adversarial opposing bid setting.
We prove that a learning algorithm leveraging the structure of this problem achieves a regret of $\tilde O(K^{4/3}T^{2/3})$ under bandit feedback.
Inspired by electricity reserve markets, we introduce a different feedback model under which all winning bids are revealed.
arXiv Detail & Related papers (2025-01-17T13:26:12Z) - Procurement Auctions via Approximately Optimal Submodular Optimization [53.93943270902349]
We study procurement auctions, where an auctioneer seeks to acquire services from strategic sellers with private costs.
Our goal is to design computationally efficient auctions that maximize the difference between the quality of the acquired services and the total cost of the sellers.
arXiv Detail & Related papers (2024-11-20T18:06:55Z) - Randomized Truthful Auctions with Learning Agents [10.39657928150242]
We study a setting where agents use no-regret learning to participate in repeated auctions.
We show that when bidders participate in second-price auctions using no-regret bidding algorithms, the runner-up bidder may not converge to bidding truthfully.
We define a notion of auctioneer regret, comparing the revenue generated to the revenue of a second-price auction with truthful bids.
arXiv Detail & Related papers (2024-11-14T15:28:40Z) - Fair Allocation in Dynamic Mechanism Design [57.66441610380448]
We consider a problem where an auctioneer sells an indivisible good to groups of buyers in every round, for a total of $T$ rounds.
The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group.
arXiv Detail & Related papers (2024-05-31T19:26:05Z) - No-Regret Algorithms in non-Truthful Auctions with Budget and ROI Constraints [0.9694940903078658]
We study the problem of designing online autobidding algorithms to optimize value subject to ROI and budget constraints.
Our main result is an algorithm with full information feedback that guarantees a near-optimal $\tilde O(\sqrt{T})$ regret with respect to the best Lipschitz function.
arXiv Detail & Related papers (2024-04-15T14:31:53Z) - Combinatorial Stochastic-Greedy Bandit [79.1700188160944]
We propose a novel Stochastic-Greedy Bandit (SGB) algorithm for multi-armed bandit problems when no extra information other than the joint reward of the selected set of $n$ arms at each time $t \in [T]$ is observed.
SGB adopts an optimized-explore-then-commit approach and is specifically designed for scenarios with a large set of base arms.
arXiv Detail & Related papers (2023-12-13T11:08:25Z) - Learning and Collusion in Multi-unit Auctions [17.727436775513368]
We consider repeated multi-unit auctions with uniform pricing.
We analyze the properties of this auction in both the offline and online settings.
We show that the $(K+1)$-st price format is susceptible to collusion among the bidders.
arXiv Detail & Related papers (2023-05-27T08:00:49Z) - Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics [53.62091043347035]
We study a game between autobidding algorithms that compete in an online advertising platform.
We propose a gradient-based learning algorithm that is guaranteed to satisfy all constraints and achieves vanishing individual regret.
arXiv Detail & Related papers (2023-01-30T21:59:30Z) - A Reinforcement Learning Approach in Multi-Phase Second-Price Auction Design [158.0041488194202]
We study reserve price optimization in multi-phase second price auctions.
From the seller's perspective, we need to efficiently explore the environment in the presence of potentially nontruthful bidders.
Third, the seller's per-step revenue is unknown, nonlinear, and cannot even be directly observed from the environment.
arXiv Detail & Related papers (2022-10-19T03:49:05Z) - ProportionNet: Balancing Fairness and Revenue for Auction Design with Deep Learning [55.76903822619047]
We study the design of revenue-maximizing auctions with strong incentive guarantees.
We extend techniques for approximating auctions using deep learning to address concerns of fairness while maintaining high revenue and strong incentive guarantees.
arXiv Detail & Related papers (2020-10-13T13:54:21Z) - Optimal No-regret Learning in Repeated First-price Auctions [38.908235632001116]
We study online learning in repeated first-price auctions.
We develop the first learning algorithm that achieves a near-optimal $\widetilde{O}(\sqrt{T})$ regret bound.
arXiv Detail & Related papers (2020-03-22T03:32:09Z)
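Several of the papers above, like the main paper's full-information setting, rely on no-regret learning over a discretized bid space. A minimal multiplicative-weights (Hedge) sketch under full-information feedback, with all names and parameters hypothetical and no attempt to reproduce any specific paper's algorithm:

```python
import math
import random

def hedge_bids(bid_grid, T, eta, reward_fn, rng=random.Random(0)):
    """Full-information Hedge over a discretized bid grid: each round,
    draw a bid from the current weight distribution, observe the reward
    of *every* grid point, and reweight exponentially.
    Guarantees O(sqrt(T log |B|)) regret for a suitable eta."""
    w = [1.0] * len(bid_grid)
    total = 0.0
    for t in range(T):
        s = sum(w)
        i = rng.choices(range(len(bid_grid)), [x / s for x in w])[0]
        rewards = [reward_fn(t, b) for b in bid_grid]  # full information
        total += rewards[i]
        # exponential reweighting toward higher-reward bids
        w = [x * math.exp(eta * r) for x, r in zip(w, rewards)]
    return total, w
```

Under bandit feedback only the played bid's reward is observed, which is why the bandit regret bounds above pick up an extra $\sqrt{|\mathcal{B}|}$-type factor.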
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.