Robust Learning of Optimal Auctions
- URL: http://arxiv.org/abs/2107.06259v1
- Date: Tue, 13 Jul 2021 17:37:21 GMT
- Title: Robust Learning of Optimal Auctions
- Authors: Wenshuo Guo, Michael I. Jordan, Manolis Zampetakis
- Abstract summary: We study the problem of learning revenue-optimal multi-bidder auctions from samples when the samples of bidders' valuations can be adversarially corrupted or drawn from distributions that are adversarially perturbed.
We propose new algorithms that can learn a mechanism whose revenue is nearly optimal simultaneously for all ``true distributions'' that are $\alpha$-close to the original distribution in Kolmogorov-Smirnov distance.
- Score: 84.13356290199603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning revenue-optimal multi-bidder auctions from
samples when the samples of bidders' valuations can be adversarially corrupted
or drawn from distributions that are adversarially perturbed. First, we prove
tight upper bounds on the revenue we can obtain with a corrupted distribution
under a population model, for both regular valuation distributions and
distributions with monotone hazard rate (MHR). We then propose new algorithms
that, given only an ``approximate distribution'' for the bidder's valuation,
can learn a mechanism whose revenue is nearly optimal simultaneously for all
``true distributions'' that are $\alpha$-close to the original distribution in
Kolmogorov-Smirnov distance. The proposed algorithms operate beyond the setting
of bounded distributions that have been studied in prior works, and are
guaranteed to obtain a fraction $1-O(\alpha)$ of the optimal revenue under the
true distribution when the distributions are MHR. Moreover, they are guaranteed
to yield at least a fraction $1-O(\sqrt{\alpha})$ of the optimal revenue when
the distributions are regular. We prove that these upper bounds cannot be
further improved, by providing matching lower bounds. Lastly, we derive sample
complexity upper bounds for learning a near-optimal auction for both MHR and
regular distributions.
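To make the $\alpha$-closeness notion concrete, here is a minimal Python sketch (function name hypothetical, not from the paper) of the empirical Kolmogorov-Smirnov distance between two samples of bidder valuations:

```python
from bisect import bisect_right

def ks_distance(samples_a, samples_b):
    """Empirical Kolmogorov-Smirnov distance between two samples:
    the largest absolute gap between their empirical CDFs.
    The supremum over the real line is attained at a sample point,
    so it suffices to check the pooled sample points."""
    a, b = sorted(samples_a), sorted(samples_b)
    gap = 0.0
    for x in a + b:
        cdf_a = bisect_right(a, x) / len(a)
        cdf_b = bisect_right(b, x) / len(b)
        gap = max(gap, abs(cdf_a - cdf_b))
    return gap
```

Under the paper's model, two valuation distributions are $\alpha$-close exactly when this quantity (taken between the true CDFs rather than empirical ones) is at most $\alpha$.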
Related papers
- Efficiently learning and sampling multimodal distributions with data-based initialization [20.575122468674536]
We consider the problem of sampling a multimodal distribution with a Markov chain given a small number of samples from the stationary measure.
We show that if the Markov chain has a $k$th order spectral gap, a data-based initialization from samples of the stationary distribution efficiently generates a sample whose conditional law is $\varepsilon$-close in TV distance to the stationary measure.
arXiv Detail & Related papers (2024-11-14T01:37:02Z)
- Gradual Domain Adaptation via Manifold-Constrained Distributionally Robust Optimization [0.4732176352681218]
This paper addresses the challenge of gradual domain adaptation within a class of manifold-constrained data distributions.
We propose a methodology rooted in Distributionally Robust Optimization (DRO) with an adaptive Wasserstein radius.
Our bounds rely on a newly introduced compatibility measure, which fully characterizes the error propagation dynamics along the sequence.
arXiv Detail & Related papers (2024-10-17T22:07:25Z)
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee, with explicit dimensional dependence, for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- New Classes of the Greedy-Applicable Arm Feature Distributions in the Sparse Linear Bandit Problem [34.51168440208439]
We consider the sparse contextual bandit problem where the arm feature affects the reward through the inner product with a sparse parameter vector.
Recent studies have developed sparsity-agnostic algorithms based on the greedy arm selection policy.
arXiv Detail & Related papers (2023-12-19T18:35:33Z)
- Stochastic Approximation Approaches to Group Distributionally Robust Optimization and Beyond [89.72693227960274]
This paper investigates group distributionally robust optimization (GDRO) with the goal of learning a model that performs well over $m$ different distributions.
To reduce the number of samples in each round from $m$ to 1, we cast GDRO as a two-player game, where one player performs mirror descent and the other executes an online algorithm for non-oblivious multi-armed bandits.
In the second scenario, we propose to optimize the average top-$k$ risk instead of the maximum risk, thereby mitigating the impact of any single worst-case distribution.
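The average top-$k$ risk mentioned above interpolates between the maximum risk and the mean risk; a minimal sketch (function name hypothetical):

```python
def average_top_k_risk(losses, k):
    """Average of the k largest per-distribution losses.
    k = 1 recovers the maximum risk; k = len(losses) the plain mean,
    so intermediate k softens sensitivity to a single bad distribution."""
    return sum(sorted(losses, reverse=True)[:k]) / k
```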
arXiv Detail & Related papers (2023-02-18T09:24:15Z)
- On Best-Arm Identification with a Fixed Budget in Non-Parametric Multi-Armed Bandits [0.0]
We consider general, possibly non-parametric, models $D$ for distributions over the arms.
We propose upper bounds on the average log-probability of misidentifying the optimal arm based on information-theoretic quantities.
arXiv Detail & Related papers (2022-09-30T10:55:40Z)
- Cooperative Distribution Alignment via JSD Upper Bound [7.071749623370137]
Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution.
This task has many applications including generative modeling, unsupervised domain adaptation, and socially aware learning.
We propose to unify and generalize previous flow-based approaches under a single non-adversarial framework.
arXiv Detail & Related papers (2022-07-05T20:09:03Z)
- Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by $1.29\%$, $1.45\%$, $0.69\%$ in anomaly detection false positive rate (FPR) and $3.24\%$, $4.06\%$, $7.89\%$ in in-distribution classification accuracy.
arXiv Detail & Related papers (2022-07-04T01:53:07Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
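The paper's method matches all orders of moments implicitly via learned statistics; as a simplified stand-in (names hypothetical, not the paper's loss), matching only the first few raw sample moments can be sketched as:

```python
def moment_matching_loss(pred, target, num_moments=3):
    """Squared difference between the first `num_moments` raw sample
    moments of two sets of return samples. A finite-moment, simplified
    analogue of the implicit all-order matching described above."""
    loss = 0.0
    for p in range(1, num_moments + 1):
        m_pred = sum(x ** p for x in pred) / len(pred)
        m_tgt = sum(x ** p for x in target) / len(target)
        loss += (m_pred - m_tgt) ** 2
    return loss
```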
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.