Equilibrium Learning in Combinatorial Auctions: Computing Approximate
Bayesian Nash Equilibria via Pseudogradient Dynamics
- URL: http://arxiv.org/abs/2101.11946v1
- Date: Thu, 28 Jan 2021 11:53:32 GMT
- Title: Equilibrium Learning in Combinatorial Auctions: Computing Approximate
Bayesian Nash Equilibria via Pseudogradient Dynamics
- Authors: Stefan Heidekrüger, Paul Sutterer, Nils Kohring, Maximilian Fichtl,
and Martin Bichler
- Abstract summary: We present a generic yet scalable alternative multi-agent equilibrium learning method.
We observe fast and robust convergence to approximate BNE in a wide variety of auctions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applications of combinatorial auctions (CA) as market mechanisms are
prevalent in practice, yet their Bayesian Nash equilibria (BNE) remain poorly
understood. Analytical solutions are known only for a few cases where the
problem can be reformulated as a tractable partial differential equation (PDE).
In the general case, finding BNE is known to be computationally hard. Previous
work on numerical computation of BNE in auctions has relied either on solving
such PDEs explicitly, calculating pointwise best-responses in strategy space,
or iteratively solving restricted subgames. In this study, we present a generic
yet scalable alternative multi-agent equilibrium learning method that
represents strategies as neural networks and applies policy iteration based on
gradient dynamics in self-play. Most auctions are ex-post nondifferentiable, so
gradients may be unavailable or misleading, and we rely on suitable
pseudogradient estimates instead. Although it is well-known that gradient
dynamics cannot guarantee convergence to NE in general, we observe fast and
robust convergence to approximate BNE in a wide variety of auctions and present
a sufficient condition for convergence.
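For intuition, the self-play pseudogradient loop described in the abstract can be sketched on a toy case: a two-bidder first-price auction with i.i.d. uniform values, where the strategy is a single linear slope parameter rather than a neural network. This is an illustrative simplification, not the paper's implementation; all function names and hyperparameters here are assumptions. The ex-post utility is nondifferentiable (the win/lose indicator is a step function), so the sketch uses an evolution-strategies-style pseudogradient with Gaussian smoothing, as the abstract suggests.

```python
import random

def utility(own_slope, opp_slope, seed, n_samples=500):
    """Monte Carlo estimate of expected utility in a two-bidder first-price
    auction with i.i.d. values ~ U[0, 1], when this bidder plays the linear
    strategy b(v) = own_slope * v against an opponent playing opp_slope * v."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        v, v_opp = rng.random(), rng.random()
        if own_slope * v > opp_slope * v_opp:  # highest bid wins, pays own bid
            total += v - own_slope * v
    return total / n_samples

def es_pseudogradient(slope, sigma=0.05, pairs=16, seed=0):
    """Evolution-strategies pseudogradient: the ex-post utility is
    nondifferentiable, so it is smoothed by Gaussian perturbation of the
    strategy parameter. Antithetic pairs evaluated on common random numbers
    keep the variance low; the opponent plays the current unperturbed
    strategy, i.e. self-play."""
    rng = random.Random(seed)
    grad = 0.0
    for _ in range(pairs):
        eps = rng.gauss(0.0, 1.0)
        crn = rng.randrange(10**9)  # common random numbers for both evaluations
        u_plus = utility(slope + sigma * eps, slope, seed=crn)
        u_minus = utility(slope - sigma * eps, slope, seed=crn)
        grad += eps * (u_plus - u_minus) / (2.0 * sigma)
    return grad / pairs

slope = 0.9  # start near truthful bidding
for step in range(100):
    slope += 0.4 * es_pseudogradient(slope, seed=step)
# The analytical symmetric BNE for this auction bids b(v) = v / 2,
# so the learned slope should approach 0.5.
print(round(slope, 2))
```

In this toy case the dynamics recover the known equilibrium shading factor of 1/2; the paper's contribution is making the same idea work at scale with neural strategies in combinatorial auctions, where no analytical solution is available for comparison.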
Related papers
- Using Uncertainty Quantification to Characterize and Improve Out-of-Domain Learning for PDEs [44.890946409769924]
Neural Operators (NOs) have emerged as particularly promising for uncertainty quantification.
We show that ensembling several NOs can identify high-error regions and provide good uncertainty estimates.
We then introduce Operator-ProbConserv, a method that uses these well-calibrated UQ estimates within the ProbConserv framework to update the model.
arXiv Detail & Related papers (2024-03-15T19:21:27Z) - Uncertainty Quantification for Forward and Inverse Problems of PDEs via
Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z) - Exploiting hidden structures in non-convex games for convergence to Nash
equilibrium [62.88214569402201]
A wide array of modern machine learning applications can be formulated as non-cooperative games whose solutions are Nash equilibria.
We provide explicit convergence guarantees for both deterministic and stochastic environments.
arXiv Detail & Related papers (2023-12-27T15:21:25Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - An Exponentially Converging Particle Method for the Mixed Nash
Equilibrium of Continuous Games [0.0]
We consider the problem of computing mixed Nash equilibria of two-player zero-sum games with continuous sets of pure strategies and with first-order access to the payoff function.
This problem arises for example in game-inspired machine learning applications, such as distributionally-robust learning.
We introduce and analyze a particle-based method that enjoys guaranteed local convergence for this problem.
arXiv Detail & Related papers (2022-11-02T17:03:40Z) - On NeuroSymbolic Solutions for PDEs [12.968529838140036]
Physics Informed Neural Networks (PINNs) have gained immense popularity as an alternate method for numerically solving PDEs.
In this work we explore a NeuroSymbolic approach to approximate the solution for PDEs.
We show that the NeuroSymbolic approximations are consistently 1-2 orders of magnitude better than purely neural or purely symbolic approximations.
arXiv Detail & Related papers (2022-07-11T16:04:20Z) - Clipped Stochastic Methods for Variational Inequalities with
Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods for solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Semi-Implicit Neural Solver for Time-dependent Partial Differential
Equations [4.246966726709308]
We propose a neural solver to learn an optimal iterative scheme in a data-driven fashion for any class of PDEs.
We provide theoretical guarantees for the correctness and convergence of neural solvers analogous to conventional iterative solvers.
arXiv Detail & Related papers (2021-09-03T12:03:10Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Bayesian neural networks for weak solution of PDEs with uncertainty
quantification [3.4773470589069473]
A new physics-constrained neural network (NN) approach is proposed to solve PDEs without labels.
We write the loss function of NNs based on the discretized residual of PDEs through an efficient, convolutional operator-based, and vectorized implementation.
We demonstrate the capability and performance of the proposed framework by applying it to steady-state diffusion, linear elasticity, and nonlinear elasticity.
arXiv Detail & Related papers (2021-01-13T04:57:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.