A Permutation-Equivariant Neural Network Architecture For Auction Design
- URL: http://arxiv.org/abs/2003.01497v4
- Date: Mon, 25 Oct 2021 15:41:56 GMT
- Title: A Permutation-Equivariant Neural Network Architecture For Auction Design
- Authors: Jad Rahme, Samy Jelassi, Joan Bruna, S. Matthew Weinberg
- Abstract summary: Design of an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design.
In this work, we consider auction design problems that have permutation-equivariant symmetry and construct a neural architecture that is capable of perfectly recovering the permutation-equivariant optimal mechanism.
- Score: 49.41561446069114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing an incentive compatible auction that maximizes expected revenue is
a central problem in Auction Design. Theoretical approaches to the problem have
hit some limits in the past decades and analytical solutions are known for only
a few simple settings. Computational approaches to the problem through the use
of LPs have their own set of limitations. Building on the success of deep
learning, a new approach was recently proposed by Duetting et al. (2019) in
which the auction is modeled by a feed-forward neural network and the design
problem is framed as a learning problem. The neural architectures used in that
work are general purpose and do not take advantage of any of the symmetries the
problem could present, such as permutation equivariance. In this work, we
consider auction design problems that have permutation-equivariant symmetry and
construct a neural architecture that is capable of perfectly recovering the
permutation-equivariant optimal mechanism, which we show is not possible with
the previous architecture. We demonstrate that permutation-equivariant
architectures are not only capable of recovering previous results, they also
have better generalization properties.
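The symmetry the abstract refers to can be illustrated with a small sketch. This is our own illustration of a generic DeepSets-style permutation-equivariant linear layer, not the paper's actual architecture: each row is transformed by a shared weight matrix plus a term computed from the mean over the set, so permuting the input rows permutes the output rows identically.

```python
import numpy as np

# A generic permutation-equivariant layer (illustrative, not the paper's
# architecture). For input X of shape (n, d):
#   f(X) = X @ W1 + mean(X) @ W2 + b
# Permuting the rows of X permutes the rows of f(X) in the same way.

rng = np.random.default_rng(0)

def equivariant_layer(X, W1, W2, b):
    # Per-element term plus a shared aggregate over the set dimension.
    return X @ W1 + X.mean(axis=0, keepdims=True) @ W2 + b

n, d_in, d_out = 5, 3, 4
W1 = rng.standard_normal((d_in, d_out))
W2 = rng.standard_normal((d_in, d_out))
b = rng.standard_normal(d_out)
X = rng.standard_normal((n, d_in))

# Check equivariance: applying the layer then permuting equals
# permuting then applying the layer.
perm = rng.permutation(n)
out_then_perm = equivariant_layer(X, W1, W2, b)[perm]
perm_then_out = equivariant_layer(X[perm], W1, W2, b)
assert np.allclose(out_then_perm, perm_then_out)
```

In an auction setting, the rows of `X` would correspond to bidders (or items), so such layers encode the fact that relabeling bidders should relabel the mechanism's allocations and payments in the same way.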
Related papers
- ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference [69.24516189971929]
In this paper, we introduce a new type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE)
While we still rely on continuous optimization to learn an ODE, the resulting inference machine is no longer a neural network.
arXiv Detail & Related papers (2024-03-16T02:07:45Z) - Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z) - Universal Neural Functionals [67.80283995795985]
A challenging problem in many modern machine learning tasks is to process weight-space features.
Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks.
This work proposes an algorithm that automatically constructs permutation equivariant models for any weight space.
arXiv Detail & Related papers (2024-02-07T20:12:27Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for Multi-Agent Learning [7.22614468437919]
We introduce a Permutation-Invariant Set Autoencoder (PISA)
PISA produces encodings with significantly lower reconstruction error than existing baselines.
We demonstrate its usefulness in a multi-agent application.
arXiv Detail & Related papers (2023-02-24T18:59:13Z) - Benefits of Permutation-Equivariance in Auction Mechanisms [90.42990121652956]
An auction mechanism that maximizes the auctioneer's revenue while minimizing bidders' ex-post regret is an important yet intricate problem in economics.
Remarkable progress has been achieved through learning the optimal auction mechanism by neural networks.
arXiv Detail & Related papers (2022-10-11T16:13:25Z) - A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Regularizing Towards Permutation Invariance in Recurrent Models [26.36835670113303]
We show that RNNs can be regularized towards permutation invariance, and that this can result in compact models.
Existing solutions mostly suggest restricting the learning problem to hypothesis classes which are permutation invariant by design.
We show that our method outperforms other permutation invariant approaches on synthetic and real world datasets.
arXiv Detail & Related papers (2020-10-25T07:46:51Z) - Joint learning of variational representations and solvers for inverse problems with partially-observed data [13.984814587222811]
In this paper, we design an end-to-end framework allowing to learn actual variational frameworks for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both stated as neural networks using automatic differentiation for the latter.
This leads to a data-driven discovery of variational models.
arXiv Detail & Related papers (2020-06-05T19:53:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.