EigenGame Unloaded: When playing games is better than optimizing
- URL: http://arxiv.org/abs/2102.04152v1
- Date: Mon, 8 Feb 2021 12:04:59 GMT
- Title: EigenGame Unloaded: When playing games is better than optimizing
- Authors: Ian Gemp and Brian McWilliams and Claire Vernade and Thore Graepel
- Abstract summary: We build on the recently proposed EigenGame, which views eigendecomposition as a competitive game.
- Score: 19.522120239876486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We build on the recently proposed EigenGame that views eigendecomposition as
a competitive game. EigenGame's updates are biased if computed using
minibatches of data, which hinders convergence and more sophisticated
parallelism in the stochastic setting. In this work, we propose an unbiased
stochastic update that is asymptotically equivalent to EigenGame, enjoys
greater parallelism allowing computation on datasets of larger sample sizes,
and outperforms EigenGame in experiments. We present applications to finding
the principal components of massive datasets and performing spectral clustering
of graphs. We analyze and discuss our proposed update in the context of
EigenGame and the shift in perspective from optimization to games.
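The abstract's central object is a per-vector ("per-player") update computed from minibatches of data. As a rough illustration only, the NumPy sketch below implements one generic EigenGame-style minibatch step for the top-k principal components: each column of V ascends its Rayleigh quotient under a penalty for aligning with higher-ranked columns, followed by a projection onto the sphere's tangent space and renormalization. The function name eigengame_minibatch_update, the exact penalty form, the step size, and the toy data are illustrative assumptions, not the paper's precise unbiased update.

```python
# Minimal sketch (not the authors' implementation) of an EigenGame-style
# minibatch update for the top-k principal components.
import numpy as np

def eigengame_minibatch_update(V, X, lr=0.05):
    """One update of all k players from a minibatch X of shape [b, d].

    V : array of shape [d, k]; unit-norm eigenvector estimates, columns
        ordered by player rank. Returns the updated V with unit-norm columns.
    """
    b = X.shape[0]
    AV = X.T @ (X @ V) / b            # minibatch estimate of A @ V, with A = E[x x^T]
    V_new = V.copy()
    for i in range(V.shape[1]):
        grad = AV[:, i].copy()
        for j in range(i):            # penalize alignment with higher-ranked players
            grad -= (V[:, j] @ AV[:, i]) * V[:, j]
        # Step on the unit sphere: drop the radial component, then renormalize.
        grad -= (grad @ V[:, i]) * V[:, i]
        v = V[:, i] + lr * grad
        V_new[:, i] = v / np.linalg.norm(v)
    return V_new

if __name__ == "__main__":
    # Toy data with a known spectrum, for a quick sanity check.
    rng = np.random.default_rng(0)
    d, k, n = 20, 4, 10_000
    X_full = rng.standard_normal((n, d)) @ np.diag(np.linspace(3.0, 0.5, d))
    V = np.linalg.qr(rng.standard_normal((d, k)))[0]   # random orthonormal init
    for _ in range(3000):
        batch = X_full[rng.choice(n, size=128, replace=False)]
        V = eigengame_minibatch_update(V, batch)
    # Compare against the exact top-k eigenvectors of the empirical covariance.
    A = X_full.T @ X_full / n
    exact = np.linalg.eigh(A)[1][:, ::-1][:, :k]
    print("per-component |cos| vs exact:", np.abs(np.sum(V * exact, axis=0)).round(3))
```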
Related papers
- CNN-based Game State Detection for a Foosball Table [1.612440288407791]
In the game of Foosball, a compact and comprehensive game state description consists of the positional shifts and rotations of the figures and the position of the ball over time.
In this paper, a figure detection system to determine the game state in Foosball is presented.
This dataset is utilized to train Convolutional Neural Network (CNN) based end-to-end regression models to predict the rotations and shifts of each rod.
arXiv Detail & Related papers (2024-04-08T09:48:02Z) - Auto-Encoding Bayesian Inverse Games [36.06617326128679]
We consider the inverse game problem, in which some properties of the game are unknown a priori.
Existing maximum likelihood estimation approaches to solve inverse games provide only point estimates of unknown parameters.
We take a Bayesian perspective and construct posterior distributions of game parameters with a structured variational autoencoder (VAE), which can be trained from an unlabeled dataset of observed interactions.
arXiv Detail & Related papers (2024-02-14T02:17:37Z) - SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
arXiv Detail & Related papers (2023-11-30T03:05:14Z) - On the Convergence of No-Regret Learning Dynamics in Time-Varying Games [89.96815099996132]
We characterize the convergence of optimistic gradient descent (OGD) in time-varying games.
Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games.
We also provide new insights on dynamic regret guarantees in static games.
arXiv Detail & Related papers (2023-01-26T17:25:45Z) - Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z) - A Closer Look at Debiased Temporal Sentence Grounding in Videos: Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflated evaluation caused by biased datasets.
arXiv Detail & Related papers (2022-03-10T08:58:18Z) - An algorithmic solution to the Blotto game using multi-marginal couplings [27.35514815248438]
We describe an efficient algorithm to compute solutions for the general two-player Blotto game on n battlefields with heterogeneous values.
Our algorithm samples from an ε-optimal solution in time O(n² + ε⁻⁴) independently of budgets and battlefield values.
In the case of asymmetric values where optimal solutions need not exist but Nash equilibria do, our algorithm samples from an ε-Nash equilibrium with similar complexity.
arXiv Detail & Related papers (2022-02-15T11:07:31Z) - Priming PCA with EigenGame [0.0]
We introduce primed-PCA, an extension of the recently proposed EigenGame algorithm for computing principal components in a large-scale setup.
Our algorithm first runs EigenGame to get an approximation of the principal components, and then applies an exact PCA in the subspace they span.
In our experiments we achieve improvements in convergence speed by factors of 5-25 on the datasets of the original EigenGame paper.
arXiv Detail & Related papers (2021-09-08T15:16:53Z) - Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback [51.21673420940346]
Combinatorial bandits generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.
We focus on the pure-exploration problem of identifying the best arm with fixed confidence, as well as a more general setting, where the structure of the answer set differs from the one of the action set.
Based on a projection-free online learning algorithm for finite polytopes, it is the first computationally efficient algorithm which is asymptotically optimal and has competitive empirical performance.
arXiv Detail & Related papers (2021-01-21T10:35:09Z) - Stochastic Optimization with Laggard Data Pipelines [65.20044914532221]
We show that "dataechoed" extensions of common optimization methods exhibit provable improvements over their synchronous counterparts.
Specifically, we show that in convex optimization with minibatches, data echoing affords speedups on the curvature-dominated part of the convergence rate, while maintaining the optimal statistical rate.
arXiv Detail & Related papers (2020-10-26T14:55:31Z) - Exponential Convergence of Gradient Methods in Concave Network Zero-sum Games [6.129776019898013]
We study the computation of Nash equilibria in concave network zero-sum games (NZSGs).
We show that various game theoretic properties of convex-concave two-player zero-sum games are preserved in this generalization.
arXiv Detail & Related papers (2020-07-10T16:56:56Z)