Optimality and Stability in Federated Learning: A Game-theoretic
Approach
- URL: http://arxiv.org/abs/2106.09580v1
- Date: Thu, 17 Jun 2021 15:03:51 GMT
- Title: Optimality and Stability in Federated Learning: A Game-theoretic
Approach
- Authors: Kate Donahue and Jon Kleinberg
- Abstract summary: We motivate and define a notion of optimality given by the average error rates among federating agents (players).
First, we show that for some regions of parameter space, all stable arrangements are optimal (Price of Anarchy equal to 1).
However, we show this is not true for all settings: there exist examples of stable arrangements with higher cost than optimal (Price of Anarchy greater than 1).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed learning paradigm where multiple agents,
each only with access to local data, jointly learn a global model. There has
recently been an explosion of research aiming not only to improve the accuracy
rates of federated learning, but also provide certain guarantees around social
good properties such as total error. One branch of this research has taken a
game-theoretic approach, and in particular, prior work has viewed federated
learning as a hedonic game, where error-minimizing players arrange themselves
into federating coalitions. This past work proves the existence of stable
coalition partitions, but leaves open a wide range of questions, including how
far from optimal these stable solutions are. In this work, we motivate and
define a notion of optimality given by the average error rates among federating
agents (players). First, we provide and prove the correctness of an efficient
algorithm to calculate an optimal (error minimizing) arrangement of players.
Next, we analyze the relationship between the stability and optimality of an
arrangement. First, we show that for some regions of parameter space, all
stable arrangements are optimal (Price of Anarchy equal to 1). However, we show
this is not true for all settings: there exist examples of stable arrangements
with higher cost than optimal (Price of Anarchy greater than 1). Finally, we
give the first constant-factor bound on the performance gap between stability
and optimality, proving that the total error of the worst stable solution can
be no higher than 9 times the total error of an optimal solution (Price of
Anarchy bound of 9).
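The abstract describes an arrangement as a partition of players into federating coalitions, with cost given by the players' average error. As a toy illustration of what "finding an optimal (error-minimizing) arrangement" means, the sketch below brute-forces all partitions of a small player set. The error model here is a made-up stand-in (a noise term shrinking with pooled sample count plus a heterogeneity penalty), not the paper's actual error formula, and the brute-force search is exponential, unlike the efficient algorithm the paper proves correct.

```python
def partitions(items):
    """Yield every set partition of `items` (Bell-number many; fine for small n)."""
    if not items:
        yield []
        return
    head, *tail = items
    for part in partitions(tail):
        # place `head` into each existing coalition in turn
        for i in range(len(part)):
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        # or let `head` federate alone
        yield [[head]] + part

def player_error(n_j, coalition, lam):
    """Illustrative error for a player with n_j samples in a coalition.

    First term: estimation noise falls as the coalition pools more data.
    Second term: a heterogeneity penalty growing as the player's share shrinks.
    This is a hypothetical stand-in for the paper's mean-estimation error model.
    """
    N = sum(coalition)
    return 1.0 / N + lam * (1.0 - n_j / N)

def average_error(partition, lam):
    """Cost of an arrangement: average error over all players (per the abstract)."""
    num_players = sum(len(c) for c in partition)
    total = sum(player_error(n_j, c, lam) for c in partition for n_j in c)
    return total / num_players

def optimal_arrangement(sample_counts, lam=0.5):
    """Brute-force the error-minimizing partition of players (given by sample count)."""
    return min(partitions(list(sample_counts)),
               key=lambda p: average_error(p, lam))
```

With no heterogeneity penalty (`lam=0`), pooling all data is best, so the grand coalition is optimal; with a large penalty, players are better off alone — the same tension between federating and defecting that drives the stability-versus-optimality gap in the abstract.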
Related papers
- Stable Matching with Ties: Approximation Ratios and Learning [20.84610303094647]
We study the problem of matching markets with ties, where one side of the market does not necessarily have strict preferences over members on the other side.
We aim to guarantee each worker the largest possible share from the utility in her best possible stable matching.
arXiv Detail & Related papers (2024-11-05T17:14:46Z) - Optimal Multi-Fidelity Best-Arm Identification [65.23078799972188]
In bandit best-arm identification, an algorithm is tasked with finding the arm with highest mean reward with a specified accuracy as fast as possible.
We study multi-fidelity best-arm identification, in which the learner can choose to sample an arm at a lower fidelity (a less accurate mean estimate) for a lower cost.
Several methods have been proposed for tackling this problem, but their optimality remains elusive, notably due to loose lower bounds on the total cost needed to identify the best arm.
arXiv Detail & Related papers (2024-06-05T08:02:40Z) - Distributed Fractional Bayesian Learning for Adaptive Optimization [7.16937736207894]
This paper considers a distributed adaptive optimization problem, where all agents only have access to their local cost functions with a common unknown parameter.
We aim to provide valuable insights for addressing parameter uncertainty in distributed optimization problems and simultaneously find the optimal solution.
arXiv Detail & Related papers (2024-04-17T13:09:33Z) - Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback [75.29048190099523]
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonicity assumptions.
In this paper, we design a fully adaptive OGD algorithm, AdaOGD, that does not require a priori knowledge of these parameters.
arXiv Detail & Related papers (2023-10-21T18:38:13Z) - Offline Learning in Markov Games with General Function Approximation [22.2472618685325]
We study offline multi-agent reinforcement learning (RL) in Markov games.
We provide the first framework for sample-efficient offline learning in Markov games.
arXiv Detail & Related papers (2023-02-06T05:22:27Z) - Fully Stochastic Trust-Region Sequential Quadratic Programming for
Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Understanding the Effect of Stochasticity in Policy Optimization [86.7574122154668]
We show that the preferability of optimization methods depends critically on whether exact gradients are used.
Second, to explain these findings we introduce the concept of committal rate for policy optimization.
Third, we show that in the absence of external oracle information, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely.
arXiv Detail & Related papers (2021-10-29T06:35:44Z) - Achieving Statistical Optimality of Federated Learning: Beyond
Stationary Points [19.891597817559038]
Federated Learning (FL) is a promising framework with great potential for privacy preservation and for lowering the computation load at the cloud.
Recent work raised concerns on two methods: (1) their fixed points do not correspond to the stationary points of the original optimization problem, and (2) the common model found might not generalize well locally.
We show, in the general kernel regression setting, that both FedAvg and FedProx converge to the minimax-optimal error rates.
arXiv Detail & Related papers (2021-06-29T09:59:43Z) - A Hybrid 2-stage Neural Optimization for Pareto Front Extraction [3.918940900258555]
A major obstacle to obtaining optimal trade-off solutions is that they do not always converge to each other.
We propose a two-stage approach that is accurate and cost-effective.
arXiv Detail & Related papers (2021-01-27T20:56:19Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.