A Field Guide to Federated Optimization
- URL: http://arxiv.org/abs/2107.06917v1
- Date: Wed, 14 Jul 2021 18:09:08 GMT
- Title: A Field Guide to Federated Optimization
- Authors: Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan
McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman
Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner,
Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew
Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi,
Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub
Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank
J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi
Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich,
Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu,
Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu,
Wennan Zhu
- Abstract summary: Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
- Score: 161.3779046812383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning and analytics are a distributed approach for
collaboratively learning models (or statistics) from decentralized data,
motivated by and designed for privacy protection. The distributed learning
process can be formulated as solving federated optimization problems, which
emphasize communication efficiency, data heterogeneity, compatibility with
privacy and system requirements, and other constraints that are not primary
considerations in other problem settings. This paper provides recommendations
and guidelines on formulating, designing, evaluating and analyzing federated
optimization algorithms through concrete examples and practical implementation,
with a focus on conducting effective simulations to infer real-world
performance. The goal of this work is not to survey the current literature, but
to inspire researchers and practitioners to design federated learning
algorithms that can be used in various practical applications.
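The federated optimization setting described above is commonly instantiated by federated averaging (FedAvg): clients run a few local gradient steps on their private data, and a server averages the resulting models. The following is a minimal illustrative sketch, not the paper's reference implementation; the scalar quadratic objectives and all function names are hypothetical stand-ins for real client losses.

```python
# Minimal FedAvg sketch on toy data: each client k holds a local quadratic
# objective f_k(w) = 0.5 * (w - c_k)^2, so the global optimum is the mean
# of the client optima c_k. Scalars keep the communication pattern visible.
import random

def local_update(w, c, steps=5, lr=0.1):
    # Gradient of 0.5 * (w - c)^2 is (w - c); run a few local SGD steps.
    for _ in range(steps):
        w = w - lr * (w - c)
    return w

def fedavg(client_optima, rounds=50, clients_per_round=None):
    w = 0.0  # global model, initialized at the server
    k = clients_per_round or len(client_optima)
    for _ in range(rounds):
        sampled = random.sample(client_optima, k)
        # Each sampled client starts from the current global model
        # and returns its locally updated copy.
        updates = [local_update(w, c) for c in sampled]
        w = sum(updates) / len(updates)  # server-side averaging
    return w

# With full client participation, the global model converges to the
# mean of the client optima.
w_final = fedavg([1.0, 2.0, 3.0, 6.0])
```

Sampling only a subset of clients per round (`clients_per_round`) mirrors the partial-participation regime that the paper highlights as a key difference from centralized optimization.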
Related papers
- A Tutorial on the Design, Experimentation and Application of Metaheuristic Algorithms to Real-World Optimization Problems [16.890440704820367]
Despite decades of advances in the design and use of metaheuristics, major difficulties remain regarding the understandability, soundness of algorithmic design, and verifiability of the performance of new technical contributions.
This work proposes a set of good practices that should be followed when conducting studies of metaheuristic methods for optimization.
arXiv Detail & Related papers (2024-10-04T07:41:23Z)
- Hierarchical Bayes Approach to Personalized Federated Unsupervised Learning [7.8583640700306585]
We develop algorithms based on optimization criteria inspired by a hierarchical Bayesian statistical framework.
We develop adaptive algorithms that discover the balance between using limited local data and collaborative information.
We evaluate our proposed algorithms using synthetic and real data, demonstrating the effective sample amplification for personalized tasks.
arXiv Detail & Related papers (2024-02-19T20:53:27Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption in interactive imitation learning that intervening experts must be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z) - UNIDEAL: Curriculum Knowledge Distillation Federated Learning [17.817181326740698]
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNIDEAL achieves superior performance in terms of both model accuracy and communication efficiency.
arXiv Detail & Related papers (2023-09-16T11:30:29Z) - Federated Compositional Deep AUC Maximization [58.25078060952361]
We develop a novel federated learning method for imbalanced data by directly optimizing the area under curve (AUC) score.
To the best of our knowledge, this is the first work to achieve such favorable theoretical results.
arXiv Detail & Related papers (2023-04-20T05:49:41Z) - Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z) - Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm: the suboptimality of the learned policies is comparable to the rate achieved as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z) - USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization
Problems [9.015720257837575]
We formulate the task as regression between spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs.
For learning foundations, we present learning-error analysis under the PAC-Bayesian framework.
We obtain highly encouraging experimental results for several classic problems on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-07-15T17:59:08Z) - Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study PFL for saddle point problems encompassing a broader range of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis of the smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-06-14T10:36:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.