Federated Learning Based on Dynamic Regularization
- URL: http://arxiv.org/abs/2111.04263v2
- Date: Tue, 9 Nov 2021 16:37:10 GMT
- Title: Federated Learning Based on Dynamic Regularization
- Authors: Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina,
Paul N. Whatmough, Venkatesh Saligrama
- Abstract summary: We propose a novel federated learning method for distributively training neural network models.
The server orchestrates cooperation between a subset of randomly chosen devices in each round.
- Score: 43.137064459520886
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel federated learning method for distributively training
neural network models, where the server orchestrates cooperation between a
subset of randomly chosen devices in each round. We view the Federated Learning
problem primarily from a communication perspective and allow more device-level
computation to save transmission costs. We point out a fundamental dilemma, in
that the minima of the local-device level empirical loss are inconsistent with
those of the global empirical loss. Unlike recent prior works, which
either attempt inexact minimization or utilize devices for parallelizing
gradient computation, we propose a dynamic regularizer for each device at each
round, so that in the limit the global and device solutions are aligned. We
demonstrate both through empirical results on real and synthetic data as well
as analytical results that our scheme leads to efficient training, in both
convex and non-convex settings, while being fully agnostic to device
heterogeneity and robust to a large number of devices, partial participation, and
unbalanced data.
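The abstract describes the mechanism (a per-device, per-round dynamic regularizer) but not the concrete updates. The following is a minimal numpy sketch of a FedDyn-style round on toy linear regression, assuming the commonly cited form of the local objective L_k(theta) - <g_k, theta> + (alpha/2)||theta - theta_prev||^2 and the matching server correction; the data, names, and hyperparameters are illustrative, not the paper's experimental setup.
```python
import numpy as np

rng = np.random.default_rng(0)
n_devices, dim, rounds = 10, 5, 200
alpha, lr, local_steps = 0.1, 0.05, 20

# Heterogeneous toy data: each device's linear-regression data has its own optimum.
X = [rng.normal(size=(30, dim)) for _ in range(n_devices)]
y = [x @ rng.normal(size=dim) for x in X]

def local_grad(theta, k):
    # Gradient of device k's empirical loss 0.5 * ||X_k theta - y_k||^2 / n_k.
    return X[k].T @ (X[k] @ theta - y[k]) / len(y[k])

theta = np.zeros(dim)            # server (global) model
h = np.zeros(dim)                # server correction state
g = np.zeros((n_devices, dim))   # per-device dynamic-regularizer state

for t in range(rounds):
    active = rng.choice(n_devices, size=n_devices // 2, replace=False)
    local_models = []
    for k in active:
        th = theta.copy()
        for _ in range(local_steps):
            # Local objective: L_k(th) - <g_k, th> + (alpha/2)*||th - theta||^2,
            # whose gradient is local_grad - g_k + alpha*(th - theta).
            th -= lr * (local_grad(th, k) - g[k] + alpha * (th - theta))
        g[k] -= alpha * (th - theta)   # dynamic-regularizer state update
        local_models.append(th)
    # Server correction keeps the average of local minimizers consistent
    # with the global empirical loss as training proceeds.
    h -= alpha * sum(th - theta for th in local_models) / n_devices
    theta = np.mean(local_models, axis=0) - h / alpha
```
The linear term g_k is what counteracts client drift caused by heterogeneous local minima, which is how the device-level and global solutions become aligned in the limit, per the abstract's claim.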
Related papers
- Coordination-free Decentralised Federated Learning on Complex Networks:
Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with this heterogeneity.
Our solution allows devices communicating only with their direct neighbours to train an accurate model.
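As a rough illustration of neighbour-only training (not the paper's exact DFL algorithm), the sketch below interleaves a local gradient step with gossip averaging over direct neighbours on a ring; the topology, data, and rates are placeholders.
```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim, rounds, lr = 6, 4, 50, 0.1
# Ring topology: each node can reach only its two direct neighbours.
neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
models = rng.normal(size=(n_nodes, dim))
optima = rng.normal(size=(n_nodes, dim))   # stand-in for heterogeneous local data

for _ in range(rounds):
    # Local step on each node's own (toy quadratic) loss.
    models -= lr * (models - optima)
    # Gossip step: average only with direct neighbours; no central coordinator.
    models = np.array([
        (models[i] + sum(models[j] for j in neighbours[i])) / (1 + len(neighbours[i]))
        for i in range(n_nodes)
    ])
```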
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable optimization problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Personalized Federated Learning with Communication Compression [5.389294754404344]
We equip our Loopless Gradient Descent (L2GD) algorithm with a bidirectional communication protocol.
Our algorithm operates on a probabilistic communication protocol, where communication does not happen on a fixed schedule.
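A minimal sketch of such a probabilistic schedule, in the spirit of the underlying L2GD method: at each step a coin flip decides between a local gradient step and an averaging step. The compression mechanics of the paper are omitted, and the probability, step size, and penalty below are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(2)
n_devices, dim, steps = 5, 3, 300
p, lr, lam = 0.1, 0.05, 0.5              # communication prob., step size, penalty
optima = rng.normal(size=(n_devices, dim))   # toy per-device optima
x = np.zeros((n_devices, dim))               # personalized local models

for _ in range(steps):
    if rng.random() < p:
        # Communication event (probability p): pull every local model toward
        # the average, enforcing partial agreement across devices.
        x -= (lr * lam / p) * (x - x.mean(axis=0))
    else:
        # No communication: local gradient step on the local (toy) loss,
        # rescaled by 1/(1-p) so the expected step is unbiased.
        x -= (lr / (1 - p)) * (x - optima)
```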
arXiv Detail & Related papers (2022-09-12T11:08:44Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide a comprehensive theoretical analysis, including robustness analysis, convergence analysis, and an analysis of generalization ability.
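For context, the "local mixup" ingredient named in the title is the standard mixup augmentation applied to a client's own data; a minimal sketch follows (the paper's distributionally robust objective and analysis are not shown, and the data here is synthetic).
```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=(32, 10))                              # toy local inputs
y = rng.integers(0, 2, size=(32, 1)).astype(float)         # toy soft labels

lam = rng.beta(0.2, 0.2, size=(32, 1))   # per-sample mixing coefficients
perm = rng.permutation(32)               # pair each sample with another one
x_mix = lam * x + (1 - lam) * x[perm]    # convex combination of inputs
y_mix = lam * y + (1 - lam) * y[perm]    # matching combination of labels
```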
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Fast Federated Learning in the Presence of Arbitrary Device Unavailability [26.368873771739715]
Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when devices drop out of the training process beyond the control of the central server.
We propose Memory-augmented Impatient Federated Averaging (MIFA) to solve this problem.
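A hedged sketch of the memory idea behind MIFA: the server caches each device's most recent update and substitutes the cached (stale) update whenever the device is unavailable, so no device's data distribution is silently dropped. Names, losses, and rates below are placeholders, not the paper's exact algorithm.
```python
import numpy as np

rng = np.random.default_rng(3)
n_devices, dim, rounds, lr = 8, 4, 100, 0.2
targets = rng.normal(size=(n_devices, dim))   # toy heterogeneous local optima
theta = np.zeros(dim)                         # global model
memory = np.zeros((n_devices, dim))           # last known update per device

for _ in range(rounds):
    # Arbitrary availability: each device is online with probability 0.5.
    online = rng.random(n_devices) < 0.5
    for k in np.flatnonzero(online):
        # Fresh update: negative gradient of the toy loss 0.5*||theta - t_k||^2.
        memory[k] = -(theta - targets[k])
    # Aggregate over ALL devices, reusing stale updates for offline ones.
    theta += lr * memory.mean(axis=0)
```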
arXiv Detail & Related papers (2021-06-08T07:46:31Z)
- Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation [24.084053136210027]
We develop a sampling methodology based on graph convolutional networks (GCNs).
We find that our methodology, while sampling less than 5% of all devices, substantially outperforms conventional federated learning (FedL) in both trained model accuracy and required resource utilization.
arXiv Detail & Related papers (2021-01-04T05:59:50Z)
- Federated learning with class imbalance reduction [24.044750119251308]
Federated learning (FL) is a technique that enables a large number of edge computing devices to collaboratively train a global learning model.
Due to privacy concerns, the raw data on devices cannot be made available to a centralized server.
In this paper, an estimation scheme is designed to reveal the class distribution without access to the raw data.
arXiv Detail & Related papers (2020-11-23T08:13:43Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
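As a generic illustration of the compressive-sensing idea (not the paper's MIMO-specific design), a sparse model update can be transmitted as a small number of random projections and reconstructed at the server by sparse recovery; the dimensions and the OMP recovery below are illustrative choices.
```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
dim, n_meas, sparsity = 200, 60, 5

update = np.zeros(dim)   # a sparse local model update to be transmitted
update[rng.choice(dim, size=sparsity, replace=False)] = rng.normal(size=sparsity)

A = rng.normal(size=(n_meas, dim)) / np.sqrt(n_meas)   # random measurement matrix
y = A @ update           # the device sends only n_meas numbers, not dim

# Server-side sparse recovery of the full update from the compressed vector.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
recovered = omp.fit(A, y).coef_
print(np.allclose(recovered, update, atol=1e-8))   # exact recovery on this toy
```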
arXiv Detail & Related papers (2020-03-18T05:56:27Z)