Asynchronous Federated Learning with Reduced Number of Rounds and with
Differential Privacy from Less Aggregated Gaussian Noise
- URL: http://arxiv.org/abs/2007.09208v1
- Date: Fri, 17 Jul 2020 19:47:16 GMT
- Title: Asynchronous Federated Learning with Reduced Number of Rounds and with
Differential Privacy from Less Aggregated Gaussian Noise
- Authors: Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc
Tran-Dinh, Phuong Ha Nguyen
- Abstract summary: We propose a new algorithm for asynchronous federated learning which eliminates waiting times and reduces overall network communication.
We provide rigorous theoretical analysis for strongly convex objective functions and provide simulation results.
- Score: 26.9902939745173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The feasibility of federated learning is highly constrained by the
server-client infrastructure in terms of network communication. Most newly
launched smartphones and IoT devices are equipped with GPUs or sufficient
computing hardware to run powerful AI models. However, in the original
synchronous federated learning, client devices suffer waiting times, and regular
communication between clients and the server is required. This makes training more
sensitive to local model training times and to irregular or missed updates; hence,
scalability to large numbers of clients is limited and convergence rates measured
in real time suffer. We propose a new algorithm for
asynchronous federated learning which eliminates waiting times and reduces
overall network communication - we provide rigorous theoretical analysis for
strongly convex objective functions and provide simulation results. By adding
Gaussian noise we show how our algorithm can be made differentially private --
new theorems show how the aggregated added Gaussian noise is significantly
reduced.
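Read as described, the abstract suggests an update rule of roughly the following shape. The sketch below is only an illustration under assumptions of ours (the function names, the staleness down-weighting, and all hyperparameters are hypothetical), not the authors' actual algorithm: each client runs local SGD on its own schedule, clips and perturbs its model delta with Gaussian noise locally, and the server applies updates as they arrive instead of waiting for a synchronized round.

```python
import numpy as np

def dp_local_round(w_global, data_batches, grad_fn, lr=0.01, clip=1.0, sigma=0.5,
                   rng=None):
    """One asynchronous client round (illustrative sketch, not the paper's
    exact algorithm): run several local SGD steps, clip the model delta,
    and add Gaussian noise locally before sending it to the server."""
    rng = rng or np.random.default_rng()
    w = w_global.copy()
    for x, y in data_batches:               # several local steps per round ->
        w -= lr * grad_fn(w, x, y)          # fewer communication rounds overall
    delta = w - w_global
    delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))  # bound sensitivity
    delta += rng.normal(0.0, sigma * clip, size=delta.shape)   # Gaussian DP noise
    return delta

def server_apply(w_global, delta, staleness, base_weight=1.0):
    """Server side: apply each client's update as soon as it arrives (no
    waiting for a synchronous round), optionally down-weighting stale updates."""
    return w_global + (base_weight / (1.0 + staleness)) * delta
```

Because the noise is added on the client before aggregation, fewer rounds and fewer aggregations translate into less accumulated Gaussian noise at the server, which is the effect the new theorems quantify.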
Related papers
- Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks [55.467288506826755]
Federated learning (FL) has been recognized as a viable solution for local-privacy-aware collaborative model training in wireless edge networks.
Most existing communication-efficient FL algorithms fail to reduce the significant inter-device variance.
We propose a novel communication-efficient FL algorithm, named FedQVR, which relies on a sophisticated variance-reduced scheme.
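The summary only says that FedQVR relies on a variance-reduced, quantized scheme; the paper's actual construction is not given here. As a generic, hypothetical illustration of how a SCAFFOLD-style control variate can be combined with unbiased stochastic quantization before upload, consider the sketch below (all names and parameters are assumptions, not FedQVR itself).

```python
import numpy as np

def stochastic_quantize(v, levels=16, rng=None):
    """Unbiased stochastic quantization to a small number of levels,
    so the expectation equals the input while using fewer bits uplink."""
    rng = rng or np.random.default_rng()
    scale = np.max(np.abs(v)) + 1e-12
    u = np.abs(v) / scale * levels
    q = np.floor(u) + (rng.random(v.shape) < (u - np.floor(u)))
    return np.sign(v) * q * scale / levels

def variance_reduced_update(local_grad, local_control, global_control):
    """Correct the local gradient with control variates (to shrink
    inter-device variance), then quantize the corrected update for upload."""
    corrected = local_grad - local_control + global_control
    return stochastic_quantize(corrected)
```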
arXiv Detail & Related papers (2025-01-20T04:26:21Z)
- Federated Split Learning with Model Pruning and Gradient Quantization in Wireless Networks [7.439160287320074]
Federated split learning (FedSL) implements collaborative training across the edge devices and the server through model splitting.
We propose a lightweight FedSL scheme that further alleviates the training burden on resource-constrained edge devices.
We conduct theoretical analysis to quantify the convergence performance of the proposed scheme.
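Model splitting in federated split learning means the forward pass is divided between device and server: the device computes the first layers and uploads the cut-layer activations, and the server finishes the pass and returns the gradient at the cut. The snippet below is a minimal single-layer-per-side sketch under assumed shapes and names, ignoring the pruning and quantization details, and is not the scheme proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(0, 0.1, (32, 8))    # device-side layer (could be pruned)
W_server = rng.normal(0, 0.1, (8, 1))     # server-side layer

def client_forward(x):
    """Device computes its part and uploads the cut-layer activations
    (in practice these could additionally be quantized to save bandwidth)."""
    return np.maximum(x @ W_client, 0.0)

def server_step(h, y, lr=0.1):
    """Server finishes the forward pass, updates its own layer, and sends
    back the gradient at the cut so the device can update its layer too."""
    global W_server
    pred = h @ W_server
    g_pred = 2.0 * (pred - y) / len(y)     # d(MSE)/d(pred)
    g_h = g_pred @ W_server.T              # downlink gradient w.r.t. h
    W_server -= lr * (h.T @ g_pred)
    return g_h
```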
arXiv Detail & Related papers (2024-12-09T11:43:03Z)
- FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices using a Computing Power Aware Scheduler [5.550660753625296]
Cross-silo federated learning offers a promising solution to collaboratively train AI models without compromising privacy of local datasets.
In this paper, we introduce an innovative semi-asynchronous federated learning algorithm, FedCompass, with a computing power aware scheduler on the server side.
We demonstrate that FedCompass achieves faster convergence and higher accuracy than other algorithms when performing federated learning on heterogeneous clients.
arXiv Detail & Related papers (2023-09-26T05:03:13Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
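The entry describes an architecture split into a pruned global part shared across devices and a personalized part kept on-device. As a minimal illustration under assumptions of ours (hypothetical shapes and helper names, not the paper's actual method), the server would aggregate only the masked global parts while each personalized head stays local:

```python
import numpy as np

def make_client_model(d_in=16, d_rep=8, seed=0):
    """Hypothetical split: a shared representation layer (aggregated, pruned)
    plus a personalized head that never leaves the device."""
    rng = np.random.default_rng(seed)
    return {"global": rng.normal(0, 0.1, (d_in, d_rep)),
            "personal": rng.normal(0, 0.1, (d_rep, 1))}

def prune_mask(w, keep_ratio=0.5):
    """Magnitude pruning: keep only the largest-magnitude global weights."""
    thresh = np.quantile(np.abs(w), 1.0 - keep_ratio)
    return (np.abs(w) >= thresh).astype(w.dtype)

def aggregate_global(client_globals, mask):
    """Server averages only the (pruned) global parts; personalized heads
    are fine-tuned on their own devices and never uploaded."""
    return np.mean(client_globals, axis=0) * mask
```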
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- FAVANO: Federated AVeraging with Asynchronous NOdes [14.412305295989444]
We propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource constrained environments.
arXiv Detail & Related papers (2023-05-25T14:30:17Z)
- Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization [26.559267845906746]
Federated learning (FL) is a useful tool in distributed machine learning that utilizes users' local datasets in a privacy-preserving manner.
In this paper, we provide a novel convergence analysis that yields a convergence bound for the algorithm.
We also develop a new selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a power constraint.
arXiv Detail & Related papers (2022-01-19T23:25:24Z)
- Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z)
- Wireless Federated Learning with Limited Communication and Differential Privacy [21.328507360172203]
This paper investigates the role of dimensionality reduction in efficient communication and differential privacy (DP) of the local datasets at the remote users for over-the-air computation (AirComp)-based federated learning (FL) model.
arXiv Detail & Related papers (2021-06-01T15:23:12Z)
- Edge Federated Learning Via Unit-Modulus Over-The-Air Computation (Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)