Asynchronous Federated Learning with Reduced Number of Rounds and with
Differential Privacy from Less Aggregated Gaussian Noise
- URL: http://arxiv.org/abs/2007.09208v1
- Date: Fri, 17 Jul 2020 19:47:16 GMT
- Title: Asynchronous Federated Learning with Reduced Number of Rounds and with
Differential Privacy from Less Aggregated Gaussian Noise
- Authors: Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc
Tran-Dinh, Phuong Ha Nguyen
- Abstract summary: We propose a new algorithm for asynchronous federated learning which eliminates waiting times and reduces overall network communication.
We provide a rigorous theoretical analysis for strongly convex objective functions and present simulation results.
- Score: 26.9902939745173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The feasibility of federated learning is highly constrained by the
server-client infrastructure in terms of network communication. Most newly
launched smartphones and IoT devices are equipped with GPUs or sufficient
computing hardware to run powerful AI models. However, in the original
synchronous federated learning, client devices suffer waiting times, and
regular communication between clients and the server is required. This makes
training more sensitive to local model training times and to irregular or
missed updates; hence, scalability to large numbers of clients is limited and
convergence rates measured in real time suffer. We propose a new algorithm for
asynchronous federated learning which eliminates waiting times and reduces
overall network communication; we provide a rigorous theoretical analysis for
strongly convex objective functions and present simulation results. By adding
Gaussian noise, we show how our algorithm can be made differentially private;
new theorems show that the aggregated added Gaussian noise is significantly
reduced.
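The abstract describes two mechanisms: clients push updates to the server without a synchronization barrier, and each update is clipped and perturbed with Gaussian noise before leaving the device. The Python sketch below illustrates both ideas on a toy strongly convex least-squares problem. It is a hedged illustration, not the authors' algorithm: the constants (clipping bound CLIP, noise scale SIGMA, learning rate LR), the update rule, and the arrival order are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, CLIP, SIGMA, LR = 5, 1.0, 0.05, 0.05  # illustrative constants only

def local_update(w, X, y, steps=10):
    """A few local SGD steps on 0.5 * ||X w - y||^2; returns the new model."""
    w = w.copy()
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (X[i] @ w - y[i]) * X[i]   # per-sample least-squares gradient
        w -= LR * grad
    return w

def privatize(delta):
    """Gaussian mechanism: clip the update to norm CLIP, then add noise."""
    delta = delta * min(1.0, CLIP / (np.linalg.norm(delta) + 1e-12))
    return delta + rng.normal(0.0, SIGMA * CLIP, size=delta.shape)

# Synthetic data for 4 clients drawn around a common optimum w_true.
w_true = rng.normal(size=DIM)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, DIM))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

# Asynchronous loop: updates arrive one at a time in arbitrary client order
# and the server applies each immediately -- no waiting for a full round.
w_global = np.zeros(DIM)
for k in rng.integers(0, len(clients), size=300):
    X, y = clients[k]
    delta = privatize(local_update(w_global, X, y) - w_global)
    w_global += delta  # server applies the privatized update on arrival

print("distance to optimum:", np.linalg.norm(w_global - w_true))
```

Because each client privatizes its own update, the server only ever sees noisy deltas; the paper's theorems concern how much aggregated Gaussian noise such a scheme needs, which this sketch does not attempt to reproduce. A real deployment would also have clients training on stale copies of the global model, which this simplified loop glosses over.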
Related papers
- FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous
Client Devices using a Computing Power Aware Scheduler [5.550660753625296]
Cross-silo federated learning offers a promising solution to collaboratively train AI models without compromising the privacy of local datasets.
In this paper, we introduce FedCompass, an innovative semi-asynchronous federated learning algorithm with a computing power-aware scheduler on the server side.
We demonstrate that FedCompass achieves faster convergence and higher accuracy than other asynchronous algorithms when performing federated learning on heterogeneous clients.
arXiv Detail & Related papers (2023-09-26T05:03:13Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
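As a loose sketch of the split this entry describes (an assumed structure, not the paper's implementation): parameters are partitioned into a shared "body" that is pruned and aggregated across devices, and a per-device "head" that never leaves the client. All shapes and the pruning ratio below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def new_model():
    # Hypothetical shapes: 'body' is shared and pruned, 'head' stays local.
    return {"body": rng.normal(size=(8, 8)), "head": rng.normal(size=8)}

def prune(w, keep=0.5):
    """Magnitude pruning: zero out the smallest (1 - keep) fraction of weights."""
    thresh = np.quantile(np.abs(w), 1.0 - keep)
    return np.where(np.abs(w) >= thresh, w, 0.0)

clients = [new_model() for _ in range(3)]

# The server aggregates only the pruned shared part; heads never leave devices.
global_body = prune(np.mean([c["body"] for c in clients], axis=0))
for c in clients:
    c["body"] = global_body.copy()            # sync the shared representation
    c["head"] += 0.01 * rng.normal(size=8)    # stand-in for local fine-tuning

print("nonzero shared weights:", int(np.count_nonzero(global_body)))
```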
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - FedDCT: A Dynamic Cross-Tier Federated Learning Scheme in Wireless
Communication Networks [1.973745731206255]
Federated Learning (FL) enables the training of a global model among clients without exposing local data.
We propose a novel dynamic cross-tier FL scheme, named FedDCT, to increase training accuracy and performance in wireless communication networks.
arXiv Detail & Related papers (2023-07-10T08:54:07Z) - FAVANO: Federated AVeraging with Asynchronous NOdes [14.412305295989444]
We propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments.
arXiv Detail & Related papers (2023-05-25T14:30:17Z) - Communication-Efficient Device Scheduling for Federated Learning Using
Stochastic Optimization [26.559267845906746]
Federated learning (FL) is a useful tool in distributed machine learning that utilizes users' local datasets in a privacy-preserving manner.
In this paper, we provide a novel convergence analysis of FL with device scheduling and derive a corresponding convergence bound.
We also develop a new selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a power constraint.
arXiv Detail & Related papers (2022-01-19T23:25:24Z) - Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Wireless Federated Learning with Limited Communication and Differential
Privacy [21.328507360172203]
This paper investigates the role of dimensionality reduction in communication efficiency and differential privacy (DP) of the local datasets at the remote users for an over-the-air computation (AirComp)-based federated learning (FL) model.
arXiv Detail & Related papers (2021-06-01T15:23:12Z) - Edge Federated Learning Via Unit-Modulus Over-The-Air Computation
(Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z) - Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
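FOLB's exact selection rule is not given in this summary; as a generic stand-in for per-round device sampling, the sketch below picks devices with probability proportional to a hypothetical importance score (here, recent update norms). The scores, round counts, and sampling rule are all assumptions, not FOLB's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generic importance-sampling illustration: devices with larger recent
# update norms are more likely to be selected for the next training round.
update_norms = np.array([0.2, 1.5, 0.7, 0.1, 0.9])   # hypothetical scores
probs = update_norms / update_norms.sum()

ROUNDS, PER_ROUND = 3, 2
for r in range(ROUNDS):
    chosen = rng.choice(len(probs), size=PER_ROUND, replace=False, p=probs)
    print(f"round {r}: train on devices {sorted(chosen.tolist())}")
```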
arXiv Detail & Related papers (2020-07-26T14:37:51Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)