Multi-task Federated Edge Learning (MtFEEL) in Wireless Networks
- URL: http://arxiv.org/abs/2108.02517v2
- Date: Sun, 8 Aug 2021 13:59:47 GMT
- Title: Multi-task Federated Edge Learning (MtFEEL) in Wireless Networks
- Authors: Sawan Singh Mahara, Shruti M., B. N. Bharath
- Abstract summary: Federated Learning (FL) has evolved as a promising technique to handle distributed machine learning across edge devices.
A novel communication efficient FL algorithm for personalised learning in a wireless setting with guarantees is presented.
- Score: 1.9250873974729816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has evolved as a promising technique to handle
distributed machine learning across edge devices. A single neural network (NN)
that optimises a global objective is generally learned in most work in FL,
which could be suboptimal for edge devices. Although works finding a NN
personalised for edge device specific tasks exist, they lack generalisation
and/or convergence guarantees. In this paper, a novel communication efficient
FL algorithm for personalised learning in a wireless setting with guarantees is
presented. The algorithm relies on finding a "better" empirical estimate of
losses at each device, using a weighted average of the losses across different
devices. It is devised from a Probably Approximately Correct (PAC) bound on the
true loss in terms of the proposed empirical loss and is bounded by (i) the
Rademacher complexity, (ii) the discrepancy, and (iii) a penalty term. Using a
signed gradient feedback to find a personalised NN at each device, it is also
proven to converge over a Rayleigh flat-fading uplink channel, at a rate of
the order of max{1/SNR, 1/sqrt(T)}. Experimental results show that the proposed
algorithm outperforms locally trained devices as well as the conventionally
used FedAvg and FedSGD algorithms under practical SNR regimes.
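The two ideas in the abstract, a weighted average of per-device empirical losses and 1-bit signed-gradient feedback over a noisy uplink, can be illustrated with a short sketch. The snippet below is a minimal illustration and not the authors' implementation: it assumes a linear model with squared-error loss, uniform mixing weights `alpha` (the paper instead derives the weights from its PAC-style bound with Rademacher-complexity, discrepancy and penalty terms), and a crude bit-flip stand-in `noisy_uplink` for the Rayleigh flat-fading channel; all of these names and modelling choices are assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation) of two ideas from the
# abstract: (i) each device evaluates a weighted average of the losses
# computed across devices, and (ii) devices send only the sign of their
# gradients over a noisy uplink (signSGD-style 1-bit feedback).
import numpy as np

rng = np.random.default_rng(0)

def per_device_losses(w, datasets):
    """Squared-error loss of a linear model w on each device's (X, y) data."""
    return np.array([np.mean((X @ w - y) ** 2) for X, y in datasets])

def weighted_empirical_loss(w, datasets, alpha_k):
    """Device k's surrogate loss: a weighted average of all devices' losses."""
    return alpha_k @ per_device_losses(w, datasets)

def sign_gradient(w, X, y):
    """Sign of the local gradient of the squared-error loss (1-bit feedback)."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return np.sign(grad)

def noisy_uplink(signs, snr_db):
    """Crude stand-in for a fading uplink: flip each sign bit with a
    probability that shrinks as the SNR grows (illustrative model only)."""
    snr = 10.0 ** (snr_db / 10.0)
    flip_prob = 0.5 / (1.0 + snr)
    flips = rng.random(signs.shape) < flip_prob
    return np.where(flips, -signs, signs)

# Toy federated setup: 4 devices, each with its own linear regression task.
d, n = 5, 200
datasets = []
for _ in range(4):
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    datasets.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

# Uniform mixing weights; the paper instead learns device-specific weights.
alpha = np.full(len(datasets), 1.0 / len(datasets))

w = np.zeros(d)
lr, snr_db = 0.01, 10.0
for t in range(500):
    # Server aggregates the 1-bit feedback received over the noisy uplink;
    # a majority-vote sign update is used here as an illustrative choice.
    received = [noisy_uplink(sign_gradient(w, X, y), snr_db) for X, y in datasets]
    w -= lr * np.sign(np.sum(received, axis=0))

print("weighted empirical loss:", weighted_empirical_loss(w, datasets, alpha))
```

Lowering `snr_db` in this toy model increases the bit-flip probability and visibly slows convergence, which mirrors the max{1/SNR, 1/sqrt(T)} behaviour described in the abstract, though the exact aggregation and channel model in the paper may differ.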
Related papers
- Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method [14.986031916712108]
Cross-device federated learning (FL) is a growing machine learning framework whereby multiple edge devices collaborate to train a model without disclosing their raw data.
We show how to harness the wireless channel in the learning algorithm itself, instead of analysing it to remove its impact.
arXiv Detail & Related papers (2024-01-30T21:46:09Z) - FedNAR: Federated Optimization with Normalized Annealing Regularization [54.42032094044368]
We explore choices of weight decay and find that its value appreciably influences the convergence of existing FL algorithms.
We develop Federated optimization with Normalized Annealing Regularization (FedNAR), a plug-in that can be seamlessly integrated into any existing FL algorithms.
arXiv Detail & Related papers (2023-10-04T21:11:40Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - A Communication-Efficient Distributed Gradient Clipping Algorithm for
Training Deep Neural Networks [11.461878019780597]
Gradient Descent might converge slowly in some deep neural networks.
It remains unclear whether a gradient clipping scheme can take advantage of multiple machines to enjoy parallel speedup.
arXiv Detail & Related papers (2022-05-10T16:55:33Z) - CoCoFL: Communication- and Computation-Aware Federated Learning via
Partial NN Freezing and Quantization [3.219812767529503]
We present a novel FL technique, CoCoFL, which maintains the full NN structure on all devices.
CoCoFL efficiently utilizes the available resources on devices and allows constrained devices to make a significant contribution to the FL system.
arXiv Detail & Related papers (2022-03-10T16:45:05Z) - Large-scale Optimization of Partial AUC in a Range of False Positive
Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximate gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Communication-Efficient Stochastic Zeroth-Order Optimization for
Federated Learning [28.65635956111857]
Federated learning (FL) enables edge devices to collaboratively train a global model without sharing their private data.
To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to zeroth-order methods.
arXiv Detail & Related papers (2022-01-24T08:56:06Z) - Fast Federated Learning in the Presence of Arbitrary Device
Unavailability [26.368873771739715]
Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when devices drop out of the training process beyond the control of the central server.
We propose Impatient Federated Averaging (MIFA) to solve this problem.
arXiv Detail & Related papers (2021-06-08T07:46:31Z) - Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds, both in practice and in theory.
Experiments on several datasets demonstrate the effectiveness of our method and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)