Precision-Weighted Federated Learning
- URL: http://arxiv.org/abs/2107.09627v1
- Date: Tue, 20 Jul 2021 17:17:10 GMT
- Title: Precision-Weighted Federated Learning
- Authors: Jonatan Reyes, Lisa Di Jorio, Cecile Low-Kam and Marta Kersten-Oertel
- Abstract summary: We propose a novel algorithm that takes into account the variance of the gradients when computing the weighted average of the parameters of models trained in a Federated Learning setting.
Our method was evaluated using standard image classification datasets with two different data partitioning strategies (IID/non-IID) to measure the performance and speed of our method in resource-constrained environments.
- Score: 1.8160945635344528
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning using the Federated Averaging algorithm has shown great
advantages for large-scale applications that rely on collaborative learning,
especially when the training data is either unbalanced or inaccessible due to
privacy constraints. We hypothesize that Federated Averaging underestimates the
full extent of heterogeneity of data when the aggregation is performed. We
propose Precision-weighted Federated Learning, a novel algorithm that takes into
account the variance of the stochastic gradients when computing the weighted
average of the parameters of models trained in a Federated Learning setting.
With Precision-weighted Federated Learning, we provide an alternate averaging
scheme that leverages the heterogeneity of the data when it has a large
diversity of features in its composition. Our method was evaluated using
standard image classification datasets with two different data partitioning
strategies (IID/non-IID) to measure the performance and speed of our method in
resource-constrained environments, such as mobile and IoT devices. We obtained
a good balance between computational efficiency and convergence rates with
Precision-weighted Federated Learning. Our performance evaluations show 9%
better predictions with MNIST, 18% with Fashion-MNIST, and 5% with CIFAR-10 in
the non-IID setting. Further reliability evaluations confirm the stability of
our method, reaching a 99% reliability index with IID partitions and 96% with
non-IID partitions. In addition, we obtained a 20x speedup on Fashion-MNIST
with only 10 clients and up to 37x with 100 clients participating in the
aggregation concurrently per communication round. The results indicate that
Precision-weighted Federated Learning is an effective and faster alternative
approach for aggregating private data, especially in domains where data is
highly heterogeneous.
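The aggregation idea admits a compact illustration. The sketch below is an illustrative interpretation, not the authors' reference implementation: each client reports its locally trained parameters along with an estimate of its stochastic-gradient variance, and the server weights each client by its precision (the inverse of that variance), normalized across clients; plain sample-count-weighted Federated Averaging is shown for contrast. The per-parameter variance estimates, the eps smoothing term, and the toy data are assumptions.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Baseline Federated Averaging: weight each client's parameters by
    its share of the total number of training samples."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

def precision_weighted_avg(client_params, client_grad_vars, eps=1e-8):
    """Illustrative precision-weighted aggregation: each client is weighted
    by the inverse of its estimated stochastic-gradient variance (its
    precision), normalized elementwise across clients, so noisier clients
    contribute less. The variance estimator and the eps smoothing term are
    assumptions, not the paper's exact scheme."""
    precisions = [1.0 / (v + eps) for v in client_grad_vars]
    total = sum(precisions)  # elementwise sum of precisions over clients
    return sum((prec / total) * p for prec, p in zip(precisions, client_params))

# Toy usage: three clients holding identically shaped parameter vectors.
rng = np.random.default_rng(0)
params = [rng.normal(size=4) for _ in range(3)]
sizes = [100, 300, 600]                               # samples per client
grad_vars = [np.full(4, v) for v in (0.1, 0.5, 2.0)]  # assumed per-parameter variance estimates

print("FedAvg:            ", fedavg(params, sizes))
print("Precision-weighted:", precision_weighted_avg(params, grad_vars))
```

Under this weighting, clients whose gradient estimates are noisier contribute less to the global model, which is one way to account for the data heterogeneity the abstract describes.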
Related papers
- Enhancing Federated Learning Convergence with Dynamic Data Queue and Data Entropy-driven Participant Selection [13.825031686864559]
Federated Learning (FL) is a decentralized approach for collaborative model training on edge devices.
We present a method to improve convergence in FL by creating a global subset of data on the server and dynamically distributing it across devices.
Our approach results in a substantial accuracy boost of approximately 5% for the MNIST dataset, around 18% for CIFAR-10, and 20% for CIFAR-100 with a 10% global subset of data, outperforming the state-of-the-art (SOTA) aggregation algorithms.
arXiv Detail & Related papers (2024-10-23T11:47:04Z)
- Decoupled Federated Learning on Long-Tailed and Non-IID data with Feature Statistics [20.781607752797445]
We propose a two-stage Decoupled Federated Learning framework using Feature Statistics (DFL-FS).
In the first stage, the server estimates the client's class coverage distributions through masked local feature statistics clustering.
In the second stage, DFL-FS employs federated feature regeneration based on global feature statistics to enhance the model's adaptability to long-tailed data distributions.
arXiv Detail & Related papers (2024-03-13T09:24:59Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip that improves the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
arXiv Detail & Related papers (2022-12-14T13:57:01Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning [4.02923738318937]
Uneven distribution of local data across different edge devices (clients) results in slow model training and accuracy reduction in federated learning.
This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew.
We propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor.
arXiv Detail & Related papers (2022-08-04T04:24:16Z)
- Federated Learning Under Intermittent Client Availability and Time-Varying Communication Constraints [29.897785907692644]
Federated learning systems operate in settings with intermittent client availability and/or time-varying communication constraints.
We propose F3AST, an unbiased algorithm that learns an availability-dependent client selection strategy.
We show up to 186% and 8% accuracy improvements over FedAvg, and 8% and 7% over FedAdam on CIFAR100 and Shakespeare, respectively.
arXiv Detail & Related papers (2022-05-13T16:08:58Z)
- Federated Learning with Heterogeneous Data: A Superquantile Optimization Approach [0.0]
We present a federated learning framework that is designed to robustly deliver good performance across individual clients with heterogeneous data.
The proposed approach hinges upon a superquantile-based learning objective that captures the tail statistics of the error.
arXiv Detail & Related papers (2021-12-17T11:00:23Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.