Resource-Efficient Federated Learning
- URL: http://arxiv.org/abs/2111.01108v1
- Date: Mon, 1 Nov 2021 17:21:07 GMT
- Title: Resource-Efficient Federated Learning
- Authors: Ahmed M. Abdelmoniem and Atal Narayan Sahu and Marco Canini and Suhaib
A. Fahmy
- Abstract summary: Federated Learning (FL) enables distributed training by learners using local data.
It presents numerous challenges relating to the heterogeneity of the data distribution, device capabilities, and participant availability as deployments scale.
- Score: 3.654036881216688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) enables distributed training by learners using local
data, thereby enhancing privacy and reducing communication. However, it
presents numerous challenges relating to the heterogeneity of the data
distribution, device capabilities, and participant availability as deployments
scale, which can impact both model convergence and bias. Existing FL schemes
use random participant selection to improve fairness; however, this can result
in inefficient use of resources and lower quality training. In this work, we
systematically address the question of resource efficiency in FL, showing the
benefits of intelligent participant selection, and incorporation of updates
from straggling participants. We demonstrate how these factors enable resource
efficiency while also improving trained model quality.
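The core ideas of the abstract, utility-driven participant selection and folding in updates from stragglers, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the client fields `last_loss` and `speed`, the mixing weight `alpha`, and the staleness discount are all illustrative assumptions.

```python
def select_participants(clients, k, alpha=0.5):
    """Rank clients by a utility score mixing statistical utility
    (recent local loss) and system utility (update speed).

    `clients` maps client id -> dict with hypothetical keys
    'last_loss' and 'speed'; both names are illustrative.
    """
    def utility(cid):
        info = clients[cid]
        return alpha * info["last_loss"] + (1 - alpha) * info["speed"]
    return sorted(clients, key=utility, reverse=True)[:k]

def aggregate(round_updates, stale_updates, stale_discount=0.5):
    """FedAvg-style weighted average that also incorporates straggler
    updates, down-weighted by a fixed staleness discount."""
    weighted = [(w, 1.0) for w in round_updates]
    weighted += [(w, stale_discount) for w in stale_updates]
    total = sum(wt for _, wt in weighted)
    dim = len(weighted[0][0])
    return [sum(w[i] * wt for w, wt in weighted) / total
            for i in range(dim)]
```

Selecting on utility rather than uniformly at random avoids spending rounds on clients that contribute little, while the discounted aggregation recovers work from stragglers instead of discarding it.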
Related papers
- PFedDST: Personalized Federated Learning with Decentralized Selection Training [8.21688083335571]
We introduce the Personalized Federated Learning with Decentralized Selection Training (PFedDST) framework.
PFedDST enhances model training by allowing devices to strategically evaluate and select peers based on a comprehensive communication score.
Our experiments demonstrate that PFedDST not only enhances model accuracy but also accelerates convergence.
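The peer-selection idea in PFedDST can be sketched as a scored ranking. The score fields (`similarity`, `bandwidth`) and their weights here are hypothetical stand-ins for the paper's comprehensive communication score, not its exact formula.

```python
def communication_score(peer, w_sim=0.6, w_bw=0.4):
    """Composite score combining model similarity with link bandwidth.

    The fields 'similarity' and 'bandwidth' and the weights are
    illustrative assumptions, not PFedDST's actual definition.
    """
    return w_sim * peer["similarity"] + w_bw * peer["bandwidth"]

def select_peers(peers, k):
    """Each device keeps the k peers with the highest score."""
    return sorted(peers, key=communication_score, reverse=True)[:k]
```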
arXiv Detail & Related papers (2025-02-11T18:25:48Z)
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z)
- Online Client Scheduling and Resource Allocation for Efficient Federated Edge Learning [9.451084740123198]
Federated learning (FL) enables edge devices to collaboratively train a machine learning model without sharing their raw data.
However, deploying FL over mobile edge networks with constrained resources such as power and bandwidth suffers from high training latency and low model accuracy.
This paper investigates the optimal client scheduling and resource allocation for FL over mobile edge networks under resource constraints and uncertainty.
arXiv Detail & Related papers (2024-09-29T01:56:45Z)
- Seamless Integration: Sampling Strategies in Federated Learning Systems [0.0]
Federated Learning (FL) represents a paradigm shift in the field of machine learning.
The seamless integration of new clients is imperative to sustain and enhance the performance of FL systems.
This paper outlines effective client selection strategies and solutions for ensuring system scalability and stability.
arXiv Detail & Related papers (2024-08-18T17:16:49Z)
- Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning [89.21177894013225]
For a federated learning model to perform well, it is crucial to have a diverse and representative dataset.
We show that the statistical criterion used to quantify the diversity of the data, as well as the choice of the federated learning algorithm used, has a significant effect on the resulting equilibrium.
We leverage this to design simple optimal federated learning mechanisms that encourage data collectors to contribute data representative of the global population.
arXiv Detail & Related papers (2023-06-08T23:38:25Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
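The propagation idea, transferring robustness by sharing batch-normalization statistics, can be sketched minimally as copying BN running statistics from an adversarially trained model into a non-robust one. The dict layout (`running_mean`, `running_var` per layer) is an assumption for illustration, not the paper's exact mechanism.

```python
def propagate_bn_stats(robust_bn, clean_model):
    """Overwrite a non-robust model's BatchNorm running statistics
    with those from an adversarially trained peer.

    `robust_bn` and `clean_model` map layer name -> dict with
    hypothetical keys 'running_mean' and 'running_var'.
    """
    for layer, stats in robust_bn.items():
        clean_model[layer]["running_mean"] = list(stats["running_mean"])
        clean_model[layer]["running_var"] = list(stats["running_var"])
    return clean_model
```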
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
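The Shapley value underlying the federated variant above can be sketched for a small client set. This computes the exact (exponential-time) Shapley value given some coalition utility function; the paper's federated SV additionally exploits the round structure of FL training, which is omitted here for brevity.

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley value over a small set of clients.

    `utility` maps a set of clients to the value of the model trained
    on their combined data (any real-valued set function works here).
    """
    n = len(clients)
    phi = {c: 0.0 for c in clients}
    for c in clients:
        rest = [x for x in clients if x != c]
        for r in range(len(rest) + 1):
            for coalition in combinations(rest, r):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                marginal = utility(set(coalition) | {c}) - utility(set(coalition))
                phi[c] += weight * marginal
    return phi
```

For an additive utility, each client's Shapley value reduces to its own contribution, which is a quick sanity check on the implementation.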
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.