CoCoFL: Communication- and Computation-Aware Federated Learning via
Partial NN Freezing and Quantization
- URL: http://arxiv.org/abs/2203.05468v3
- Date: Wed, 28 Jun 2023 15:36:27 GMT
- Title: CoCoFL: Communication- and Computation-Aware Federated Learning via
Partial NN Freezing and Quantization
- Authors: Kilian Pfeiffer, Martin Rapp, Ramin Khalili, Jörg Henkel
- Abstract summary: We present a novel FL technique, CoCoFL, which maintains the full NN structure on all devices.
CoCoFL efficiently utilizes the available resources on devices and allows constrained devices to make a significant contribution to the FL system.
- Score: 3.219812767529503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Devices participating in federated learning (FL) typically have heterogeneous
communication, computation, and memory resources. However, in synchronous FL,
all devices need to finish training by the same deadline dictated by the
server. Our results show that training a smaller subset of the neural network
(NN) at constrained devices, i.e., dropping neurons/filters as proposed by
the state of the art, is inefficient, preventing these devices from making an effective
contribution to the model. This causes unfairness w.r.t. the achievable
accuracies of constrained devices, especially in cases with a skewed
distribution of class labels across devices. We present a novel FL technique,
CoCoFL, which maintains the full NN structure on all devices. To adapt to the
devices' heterogeneous resources, CoCoFL freezes and quantizes selected layers,
reducing communication, computation, and memory requirements, whereas other
layers are still trained in full precision, enabling the model to reach a high accuracy.
Thereby, CoCoFL efficiently utilizes the available resources on devices and
allows constrained devices to make a significant contribution to the FL system,
increasing fairness among participants (accuracy parity) and significantly
improving the final accuracy of the model.
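The listing contains no code, but the mechanism described above is easy to illustrate. Below is a minimal PyTorch sketch, assuming a toy nn.Sequential model, of freezing a configured subset of layers and holding their weights in reduced precision while the remaining layers stay trainable in full precision; the `fake_quantize` helper and the layer selection are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def fake_quantize(t: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniform fake quantization: round to num_bits levels, then dequantize.
    A stand-in for the integer arithmetic a real constrained device would use."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = t.abs().max().clamp(min=1e-8) / qmax
    return torch.clamp(torch.round(t / scale), -qmax - 1, qmax) * scale

def configure_partial_freezing(model: nn.Sequential, frozen: set, num_bits: int = 8):
    """Freeze (and fake-quantize) the named sub-modules; train the rest in full precision."""
    for name, module in model.named_children():
        freeze = name in frozen
        for p in module.parameters():
            p.requires_grad_(not freeze)
            if freeze:
                p.data = fake_quantize(p.data, num_bits)

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
# A constrained device might freeze (and quantize) the first two linear layers:
configure_partial_freezing(model, frozen={"0", "2"})

# Only the still-trainable parameters are optimized and later uploaded to the server.
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
```

On a real device the frozen layers would additionally execute in integer arithmetic and their parameters would not be uploaded, which is where the communication, computation, and memory savings come from.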
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
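The summary does not spell out the event-triggered rule; as a loose illustration only, the sketch below shows a client that uploads its update only when it has drifted sufficiently from the last transmitted one. The threshold test and the `EventTriggeredClient` interface are assumptions and do not reproduce the paper's SAGA-based algorithm or its multi-server aggregation.

```python
import numpy as np

class EventTriggeredClient:
    """Upload an update only when it differs enough from the last transmitted one.
    Illustrative only; the paper's scheme is SAGA-based and multi-server."""
    def __init__(self, dim: int, threshold: float = 0.1):
        self.last_sent = np.zeros(dim)
        self.threshold = threshold

    def maybe_upload(self, local_update: np.ndarray):
        if np.linalg.norm(local_update - self.last_sent) > self.threshold:
            self.last_sent = local_update.copy()
            return local_update   # event triggered: send to the assigned server
        return None               # event not triggered: save uplink bandwidth
```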
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
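A minimal sketch of such a split, assuming a toy model in which a shared feature extractor is pruned and communicated while a per-device head stays local; the magnitude-based pruning rule and the module names are illustrative, not the paper's method.

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Shared (pruned) feature extractor plus a personalized head kept on-device."""
    def __init__(self):
        super().__init__()
        self.global_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # shared with all devices
        self.personal_part = nn.Linear(64, 10)                          # fine-tuned per device

    def forward(self, x):
        return self.personal_part(self.global_part(x))

def prune_global_part(model: SplitModel, keep_ratio: float = 0.5):
    """Illustrative magnitude pruning: zero the smallest weights of the shared part."""
    for p in model.global_part.parameters():
        if p.dim() > 1:
            k = int(p.numel() * (1 - keep_ratio))
            if k > 0:
                threshold = p.abs().flatten().kthvalue(k).values
                p.data[p.abs() <= threshold] = 0.0

def upload_payload(model: SplitModel):
    """Only the (pruned) global part is communicated; the personal head never leaves the device."""
    return {k: v.clone() for k, v in model.global_part.state_dict().items()}
```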
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Aggregating Capacity in FL through Successive Layer Training for
Computationally-Constrained Devices [3.4530027457862]
Federated learning (FL) is usually performed on resource-constrained edge devices.
The FL training process has to be adjusted to such constraints.
We propose a new method that enables successive freezing and training of the parameters of the FL model at devices.
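A rough sketch of successive freezing, assuming a simple schedule in which each round trains one block of an nn.Sequential model and freezes the rest; the actual schedule and aggregation in the paper may differ.

```python
import torch.nn as nn

def set_trainable_window(model: nn.Sequential, round_idx: int, window: int = 1):
    """Successively move which block is trained: blocks outside the current window
    are frozen. The simple round-robin schedule here is an illustrative choice."""
    blocks = list(model.children())
    start = (round_idx * window) % len(blocks)
    active = set(range(start, min(start + window, len(blocks))))
    for i, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad_(i in active)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
set_trainable_window(model, round_idx=0)  # round 0: only the first block is trained
```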
arXiv Detail & Related papers (2023-05-26T15:04:06Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL, however, remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
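The selection criterion is not given in the summary; purely as an illustration of online selection under a fixed storage budget, the sketch below keeps the highest-scored samples (scoring by per-sample loss or any other heuristic is a placeholder, not the paper's data-valuation rule).

```python
import heapq

class LimitedStorageBuffer:
    """Keep at most `capacity` samples, preferring those with the highest score.
    The scoring function is a placeholder for the paper's selection criterion."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []       # min-heap of (score, counter, sample)
        self._counter = 0

    def offer(self, sample, score: float):
        self._counter += 1
        item = (score, self._counter, sample)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif score > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)   # evict the lowest-scored sample

    def samples(self):
        return [s for _, _, s in self._heap]
```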
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
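Superposition training can be illustrated as updating one set of weights with a weighted sum of losses evaluated at several widths. The sketch below does this for two widths of a toy slimmable layer; the width fractions, loss weights, and per-width heads are assumptions, and the superposition-coding uplink is not shown.

```python
import torch
import torch.nn as nn

class SlimmableLinear(nn.Linear):
    """A linear layer that can be evaluated at a fraction of its output width."""
    def forward_at(self, x, width: float = 1.0):
        out = int(self.out_features * width)
        return nn.functional.linear(x, self.weight[:out], self.bias[:out])

# One superposition-training step: the same parameters receive gradients from a
# weighted sum of the losses obtained at half width and at full width.
layer = SlimmableLinear(32, 64)
head_full, head_half = nn.Linear(64, 10), nn.Linear(32, 10)
opt = torch.optim.SGD([*layer.parameters(), *head_full.parameters(), *head_half.parameters()], lr=0.1)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss_full = nn.functional.cross_entropy(head_full(layer.forward_at(x, 1.0)), y)
loss_half = nn.functional.cross_entropy(head_half(layer.forward_at(x, 0.5)), y)
(0.5 * loss_full + 0.5 * loss_half).backward()
opt.step()
```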
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge
Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
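A minimal sketch of a local-global combiner, assuming a simple convex combination with a fixed mixing weight `lam`; the paper optimizes this combination jointly with the resource constraints, which is not reproduced here.

```python
import numpy as np

def combine_local_global(local_w: np.ndarray, global_w: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Convex combination of the device's local model and the broadcast global model,
    used as the starting point of the next local training step (illustrative)."""
    return lam * local_w + (1.0 - lam) * global_w

w_start = combine_local_global(local_w=np.ones(4), global_w=np.zeros(4), lam=0.3)
```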
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Training SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z) - On the Tradeoff between Energy, Precision, and Accuracy in Federated
Quantized Neural Networks [68.52621234990728]
Federated learning (FL) over wireless networks requires balancing between accuracy, energy efficiency, and precision.
We propose a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission.
Our framework can reduce energy consumption by up to 53% compared to a standard FL model.
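As a rough illustration of representing an update with a finite number of bits before the uplink (the quantizer below is a plain uniform one; the paper's energy model and precision selection are not reproduced):

```python
import numpy as np

def quantize_update(update: np.ndarray, num_bits: int = 4):
    """Deterministic uniform quantization of a model update to `num_bits` per entry."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = float(np.max(np.abs(update)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(update / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale                      # the server reconstructs the update as q * scale

q, scale = quantize_update(np.random.randn(1000))
payload_bits = q.size * 4 + 32           # 4 bits per entry plus the scale, vs. 32 bits per float
```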
arXiv Detail & Related papers (2021-11-15T17:00:03Z) - Fast Federated Learning in the Presence of Arbitrary Device
Unavailability [26.368873771739715]
Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when devices drop out of the training process beyond the control of the central server.
We propose Memory-augmented Impatient Federated Averaging (MIFA) to solve this problem.
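A minimal sketch of the memory-augmented averaging idea, assuming a simple server interface: the server stores the most recent update of every device and averages over all of them, so devices that are unavailable in a round still contribute through their stored (stale) updates.

```python
import numpy as np

class MemoryAugmentedServer:
    """Aggregate over the latest stored update of every device, available or not."""
    def __init__(self, num_devices: int, dim: int):
        self.memory = np.zeros((num_devices, dim))   # last-known update per device

    def aggregate(self, fresh_updates: dict) -> np.ndarray:
        for device_id, update in fresh_updates.items():
            self.memory[device_id] = update          # overwrite with this round's fresh updates
        return self.memory.mean(axis=0)              # stale entries stand in for absent devices
```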
arXiv Detail & Related papers (2021-06-08T07:46:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.