CyclicFL: A Cyclic Model Pre-Training Approach to Efficient Federated Learning
- URL: http://arxiv.org/abs/2301.12193v2
- Date: Thu, 5 Sep 2024 12:10:06 GMT
- Title: CyclicFL: A Cyclic Model Pre-Training Approach to Efficient Federated Learning
- Authors: Pengyu Zhang, Yingbo Zhou, Ming Hu, Xian Wei, Mingsong Chen
- Abstract summary: Federated learning (FL) has been proposed to enable distributed learning on Artificial Intelligence Internet of Things (AIoT) devices with guarantees of high-level data privacy.
Existing FL methods suffer from both slow convergence and poor accuracy, especially in non-IID scenarios.
We propose a novel method named CyclicFL, which can quickly derive effective initial models to guide the SGD processes.
- Score: 33.250038477336425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has been proposed to enable distributed learning on Artificial Intelligence Internet of Things (AIoT) devices with guarantees of high-level data privacy. Since random initial models in FL can easily result in unregulated Stochastic Gradient Descent (SGD) processes, existing FL methods greatly suffer from both slow convergence and poor accuracy, especially in non-IID scenarios. To address this problem, we propose a novel method named CyclicFL, which can quickly derive effective initial models to guide the SGD processes, thus improving the overall FL training performance. We formally analyze the significance of data consistency between the pre-training and training stages of CyclicFL, showing the limited Lipschitzness of loss for the pre-trained models by CyclicFL. Moreover, we systematically prove that our method can achieve faster convergence speed under various convexity assumptions. Unlike traditional centralized pre-training methods that require public proxy data, CyclicFL pre-trains initial models on selected AIoT devices cyclically without exposing their local data. Therefore, they can be easily integrated into any security-critical FL methods. Comprehensive experimental results show that CyclicFL can not only improve the maximum classification accuracy by up to $14.11\%$ but also significantly accelerate the overall FL training process.
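To make the idea above concrete, below is a minimal sketch of cyclic pre-training followed by a standard FedAvg round, assuming a PyTorch setup; the model, client data loaders, and hyperparameters are illustrative placeholders rather than the authors' implementation.

```python
# Illustrative sketch of cyclic pre-training (CyclicFL-style), not the authors' code.
# A model is passed from one selected device to the next and trained briefly on each
# device's local data, so no raw data leaves the devices; the resulting model then
# serves as the initial model for ordinary federated training (e.g., FedAvg).
import copy
import torch
import torch.nn.functional as F


def cyclic_pretrain(model, client_loaders, selected_ids, local_steps=5, lr=0.01):
    """Pass the model cyclically through selected devices for brief local training."""
    for cid in selected_ids:                           # visit devices one after another
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        data_iter = iter(client_loaders[cid])
        for _ in range(local_steps):                   # a few SGD steps per device
            try:
                x, y = next(data_iter)
            except StopIteration:
                break
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model                                       # used as the FL initial model


def fedavg_round(global_model, client_loaders, client_ids, local_steps=5, lr=0.01):
    """One standard FedAvg round starting from the (pre-trained) global model."""
    states = []
    for cid in client_ids:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        data_iter = iter(client_loaders[cid])
        for _ in range(local_steps):
            try:
                x, y = next(data_iter)
            except StopIteration:
                break
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```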
Related papers
- SEAFL: Enhancing Efficiency in Semi-Asynchronous Federated Learning through Adaptive Aggregation and Selective Training [26.478852701376294]
We present SEAFL, a novel FL framework designed to mitigate both the straggler and the stale model challenges in semi-asynchronous FL.
SEAFL dynamically assigns weights to uploaded models during aggregation based on their staleness and importance to the current global model.
We evaluate the effectiveness of SEAFL through extensive experiments on three benchmark datasets.
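The staleness- and importance-based weighting described above can be sketched as follows; the decay function and the importance score are assumptions for illustration, not SEAFL's actual aggregation rule.

```python
# Illustrative staleness-aware aggregation (SEAFL-style sketch), not the paper's exact rule.
# Each uploaded update carries a staleness (global rounds elapsed since it was pulled)
# and an importance score; older and less important updates receive smaller weights.
import torch


def aggregate(global_state, uploads, alpha=0.5):
    """uploads: list of (state_dict, staleness, importance) tuples."""
    weights = []
    for _, staleness, importance in uploads:
        w = importance / (1.0 + staleness) ** alpha   # assumed decay; hypothetical form
        weights.append(w)
    total = sum(weights)
    new_state = {}
    for key, g in global_state.items():
        acc = torch.zeros_like(g, dtype=torch.float32)
        for (state, _, _), w in zip(uploads, weights):
            acc += (w / total) * state[key].float()
        new_state[key] = acc
    return new_state
```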
arXiv Detail & Related papers (2025-02-22T05:13:53Z) - Feasible Learning [78.6167929413604]
We introduce Feasible Learning (FL), a sample-centric learning paradigm where models are trained by solving a feasibility problem that bounds the loss for each training sample.
Our empirical analysis, spanning image classification, age regression, and preference optimization in large language models, demonstrates that models trained via FL can learn from data while displaying improved tail behavior compared to ERM, with only a marginal impact on average performance.
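A minimal sketch of the sample-centric idea, assuming a per-sample loss bound epsilon and a simple penalty on violations; the actual feasibility solver used in the paper may differ.

```python
# Illustrative surrogate for Feasible Learning: enforce loss_i <= epsilon per sample
# by penalizing only the excess over the bound (a simplification of the feasibility
# problem; the paper's solver may differ).
import torch
import torch.nn.functional as F


def feasibility_step(model, opt, x, y, epsilon=0.1):
    per_sample = F.cross_entropy(model(x), y, reduction="none")  # loss per example
    violation = torch.relu(per_sample - epsilon)                 # only violated samples
    opt.zero_grad()
    violation.mean().backward()
    opt.step()
    return (per_sample <= epsilon).float().mean()                # fraction feasible
```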
arXiv Detail & Related papers (2025-01-24T20:39:38Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
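A small sketch of the layer-wise aggregation idea: each layer of the global model is averaged over whichever clients produced an update for it, and stragglers contribute only the layers they reached during backpropagation. The dictionary-of-partial-updates interface is an assumption for illustration.

```python
# Illustrative layer-wise aggregation (SALF-style sketch): stragglers finish
# backpropagation only for the last few layers, so each layer of the global model
# is averaged over the clients that actually produced an update for it.
import torch


def layerwise_aggregate(global_state, partial_updates):
    """partial_updates: list of dicts mapping a subset of layer names to updated tensors."""
    new_state = {}
    for name, param in global_state.items():
        contribs = [u[name].float() for u in partial_updates if name in u]
        if contribs:                                   # some client reached this layer
            new_state[name] = torch.stack(contribs).mean(0)
        else:                                          # no update: keep the old weights
            new_state[name] = param
    return new_state
```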
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - AEDFL: Efficient Asynchronous Decentralized Federated Learning with
Heterogeneous Devices [61.66943750584406]
We propose an Asynchronous Efficient Decentralized FL framework, i.e., AEDFL, in heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
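The sparse-training idea can be illustrated by magnitude-based sparsification of local updates before transmission; the fixed sparsity ratio below is an assumption, whereas AEDFL adapts it.

```python
# Illustrative sparse-update sketch in the spirit of AEDFL's adaptive sparse training:
# keep only the largest-magnitude entries of each local update before transmission.
# The fixed keep ratio is an assumption; the actual method adapts it dynamically.
import torch


def sparsify_update(update, keep_ratio=0.1):
    """update: dict of parameter-delta tensors; returns a sparsified copy."""
    sparse = {}
    for name, delta in update.items():
        flat = delta.flatten().float()
        k = max(1, int(keep_ratio * flat.numel()))
        idx = torch.topk(flat.abs(), k).indices        # largest-magnitude entries
        mask = torch.zeros_like(flat)
        mask[idx] = 1.0
        sparse[name] = (flat * mask).view_as(delta)
    return sparse
```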
arXiv Detail & Related papers (2023-12-18T05:18:17Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
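As a rough illustration of the hybrid paradigm, the sketch below blends a model trained centrally at the BS on offloaded data with a federated model aggregated from the devices; the data-proportional weighting is an assumption, and the wireless and beamforming aspects of the actual method are omitted.

```python
# Illustrative blend of a centralized-learning model and a federated model
# (SemiFL-style sketch); the weighting by sample counts is an assumption.
import torch


def semifl_merge(cl_state, fl_state, n_cl, n_fl):
    """Blend the BS-trained model (on offloaded data) with the FedAvg model."""
    w_cl = n_cl / (n_cl + n_fl)                       # weight by data proportions (assumed)
    return {k: w_cl * cl_state[k].float() + (1 - w_cl) * fl_state[k].float()
            for k in cl_state}
```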
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Importance of Smoothness Induced by Optimizers in FL4ASR: Towards
Understanding Federated Learning for End-to-End ASR [12.108696564200052]
We start by training End-to-End Automatic Speech Recognition (ASR) models using Federated Learning (FL).
We examine the fundamental considerations that can be pivotal in minimizing the performance gap in terms of word error rate between models trained using FL versus their centralized counterpart.
arXiv Detail & Related papers (2023-09-22T17:23:01Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
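A minimal sketch of a momentum-based variance-reduced (STORM-style) local step in this spirit, assuming a PyTorch model; FAFED's adaptive learning-rate rule is omitted.

```python
# Illustrative momentum-based variance-reduced step (STORM-style sketch); this is
# not FAFED itself, only the variance-reduction idea its summary refers to.
import copy
import torch
import torch.nn.functional as F


def vr_momentum_step(model, prev_state, momentum, x, y, a=0.1, lr=0.01):
    """momentum: list of tensors shaped like model.parameters(); prev_state: last state_dict."""
    # gradient at the current parameters on this mini-batch
    cur_grads = torch.autograd.grad(F.cross_entropy(model(x), y), list(model.parameters()))
    # gradient at the previous parameters on the same mini-batch
    prev_model = copy.deepcopy(model)
    prev_model.load_state_dict(prev_state)
    prev_grads = torch.autograd.grad(F.cross_entropy(prev_model(x), y),
                                     list(prev_model.parameters()))
    next_prev_state = copy.deepcopy(model.state_dict())   # becomes "previous" next step
    with torch.no_grad():
        for p, m, g, gp in zip(model.parameters(), momentum, cur_grads, prev_grads):
            m.mul_(1 - a).add_(g - (1 - a) * gp)   # d_t = g_t + (1-a)(d_{t-1} - g_{t-1})
            p.add_(m, alpha=-lr)
    return next_prev_state, momentum
```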
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Depersonalized Federated Learning: Tackling Statistical Heterogeneity by
Alternating Stochastic Gradient Descent [6.394263208820851]
Federated learning (FL) enables devices to train a common machine learning (ML) model for intelligent inference without data sharing.
Raw data held by the various cooperating participants is typically non-identically distributed across devices.
We propose a new FL scheme that tackles this statistical heterogeneity via alternating stochastic gradient descent, which significantly speeds up the training process.
arXiv Detail & Related papers (2022-10-07T10:30:39Z) - Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
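A simple illustration of the quantize-before-upload step, using per-tensor uniform quantization; the bitwidth here is fixed, whereas the paper selects it via the RL policy.

```python
# Illustrative uniform quantization of local model parameters before upload.
# The RL-based bitwidth selection of the actual scheme is omitted, and the
# per-tensor max-abs scaling is an assumption for the sketch.
import torch


def quantize_state(state, num_bits=8):
    """Quantize each tensor to num_bits and return (integer tensors, per-tensor scales)."""
    levels = 2 ** (num_bits - 1) - 1
    q, scales = {}, {}
    for name, tensor in state.items():
        t = tensor.float()
        scale = t.abs().max().clamp(min=1e-12) / levels     # per-tensor scale
        q[name] = torch.round(t / scale).to(torch.int32)
        scales[name] = scale
    return q, scales


def dequantize_state(q, scales):
    """Reconstruct float tensors from the quantized representation."""
    return {name: q[name].float() * scales[name] for name in q}
```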
arXiv Detail & Related papers (2022-09-21T08:52:51Z) - On the Importance and Applicability of Pre-Training for Federated
Learning [28.238484580662785]
We conduct a systematic study to explore pre-training for federated learning.
We find that pre-training can not only improve FL but also close its accuracy gap to its centralized learning counterpart.
We conclude our paper with an attempt to understand the effect of pre-training on FL.
arXiv Detail & Related papers (2022-06-23T06:02:33Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for
Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method, FedFTG, to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
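A rough sketch of one data-free distillation step in this spirit: a generator produces pseudo-samples, the averaged client logits act as the teacher, and the global model is fine-tuned to match them. The adversarial generator training of the actual method is omitted and all module names are placeholders.

```python
# Illustrative data-free distillation step (FedFTG-style sketch), not the paper's code.
# `generator`, `client_models`, and `global_model` are placeholder PyTorch modules.
import torch
import torch.nn.functional as F


def distill_step(global_model, client_models, generator, opt, batch=64, z_dim=100):
    z = torch.randn(batch, z_dim)
    with torch.no_grad():
        x = generator(z)                                        # pseudo inputs
        teacher = torch.stack([m(x) for m in client_models]).mean(0)
    student_logp = F.log_softmax(global_model(x), dim=1)
    loss = F.kl_div(student_logp, F.softmax(teacher, dim=1), reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```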
arXiv Detail & Related papers (2022-03-17T11:18:17Z) - Towards Federated Learning on Time-Evolving Heterogeneous Data [13.080665001587281]
Federated Learning (FL) is an emerging learning paradigm that preserves privacy by ensuring client data locality on edge devices.
Despite recent research efforts on improving the optimization of heterogeneous data, the impact of time-evolving heterogeneous data in real-world scenarios has not been well studied.
We propose Continual Federated Learning (CFL), a flexible framework, to capture the time-evolving heterogeneity of FL.
arXiv Detail & Related papers (2021-12-25T14:58:52Z) - Critical Learning Periods in Federated Learning [11.138980572551066]
Federated learning (FL) is a popular technique to train machine learning (ML) models with decentralized data.
We show that the final test accuracy of FL is dramatically affected by the early phase of the training process.
arXiv Detail & Related papers (2021-09-12T21:06:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.