DYNAMITE: Dynamic Interplay of Mini-Batch Size and Aggregation Frequency
for Federated Learning with Static and Streaming Dataset
- URL: http://arxiv.org/abs/2310.14906v1
- Date: Fri, 20 Oct 2023 08:36:12 GMT
- Title: DYNAMITE: Dynamic Interplay of Mini-Batch Size and Aggregation Frequency
for Federated Learning with Static and Streaming Dataset
- Authors: Weijie Liu, Xiaoxi Zhang, Jingpu Duan, Carlee Joe-Wong, Zhi Zhou, and
Xu Chen
- Abstract summary: Federated Learning (FL) is a distributed learning paradigm that can coordinate heterogeneous edge devices to perform model training without sharing private data.
This paper introduces novel analytical models and optimization algorithms that leverage the interplay between batch size and aggregation frequency to navigate the trade-offs among convergence, cost, and completion time for dynamic FL training.
- Score: 23.11152686493894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed learning paradigm that can
coordinate heterogeneous edge devices to perform model training without sharing
private data. While prior works have focused on analyzing FL convergence with
respect to hyperparameters like batch size and aggregation frequency, the joint
effects of adjusting these parameters on model performance, training time, and
resource consumption have been overlooked, especially when facing dynamic data
streams and network characteristics. This paper introduces novel analytical
models and optimization algorithms that leverage the interplay between batch
size and aggregation frequency to navigate the trade-offs among convergence,
cost, and completion time for dynamic FL training. We establish a new
convergence bound for training error considering heterogeneous datasets across
devices and derive closed-form solutions for co-optimized batch size and
aggregation frequency that are consistent across all devices. Additionally, we
design an efficient algorithm for assigning different batch configurations
across devices, improving model accuracy and addressing the heterogeneity of
both data and system characteristics. Further, we propose an adaptive control
algorithm that dynamically estimates network states, efficiently samples
appropriate data batches, and effectively adjusts batch sizes and aggregation
frequency on the fly. Extensive experiments demonstrate the superiority of our
offline optimal solutions and online adaptive algorithm.
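The adaptive control loop described in the abstract (estimate network state, then re-select batch size and aggregation frequency on the fly) can be illustrated with a minimal Python sketch. Everything below is a hypothetical stand-in: the functions estimate_state, round_cost, progress_proxy, and choose_config, the candidate grids, and all constants are invented for illustration and do not reproduce the paper's analytical models or closed-form solutions.

```python
# Hypothetical sketch of a DYNAMITE-style control loop (not the authors' code).
# Each round we re-estimate network/compute state and pick a mini-batch size b
# and aggregation frequency tau that balance convergence progress against
# per-round cost. The scoring rule is a stand-in heuristic.
import random

CANDIDATE_B = [16, 32, 64, 128]   # candidate mini-batch sizes (assumed grid)
CANDIDATE_TAU = [1, 2, 5, 10]     # local steps between aggregations (assumed grid)

def estimate_state():
    """Stand-in for online estimation of bandwidth and compute speed."""
    return {"bandwidth": random.uniform(1.0, 10.0),  # MB/s (assumed)
            "flops": random.uniform(1.0, 5.0)}       # GFLOP/s (assumed)

def round_cost(b, tau, state, model_mb=10.0, gflop_per_sample=0.01):
    """Time for one round: tau local steps of batch b, then one model upload."""
    compute_t = tau * b * gflop_per_sample / state["flops"]
    comm_t = model_mb / state["bandwidth"]
    return compute_t + comm_t

def progress_proxy(b, tau):
    """Crude convergence proxy: larger b cuts gradient variance, larger tau adds drift."""
    return (1.0 - 1.0 / b) / (1.0 + 0.1 * tau)

def choose_config(state):
    """Grid-search (b, tau) maximizing progress per unit time."""
    return max(((b, tau) for b in CANDIDATE_B for tau in CANDIDATE_TAU),
               key=lambda bt: progress_proxy(*bt) / round_cost(*bt, state))

for rnd in range(5):
    state = estimate_state()
    b, tau = choose_config(state)
    print(f"round {rnd}: bw={state['bandwidth']:.1f} MB/s -> batch={b}, tau={tau}")
```

In the paper's setting, the grid search in choose_config would be replaced by the derived closed-form batch size and aggregation frequency, with heterogeneous per-device batch configurations assigned on top.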
Related papers
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We characterize the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that transmitting them via AirFedAvg may cause an aggregation error; a toy simulation of this effect appears after the list below.
In addition, we consider more practical signal processing schemes to improve communication efficiency and extend the convergence analysis to the different forms of model aggregation error these schemes introduce.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome the communication and computation burdens of resource-limited edge devices.
This framework splits the learning model into a global part, with model pruning, that is shared with all devices to learn data representations, and a personalized part to be fine-tuned for a specific device; a minimal sketch of this split appears after this list.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, accounting for data heterogeneity alongside wireless resource allocation.
We formulate a loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- FedDCT: A Dynamic Cross-Tier Federated Learning Framework in Wireless Networks [5.914766366715661]
Federated Learning (FL) trains a global model across devices without exposing local data.
However, resource heterogeneity and inevitable stragglers in wireless networks severely impact the efficiency and accuracy of FL training.
We propose FedDCT, a novel Dynamic Cross-Tier Federated Learning framework.
arXiv Detail & Related papers (2023-07-10T08:54:07Z)
- Asynchronous Multi-Model Dynamic Federated Learning over Wireless Networks: Theory, Modeling, and Optimization [20.741776617129208]
Federated learning (FL) has emerged as a key technique for distributed machine learning (ML).
We first formulate rectangular scheduling steps and functions to capture the impact of system parameters on learning performance.
Our analysis sheds light on the joint impact of device training variables and asynchronous scheduling decisions.
arXiv Detail & Related papers (2023-05-22T21:39:38Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework through joint transceiver and fronthaul quantization design, for which system optimization algorithms based on successive convex approximation and alternate convex search are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm, FAFED, based on a momentum-based variance reduction technique in the cross-silo FL setting.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient Federated Learning [11.093360539563657]
Federated learning (FL) has emerged as a solution to deal with the risk of privacy leaks in machine learning training.
We propose FedGPO to optimize the energy-efficiency of FL use cases while guaranteeing model convergence.
In our experiments, FedGPO improves model convergence time by 2.4 times and achieves 3.6 times higher energy efficiency than the baseline settings.
arXiv Detail & Related papers (2022-11-30T01:22:57Z)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
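As noted in the AirFedAvg entry above, over-the-air computation superposes the devices' analog transmissions on the channel, so the server observes a noisy sum rather than an exact average. Here is a toy numpy simulation of that aggregation error; the dimensions, noise level, and ideal power control are all assumptions for illustration.

```python
# Toy simulation of over-the-air (AirComp) aggregation: the wireless channel
# sums the devices' analog transmissions, so the server receives
# sum_i(update_i) + noise instead of the exact sum. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_devices, dim, noise_std = 10, 1000, 0.05   # assumed sizes and channel noise

local_updates = rng.normal(size=(n_devices, dim))   # e.g., per-device model differences
exact_avg = local_updates.mean(axis=0)

# Channel superposition with ideal power control plus additive Gaussian noise.
received = local_updates.sum(axis=0) + rng.normal(scale=noise_std, size=dim)
air_avg = received / n_devices

rel_err = np.linalg.norm(air_avg - exact_avg) / np.linalg.norm(exact_avg)
print(f"relative aggregation error from channel noise: {rel_err:.4f}")
```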
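The partial pruning and personalization split from the adaptive model pruning entry above can likewise be sketched in a few lines of PyTorch. The architecture, pruning ratio, and parameter partition below are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of partial pruning + personalization (illustrative only).
# The backbone is pruned and shared globally; each device keeps its own head.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SplitModel(nn.Module):
    """Global backbone (pruned, aggregated) + per-device head (personalized)."""
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)  # fine-tuned locally, never aggregated

    def forward(self, x):
        return self.head(self.backbone(x))

model = SplitModel()

# Magnitude-prune 30% of the shared backbone weights (the ratio is an assumption).
prune.l1_unstructured(model.backbone[0], name="weight", amount=0.3)

# Only backbone parameters would be sent to the server for aggregation.
global_part = {k: v for k, v in model.state_dict().items() if k.startswith("backbone")}
print(sorted(global_part))
```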