Resource Allocation for Compression-aided Federated Learning with High
Distortion Rate
- URL: http://arxiv.org/abs/2206.06976v1
- Date: Thu, 2 Jun 2022 05:00:37 GMT
- Title: Resource Allocation for Compression-aided Federated Learning with High
Distortion Rate
- Authors: Xuan-Tung Nguyen, Minh-Duong Nguyen, Quoc-Viet Pham, Vinh-Quang Do,
Won-Joo Hwang
- Abstract summary: We formulate an optimization problem for compression-aided FL that captures the relationship between the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the number of participating IoT devices, we can avoid the training divergence of compression-aided FL while maintaining communication efficiency.
- Score: 3.7530276852356645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, considerable effort has been devoted to reducing the
communication burden in federated learning (FL), e.g., via model quantization, data
sparsification, and model compression. However, existing methods that
boost communication efficiency in FL incur a considerable trade-off
between communication efficiency and the global convergence rate. We formulate an
optimization problem for compression-aided FL, which captures the relationship
between the distortion rate, number of participating IoT devices, and
convergence rate. Following that, the objective function is to minimize the
total transmission time for FL convergence. Because the problem is non-convex,
we propose to decompose it into sub-problems. Based on the properties of an FL
model, we first determine the number of IoT devices participating in the FL
process. Then, the communication between IoT devices and the server is
optimized by efficiently allocating wireless resources based on a coalition
game. Our theoretical analysis shows that, by actively controlling the number
of participating IoT devices, we can avoid the training divergence of
compression-aided FL while maintaining the communication efficiency.
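The two-stage decomposition described in the abstract (first fix the number of participants, then allocate wireless resources through a coalition game) can be illustrated with a minimal sketch. Everything below, including the selection rule, the Rayleigh channel model, and the coalition utility, is an assumption for illustration, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical two-stage sketch; rules and names below are illustrative only.

def select_num_devices(distortion_rate, total_devices):
    """Stage 1: choose how many IoT devices participate so that the extra
    error introduced by compression (distortion) stays controlled.
    The scaling rule is purely illustrative."""
    k = int(np.ceil(total_devices * min(1.0, 2.0 * distortion_rate)))
    return max(1, min(k, total_devices))

def coalition_resource_allocation(channel_gains, n_channels):
    """Stage 2: greedy coalition-style assignment of devices to channels;
    each device joins the coalition (channel) with the best marginal rate,
    a stand-in for the paper's coalition-game allocation."""
    coalitions = {c: [] for c in range(n_channels)}
    for dev, gains in enumerate(channel_gains):
        # Marginal rate shrinks as a coalition grows (shared bandwidth).
        best = max(range(n_channels),
                   key=lambda c: gains[c] / (len(coalitions[c]) + 1))
        coalitions[best].append(dev)
    return coalitions

rng = np.random.default_rng(0)
n_devices, n_channels = 20, 4
k = select_num_devices(distortion_rate=0.3, total_devices=n_devices)
gains = rng.rayleigh(scale=1.0, size=(k, n_channels))
print(k, coalition_resource_allocation(gains, n_channels))
```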
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
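A minimal sketch of the global/personalized split described above: the shared part is pruned before transmission, while the personalized head stays on the device. The layer names, the magnitude-based pruning rule, and the keep ratio are hypothetical illustrations, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy model: the "global" part (feature extractor) is pruned and shared,
# the "personalized" part (head) is kept and fine-tuned locally.
model = {
    "features.w": rng.normal(size=(64, 32)),  # global, shared + pruned
    "head.w": rng.normal(size=(32, 10)),      # personalized, never transmitted
}

def prune_global_part(params, keep_ratio=0.5):
    """Magnitude-based pruning applied to the shared (global) parameters only."""
    pruned = {}
    for name, w in params.items():
        if name.startswith("features."):
            thresh = np.quantile(np.abs(w), 1.0 - keep_ratio)
            pruned[name] = np.where(np.abs(w) >= thresh, w, 0.0)
        else:
            pruned[name] = w  # personalized part stays on-device
    return pruned

shared_update = {k: v for k, v in prune_global_part(model).items()
                 if k.startswith("features.")}
print({k: int((v != 0).sum()) for k, v in shared_update.items()})
```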
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
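A minimal sketch of age-aware weighting in asynchronous aggregation: stale client updates are down-weighted according to how many global rounds have passed since the client pulled its model. The exponential decay used here is an assumed weighting for illustration, not the paper's derived rule.

```python
import numpy as np

def age_aware_aggregate(global_model, updates, decay=0.5):
    """Combine asynchronously received client updates, down-weighting stale ones.

    updates: list of (delta, age) pairs, where `age` is the number of global
    rounds elapsed since the client fetched the model it trained on.
    """
    weights = np.array([decay ** age for _, age in updates], dtype=float)
    weights /= weights.sum()
    for w, (delta, _) in zip(weights, updates):
        global_model = global_model + w * delta
    return global_model

model = np.zeros(4)
updates = [(np.ones(4), 0), (2 * np.ones(4), 3)]  # one fresh, one stale update
print(age_aware_aggregate(model, updates))
```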
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
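A minimal sketch of the quantize-then-aggregate step described above, using uniform quantization at a chosen per-device bitwidth; the quantizer and the fixed bitwidth assignment (which the paper instead optimizes with model-based RL) are illustrative stand-ins.

```python
import numpy as np

def quantize(w, bits):
    """Uniform quantization of a parameter vector to `bits` bits per entry."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    q = np.round((w - lo) / (hi - lo + 1e-12) * levels)
    return q.astype(np.uint32), lo, hi, levels

def dequantize(q, lo, hi, levels):
    return lo + q.astype(np.float64) / levels * (hi - lo)

# Each device quantizes its local parameters at its assigned bitwidth,
# and the server averages the dequantized models into a global model.
rng = np.random.default_rng(2)
local_models = [rng.normal(size=100) for _ in range(5)]
bitwidths = [2, 4, 4, 8, 8]  # per-device bitwidths (fixed here, not RL-chosen)
recovered = [dequantize(*quantize(w, b)) for w, b in zip(local_models, bitwidths)]
global_model = np.mean(recovered, axis=0)
print("max aggregation error vs. unquantized:",
      np.abs(global_model - np.mean(local_models, axis=0)).max())
```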
arXiv Detail & Related papers (2022-09-21T08:52:51Z) - CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
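A minimal sketch of spectrum sharing between FL and IT devices in an OFDM system: a fraction of subcarriers carries FL uploads and the rest carries ordinary traffic. The fixed split and the rate formula are illustrative assumptions, not the CFLIT optimization.

```python
import numpy as np

def split_subcarriers(n_subcarriers, fl_fraction):
    """Assign a fixed fraction of OFDM subcarriers to FL uploads, the rest to IT."""
    n_fl = int(round(n_subcarriers * fl_fraction))
    return list(range(n_fl)), list(range(n_fl, n_subcarriers))

def sum_rate(gains, noise=1e-3):
    """Shannon rate summed over the assigned subcarriers (unit bandwidth each)."""
    return float(np.sum(np.log2(1.0 + gains / noise)))

rng = np.random.default_rng(3)
gains = rng.exponential(scale=1e-2, size=64)  # per-subcarrier channel gains
fl_sc, it_sc = split_subcarriers(len(gains), fl_fraction=0.25)
print("FL sum rate:", sum_rate(gains[fl_sc]), "IT sum rate:", sum_rate(gains[it_sc]))
```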
arXiv Detail & Related papers (2022-07-26T13:17:28Z) - HCFL: A High Compression Approach for Communication-Efficient Federated
Learning in Very Large Scale IoT Networks [27.963991995365532]
Federated learning (FL) is a new artificial intelligence concept that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending the raw data to centralized nodes for processing.
Despite numerous advantages, limited computing resources at IoT devices and high communication costs for exchanging model parameters severely restrict FL applications in massive IoT networks.
We develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very large scale IoT networks.
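A minimal, generic sketch of a compress/decompress round trip for model updates, using top-k sparsification; HCFL's actual encoder is different, so treat this purely as an illustration of the high-compression idea.

```python
import numpy as np

def compress_topk(update, ratio=0.01):
    """Keep only the largest-magnitude entries; transmit (indices, values)."""
    k = max(1, int(update.size * ratio))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx.astype(np.uint32), update[idx]

def decompress_topk(idx, vals, size):
    out = np.zeros(size)
    out[idx] = vals
    return out

rng = np.random.default_rng(4)
update = rng.normal(size=10_000)
idx, vals = compress_topk(update, ratio=0.01)       # ~100x fewer values sent
restored = decompress_topk(idx, vals, update.size)
print("kept entries:", idx.size, "relative error:",
      np.linalg.norm(update - restored) / np.linalg.norm(update))
```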
arXiv Detail & Related papers (2022-04-14T05:29:40Z) - Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
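A minimal sketch of how retransmissions can reduce the estimation error of an over-the-air aggregate: repeated noisy receptions of the same analog sum are averaged at the server. The additive-noise channel and the fixed retransmission counts are illustrative assumptions.

```python
import numpy as np

def over_the_air_sum(local_updates, noise_std, rng):
    """One analog aggregation: the channel adds the signals plus receiver noise."""
    true_sum = np.sum(local_updates, axis=0)
    return true_sum + rng.normal(scale=noise_std, size=true_sum.shape)

def aggregate_with_retransmissions(local_updates, n_tx, noise_std, rng):
    """Average several noisy receptions; noise variance drops roughly as 1/n_tx."""
    receptions = [over_the_air_sum(local_updates, noise_std, rng) for _ in range(n_tx)]
    return np.mean(receptions, axis=0) / len(local_updates)

rng = np.random.default_rng(5)
updates = [rng.normal(size=50) for _ in range(10)]
for n_tx in (1, 4, 16):
    est = aggregate_with_retransmissions(updates, n_tx, noise_std=1.0, rng=rng)
    err = np.linalg.norm(est - np.mean(updates, axis=0))
    print(f"retransmissions={n_tx:2d}  estimation error={err:.3f}")
```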
arXiv Detail & Related papers (2021-11-19T15:17:15Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - 1-Bit Compressive Sensing for Efficient Federated Learning Over the Air [32.14738452396869]
This paper develops and analyzes a communication-efficient scheme for federated learning (FL) over the air, which incorporates 1-bit compressive sensing (CS) into analog aggregation transmissions.
For scalable computing, we develop an efficient implementation that is suitable for large-scale networks.
Simulation results show that our proposed 1-bit CS based FL over the air achieves comparable performance to the ideal case.
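A minimal sketch of the 1-bit compressive-sensing idea: each device projects its sparse update with a shared random matrix, keeps only the signs of the measurements, and the server forms a crude estimate by back-projecting. The matched-filter style reconstruction here is a rough stand-in for the paper's actual decoder.

```python
import numpy as np

rng = np.random.default_rng(6)
d, m, sparsity = 200, 80, 5          # update dimension, #measurements, #nonzeros

# A sparse local model update (only a few coordinates change noticeably).
update = np.zeros(d)
support = rng.choice(d, size=sparsity, replace=False)
update[support] = rng.normal(size=sparsity)

A = rng.normal(size=(m, d)) / np.sqrt(m)   # shared random sensing matrix
y = np.sign(A @ update)                    # 1-bit measurements actually sent

# Crude one-shot estimate: back-project the signs and rescale.
# (1-bit measurements lose the scale, so the true norm is assumed known here.)
est = A.T @ y
est *= np.linalg.norm(update) / (np.linalg.norm(est) + 1e-12)

top = np.argsort(-np.abs(est))[:sparsity]
print("recovered support overlap:", len(set(top) & set(support)), "of", sparsity)
```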
arXiv Detail & Related papers (2021-03-30T03:50:31Z) - Delay Minimization for Federated Learning Over Wireless Communication
Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
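A minimal sketch of the bisection-search idea for delay minimization: given a candidate round deadline, check whether the per-device bandwidth shares needed to meet it fit within the total bandwidth, then bisect on the deadline. The per-device delay model below is an illustrative assumption, not the paper's system model.

```python
import numpy as np

def feasible(deadline, compute_time, data_bits, spectral_eff, total_bw):
    """Can every device finish local compute plus upload within `deadline`?
    Device i needs bandwidth data_bits[i] / (spectral_eff[i] * remaining_time)."""
    remaining = deadline - compute_time
    if np.any(remaining <= 0):
        return False
    needed_bw = data_bits / (spectral_eff * remaining)
    return needed_bw.sum() <= total_bw

def min_delay_bisection(compute_time, data_bits, spectral_eff, total_bw,
                        lo=0.0, hi=100.0, tol=1e-4):
    """Bisection search for the smallest feasible round deadline (seconds)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid, compute_time, data_bits, spectral_eff, total_bw):
            hi = mid
        else:
            lo = mid
    return hi

rng = np.random.default_rng(7)
n = 8
compute_time = rng.uniform(0.1, 0.5, n)     # local computation time (s)
data_bits = np.full(n, 1e6)                 # bits per model upload
spectral_eff = rng.uniform(1.0, 5.0, n)     # bits/s/Hz per device
delay = min_delay_bisection(compute_time, data_bits, spectral_eff, total_bw=1e6)
print(f"minimum round delay: {delay:.3f} s")
```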
arXiv Detail & Related papers (2020-07-05T19:00:07Z)