1-Bit Compressive Sensing for Efficient Federated Learning Over the Air
- URL: http://arxiv.org/abs/2103.16055v1
- Date: Tue, 30 Mar 2021 03:50:31 GMT
- Title: 1-Bit Compressive Sensing for Efficient Federated Learning Over the Air
- Authors: Xin Fan, Yue Wang, Yan Huo, and Zhi Tian
- Abstract summary: This paper develops and analyzes a communication-efficient scheme for federated learning (FL) over the air, which incorporates 1-bit compressive sensing (CS) into analog aggregation transmissions.
For scalable computing, we develop an efficient implementation that is suitable for large-scale networks.
Simulation results show that our proposed 1-bit CS based FL over the air achieves comparable performance to the ideal case.
- Score: 32.14738452396869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For distributed learning among collaborative users, this paper develops and
analyzes a communication-efficient scheme for federated learning (FL) over the
air, which incorporates 1-bit compressive sensing (CS) into analog aggregation
transmissions. To facilitate design parameter optimization, we theoretically
analyze the efficacy of the proposed scheme by deriving a closed-form
expression for the expected convergence rate of the FL over the air. Our
theoretical results reveal the tradeoff between convergence performance and
communication efficiency as a result of the aggregation errors caused by
sparsification, dimension reduction, quantization, signal reconstruction and
noise. Then, we formulate 1-bit CS based FL over the air as a joint
optimization problem to mitigate the impact of these aggregation errors through
joint optimal design of worker scheduling and power scaling policy. An
enumeration-based method is proposed to solve this non-convex problem, which is
optimal but becomes computationally infeasible as the number of devices
increases. For scalable computing, we resort to the alternating direction
method of multipliers (ADMM) technique to develop an efficient implementation
that is suitable for large-scale networks. Simulation results show that our
proposed 1-bit CS based FL over the air achieves comparable performance to the
ideal case where conventional FL without compression and quantization is
applied over error-free aggregation, at much reduced communication overhead and
transmission latency.
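The worker-side pipeline the abstract describes (sparsification, dimension reduction, 1-bit quantization) can be sketched as a toy example. The dimensions, sparsity level, and Gaussian measurement matrix below are illustrative assumptions, and the server-side signal reconstruction step (e.g., binary iterative hard thresholding) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify_topk(g, k):
    """Keep only the k largest-magnitude entries (gradient sparsification)."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

d, m, k = 128, 64, 8                          # model dim, measurements, sparsity (assumed)
A = rng.standard_normal((m, d)) / np.sqrt(m)  # random measurement matrix (dimension reduction)

g = rng.standard_normal(d)                    # a worker's local gradient
g_sparse = sparsify_topk(g, k)                # step 1: sparsification
b = np.sign(A @ g_sparse)                     # step 2: compress, then keep only sign bits
```

Each worker would then transmit m = 64 sign bits rather than 128 real-valued entries, which is where the communication savings (and the aggregation errors analyzed in the paper) come from.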
Related papers
- Over-the-Air Federated Learning via Weighted Aggregation [9.043019524847491]
This paper introduces a new federated learning scheme that leverages over-the-air computation.
A novel feature of this scheme is the use of adaptive weights during aggregation.
We provide a mathematical methodology to derive the convergence bound for the proposed scheme.
arXiv Detail & Related papers (2024-09-12T08:07:11Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
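The aggregation error central to these over-the-air schemes can be seen in a toy AirComp model. The Rayleigh fading channels and perfect channel-inversion power control below are illustrative assumptions, not any paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 10, 32                      # number of devices, update dimension (assumed)
x = rng.standard_normal((K, d))    # local updates (e.g., gradients)
h = rng.rayleigh(size=K)           # channel gains (illustrative fading model)
p = 1.0 / h                        # power scaling that inverts each channel
n = 0.01 * rng.standard_normal(d)  # additive receiver noise

y = (h * p) @ x + n                # signals superpose in the air: sum of updates + noise
est_avg = y / K                    # server's estimate of the average update
true_avg = x.mean(axis=0)
```

With ideal channel inversion the only residual error is the noise term n / K; realistic power constraints and imperfect channel knowledge add the further aggregation errors these convergence analyses account for.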
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
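The bitwidth tradeoff this scheme optimizes can be illustrated with a generic uniform quantizer; this is a sketch under assumptions (a fixed clipping range w_max, mid-rise uniform levels), not the paper's exact quantization scheme:

```python
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Uniform quantizer: map values in [-w_max, w_max] onto 2**bits levels."""
    levels = 2 ** bits
    step = 2.0 * w_max / levels
    q = np.clip(np.round(w / step), -(levels // 2), levels // 2 - 1)
    return q * step

w = np.array([-0.9, -0.1, 0.0, 0.33, 0.8])   # toy local model parameters
for bits in (2, 4, 8):
    err = np.abs(quantize(w, bits) - w).max()
    # Larger bitwidth shrinks the quantization error but costs more uplink bits,
    # which is the action/reward tradeoff an RL-based bitwidth policy explores.
```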
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that captures the tradeoff between the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling participating IoT devices, we can avoid the training divergence of compression-aided FL while maintaining the communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- Joint Optimization of Communications and Federated Learning Over the Air [32.14738452396869]
Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy.
In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks.
arXiv Detail & Related papers (2021-04-08T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.