Communication-Efficient Federated Learning with Binary Neural Networks
- URL: http://arxiv.org/abs/2110.02226v1
- Date: Tue, 5 Oct 2021 15:59:49 GMT
- Title: Communication-Efficient Federated Learning with Binary Neural Networks
- Authors: Yuzhi Yang, Zhaoyang Zhang and Qianqian Yang
- Abstract summary: Federated learning (FL) is a privacy-preserving machine learning setting.
FL involves a frequent exchange of the parameters between all the clients and the server that coordinates the training.
In this paper, we consider training the binary neural networks (BNN) in the FL setting instead of the typical real-valued neural networks.
- Score: 15.614120327271557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a privacy-preserving machine learning setting that
enables many devices to jointly train a shared global model without the need to
reveal their data to a central server. However, FL involves a frequent exchange
of the parameters between all the clients and the server that coordinates the
training. This introduces extensive communication overhead, which can be a
major bottleneck in FL with limited communication links. In this paper, we
consider training the binary neural networks (BNN) in the FL setting instead of
the typical real-valued neural networks to fulfill the stringent delay and
efficiency requirement in wireless edge networks. We introduce a novel FL
framework of training BNN, where the clients only upload the binary parameters
to the server. We also propose a novel parameter updating scheme based on the
Maximum Likelihood (ML) estimation that preserves the performance of the BNN
even without the availability of aggregated real-valued auxiliary parameters
that are usually needed during the training of the BNN. Moreover, for the first
time in the literature, we theoretically derive the conditions under which the
training of BNN is converging. Numerical results show that the proposed FL
framework significantly reduces the communication cost compared to the
conventional neural networks with typical real-valued parameters, and the
performance loss incurred by the binarization can be further compensated by a
hybrid method.
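The abstract does not spell out the Maximum Likelihood updating rule, so the following Python sketch only illustrates the general idea of aggregating binary uploads at the server. It assumes a Gaussian latent-weight model under which the ML estimate of each latent weight follows from the empirical fraction of +1 votes; the function names (`binarize`, `ml_aggregate`), the noise model, and the parameter `sigma` are illustrative assumptions, not the paper's derivation.

```python
# Illustrative sketch of binary-parameter aggregation in FL (not the paper's exact scheme).
# Assumed model: each client's binary weight is the sign of a shared latent real-valued
# weight observed through Gaussian noise, so P(+1) = Phi(mu / sigma); the ML estimate of
# mu from the clients' binary votes is then mu_hat = sigma * Phi^{-1}(p_hat).
import numpy as np
from scipy.stats import norm


def binarize(weights):
    """Client side: map real-valued weights to {-1, +1} before uploading."""
    return np.where(weights >= 0.0, 1.0, -1.0)


def ml_aggregate(binary_updates, sigma=1.0):
    """Server side: estimate latent real-valued weights from the clients' binary votes."""
    stacked = np.stack(binary_updates)                  # shape: (num_clients, num_weights)
    p_hat = np.clip((stacked == 1.0).mean(axis=0), 1e-3, 1.0 - 1e-3)
    return sigma * norm.ppf(p_hat)                      # inverse Gaussian CDF of vote fraction


# Toy round: four clients perturb a common latent weight vector and upload only signs.
rng = np.random.default_rng(0)
latent = rng.normal(size=10)
uploads = [binarize(latent + 0.5 * rng.normal(size=10)) for _ in range(4)]
global_estimate = ml_aggregate(uploads, sigma=0.5)
global_binary = binarize(global_estimate)               # binary model broadcast back to clients
```

Only the {-1, +1} tensors cross the uplink in this sketch, which is where the claimed communication saving over exchanging real-valued parameters would come from.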
Related papers
- The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning [6.9569682335746235]
Spiking Neural Networks (SNNs) have recently gained significant interest in on-chip learning in embedded devices.
In this paper, we explore the inherent robustness of SNNs under noisy communication in Federated Learning.
We propose a novel Federated Learning with TopK Sparsification algorithm to reduce the bandwidth usage for FL training (see the sparsification sketch after this list).
arXiv Detail & Related papers (2024-09-19T13:37:18Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Asymmetrical estimator for training encapsulated deep photonic neural networks [10.709758849326061]
Asymmetrical training (AT) is a backpropagation (BP)-based method that can train an encapsulated deep network.
AT offers significantly improved time and energy efficiency compared to existing BP-PNN methods.
We demonstrate AT's error-tolerant and calibration-free training for encapsulated integrated photonic deep networks.
arXiv Detail & Related papers (2024-05-28T17:27:20Z) - FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) achieves great popularity in the Internet of Things (IoT)
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43% respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z) - SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs)
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z) - Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. SNNs are, however, non-trivial to use in this setting, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z) - On the Tradeoff between Energy, Precision, and Accuracy in Federated
Quantized Neural Networks [68.52621234990728]
Federated learning (FL) over wireless networks requires balancing between accuracy, energy efficiency, and precision.
We propose a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission (see the quantization sketch after this list).
Our framework can reduce energy consumption by up to 53% compared to a standard FL model.
arXiv Detail & Related papers (2021-11-15T17:00:03Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a deep neural network (DNN) to approximate the solutions of the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
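The TopK Sparsification entry in the list above only names the technique. As a generic illustration of how top-k sparsification cuts uplink bandwidth, and not the cited paper's exact algorithm, a sketch with hypothetical helpers `topk_sparsify` and `densify` could look like this:

```python
# Generic top-k sparsification of a model update (illustrative sketch only).
import numpy as np


def topk_sparsify(update, k):
    """Client side: keep only the k largest-magnitude entries; send (indices, values)."""
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest |entries|
    return idx, flat[idx]


def densify(idx, vals, shape):
    """Server side: rebuild a dense update from the sparse (index, value) pairs."""
    dense = np.zeros(int(np.prod(shape)))
    dense[idx] = vals
    return dense.reshape(shape)


# A client transmits roughly 1% of its gradient entries instead of the full tensor.
grad = np.random.default_rng(1).normal(size=(256, 128))
idx, vals = topk_sparsify(grad, k=grad.size // 100)
recovered = densify(idx, vals, grad.shape)
```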
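Likewise, the quantized FL entry above describes representing parameters with a finite number of bits during local training and uplink transmission. A minimal, generic sketch of stochastic uniform quantization, again not the cited framework's exact scheme, is:

```python
# Generic stochastic uniform quantization of model parameters (illustrative sketch only).
import numpy as np


def quantize(x, num_bits=4, rng=None):
    """Map x onto 2**num_bits evenly spaced levels with unbiased stochastic rounding."""
    rng = rng or np.random.default_rng()
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** num_bits - 1
    scaled = (x - lo) / max(hi - lo, 1e-12) * levels
    q = np.floor(scaled + rng.random(x.shape)).astype(int)  # E[q] equals scaled
    return q, lo, hi


def dequantize(q, lo, hi, num_bits=4):
    """Recover an approximate real-valued tensor from the integer levels."""
    levels = 2 ** num_bits - 1
    return q.astype(np.float64) / levels * (hi - lo) + lo


weights = np.random.default_rng(2).normal(size=1000)
q, lo, hi = quantize(weights, num_bits=4)   # 4 bits per weight on the uplink (plus lo/hi)
approx = dequantize(q, lo, hi, num_bits=4)
```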