Optimal Privacy Preserving for Federated Learning in Mobile Edge
Computing
- URL: http://arxiv.org/abs/2211.07166v2
- Date: Sun, 21 May 2023 01:52:58 GMT
- Title: Optimal Privacy Preserving for Federated Learning in Mobile Edge
Computing
- Authors: Hai M. Nguyen, Nam H. Chu, Diep N. Nguyen, Dinh Thai Hoang, Van-Dinh
Nguyen, Minh Hoang Ha, Eryk Dutkiewicz, and Marwan Krunz
- Abstract summary: Federated Learning (FL) with quantization and deliberately added noise over wireless networks is a promising approach to preserve user differential privacy (DP).
This article aims to jointly optimize the quantization and Binomial mechanism parameters and communication resources to maximize the convergence rate under the constraints of the wireless network and DP requirement.
- Score: 35.57643489979182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) with quantization and deliberately added noise over
wireless networks is a promising approach to preserve user differential privacy
(DP) while reducing wireless resources. Specifically, an FL process can be
fused with quantized Binomial mechanism-based updates contributed by multiple
users. However, optimizing quantization parameters, communication resources
(e.g., transmit power, bandwidth, and quantization bits), and the added noise
to guarantee the DP requirement and performance of the learned FL model remains
an open and challenging problem. This article aims to jointly optimize the
quantization and Binomial mechanism parameters and communication resources to
maximize the convergence rate under the constraints of the wireless network and
DP requirement. To that end, we first derive a novel DP budget estimation of
the FL with quantization/noise that is tighter than the state-of-the-art bound.
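As a concrete illustration of this class of mechanisms, the sketch below clips a
local update, quantizes it onto a finite grid with unbiased stochastic rounding,
and adds centered Binomial noise before transmission; the server sums the integer
messages and inverts the encoding. This is a minimal sketch in the spirit of
quantized Binomial mechanisms, not the paper's exact construction; all parameter
values (clip, k_levels, n_trials, p) are hypothetical placeholders rather than
the jointly optimized choices derived in the article.

```python
import numpy as np

def binomial_mechanism_encode(grad, clip=1.0, k_levels=16, n_trials=32,
                              p=0.5, rng=None):
    """One client's quantized, privatized update (illustrative only)."""
    rng = rng or np.random.default_rng()
    g = np.clip(grad, -clip, clip)
    # Affine map onto the grid {0, ..., k_levels - 1}, then unbiased
    # stochastic rounding so quantization introduces no bias.
    scaled = (g + clip) / (2 * clip) * (k_levels - 1)
    low = np.floor(scaled)
    quantized = low + (rng.random(g.shape) < (scaled - low))
    # Centered Binomial noise: discrete (cheap to transmit) and, summed
    # across many clients, close to Gaussian for DP accounting.
    noise = rng.binomial(n_trials, p, size=g.shape) - int(n_trials * p)
    return (quantized + noise).astype(int)

def binomial_mechanism_decode(summed, num_clients, clip=1.0, k_levels=16):
    # Server side: average the summed integer messages and invert the
    # affine encoding to recover a noisy, unbiased average update.
    return (summed / num_clients) / (k_levels - 1) * (2 * clip) - clip
```

Because both the stochastic rounding and the centered noise are unbiased,
decoding the sum of the clients' messages yields an unbiased estimate of the
average clipped update, with variance controlled by k_levels, n_trials, and p.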
We then provide a theoretical bound on the convergence rate. This theoretical
bound is decomposed into two components: the variance of the global
gradient and the quadratic bias, both of which can be minimized by optimizing
the communication resources and the quantization/noise parameters. The resulting
optimization turns out to be a Mixed-Integer Non-linear Programming (MINLP)
problem. To tackle it, we first transform this MINLP problem into a new problem
whose solutions are proved to be the optimal solutions of the original one. We
then propose an approximate algorithm to solve the transformed problem with an
arbitrary relative error guarantee. Extensive simulations show that under the
same wireless resource constraints and DP protection requirements, the proposed
approximate algorithm achieves an accuracy close to the accuracy of the
conventional FL without quantization/noise. Moreover, the proposed approach
achieves a higher convergence rate while preserving users' privacy.
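The coupling of integer decision variables (e.g., quantization bits) with
continuous ones (e.g., transmit power and bandwidth) is what makes the problem a
MINLP. The toy sketch below illustrates only the generic structure of such
problems, enumerating the integer variable and solving a continuous sub-problem
for each fixed choice. The surrogate objective is a hypothetical stand-in for
the paper's variance-plus-squared-bias bound, and the paper's actual method is a
problem transformation plus an approximate algorithm with a relative-error
guarantee, not this brute-force loop.

```python
from scipy.optimize import minimize_scalar

def surrogate_bound(bits, power, bandwidth=1.0):
    # Hypothetical stand-in objective: a variance term that shrinks with
    # more quantization bits and transmit power, plus a squared-bias term
    # that grows once the rate budget is exceeded. Not the paper's bound.
    variance = 2.0 ** (-bits) + 0.1 / power
    excess_rate = max(0.0, 0.05 * bits - bandwidth * power)
    return variance + excess_rate ** 2

def solve_by_enumeration(max_bits=12, power_limit=2.0):
    # Generic MINLP pattern: enumerate the integer variable (bits) and
    # solve the continuous sub-problem (power) for each fixed choice.
    best = None
    for bits in range(1, max_bits + 1):
        res = minimize_scalar(lambda pw: surrogate_bound(bits, pw),
                              bounds=(1e-3, power_limit), method="bounded")
        if best is None or res.fun < best[0]:
            best = (res.fun, bits, res.x)
    return best  # (objective value, bits, transmit power)

print(solve_by_enumeration())
```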
Related papers
- Resource Management for Low-latency Cooperative Fine-tuning of Foundation Models at the Network Edge [35.40849522296486]
Large-scale foundation models (FoMos) can exhibit human-like intelligence.
FoMos need to be adapted to specialized downstream tasks through fine-tuning techniques.
We advocate multi-device cooperation within the device-edge cooperative fine-tuning paradigm.
arXiv Detail & Related papers (2024-07-13T12:47:14Z)
- Gradient Sparsification for Efficient Wireless Federated Learning with Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, the training latency increases due to the limited transmission bandwidth, and the model performance degrades when differential privacy (DP) protection is applied.
We propose a gradient-sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
arXiv Detail & Related papers (2023-04-09T05:21:15Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design [68.86220939532373]
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks of federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
- Wireless Quantized Federated Learning: A Joint Computation and Communication Design [36.35684767732552]
In this paper, we aim to minimize the total convergence time of FL by quantizing the local model parameters prior to uplink transmission.
We jointly optimize the computing and communication resources and the number of quantization bits so as to minimize the convergence time across all global rounds.
arXiv Detail & Related papers (2022-03-11T12:30:08Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in a wireless network.
We consider the case of deep neural network (DNN) models which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets show the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)