Privacy-aware Berrut Approximated Coded Computing for Federated Learning
- URL: http://arxiv.org/abs/2405.01704v2
- Date: Wed, 4 Sep 2024 15:16:46 GMT
- Title: Privacy-aware Berrut Approximated Coded Computing for Federated Learning
- Authors: Xavier Martínez Luaña, Rebeca P. Díaz Redondo, Manuel Fernández Veiga,
- Abstract summary: We propose a solution to guarantee privacy in Federated Learning schemes.
Our proposal is based on the Berrut Approximated Coded Computing, adapted to a Secret Sharing configuration.
- Score: 1.2084539012992408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a strategy that enables the collaborative training of an AI model among different data owners without revealing their private datasets. Even so, FL has privacy vulnerabilities that have been addressed with techniques such as Differential Privacy (DP), Homomorphic Encryption (HE), or Secure Multi-Party Computation (SMPC). However, these techniques have important drawbacks that may narrow their range of application: difficulties in handling non-linear functions and large matrix multiplications, and high communication and computational costs when managing semi-honest nodes. In this context, we propose a solution that guarantees privacy in FL schemes while simultaneously addressing these problems. Our proposal is based on Berrut Approximated Coded Computing, a technique from the Coded Distributed Computing paradigm, adapted to a Secret Sharing configuration to provide input privacy to FL in a scalable way. It can compute non-linear functions and treats the special case of distributed matrix multiplication, a key primitive at the core of many automated learning tasks. Because of these characteristics, it can be applied in a wide range of FL scenarios, since it is independent of the machine learning models or aggregation algorithms used in the FL scheme. We analyze the privacy and complexity of our solution, and extensive numerical results show a good trade-off between privacy and precision.
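The approximation primitive behind Berrut Approximated Coded Computing is Berrut's first-kind rational interpolant, which recovers a non-linear function from its values at a set of nodes. The sketch below illustrates only that interpolation step (not the paper's secret-sharing protocol); the node placement (Chebyshev points), the target function, and the function names are assumptions made for this example.

```python
import numpy as np

def berrut_interpolate(x_nodes, f_vals, x):
    # Berrut's first-kind rational interpolant:
    #   r(x) = sum_j w_j/(x - x_j) * f_j  /  sum_j w_j/(x - x_j),  w_j = (-1)^j
    diffs = x - x_nodes
    hit = np.isclose(diffs, 0.0)
    if hit.any():
        # x coincides with a node: return the stored value exactly
        return f_vals[np.argmax(hit)]
    w = (-1.0) ** np.arange(len(x_nodes))
    terms = w / diffs
    return np.dot(terms, f_vals) / terms.sum()

# Chebyshev points of the first kind on [-1, 1]
n = 32
j = np.arange(n)
x_nodes = np.cos((2 * j + 1) * np.pi / (2 * n))

f = lambda x: np.tanh(3 * x)   # an arbitrary non-linear target function
f_vals = f(x_nodes)            # the "computed" values, one per node

x_test = 0.3
approx = berrut_interpolate(x_nodes, f_vals, x_test)
print(f"abs error at x = {x_test}: {abs(approx - f(x_test)):.2e}")
```

In the coded-computing setting, each worker would hold an evaluation at one node, and the interpolant approximately recovers the function's value at any query point from those evaluations; Berrut's weights keep the scheme numerically stable without requiring exact polynomial reconstruction.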
Related papers
- CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
arXiv Detail & Related papers (2024-09-20T00:23:44Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - UFed-GAN: A Secure Federated Learning Framework with Constrained Computation and Unlabeled Data [50.13595312140533]
We propose a novel framework of UFed-GAN: Unsupervised Federated Generative Adversarial Network, which can capture user-side data distribution without local classification training.
Our experimental results demonstrate the strong potential of UFed-GAN in addressing limited computational resources and unlabeled data while preserving privacy.
arXiv Detail & Related papers (2023-08-10T22:52:13Z) - Over-the-Air Federated Learning In Broadband Communication [0.0]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that operates at the wireless edge.
Some approaches rely on secure multiparty computation, which can be vulnerable to inference attacks.
Others employ differential privacy, but this may lead to decreased test accuracy when dealing with a large number of parties contributing small amounts of data.
arXiv Detail & Related papers (2023-06-03T00:16:27Z) - On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z) - APPFL: Open-Source Software Framework for Privacy-Preserving Federated Learning [0.0]
Federated learning (FL) enables training models at different sites and updating the weights from the training instead of transferring data to a central location and training as in classical machine learning.
We introduce APPFL, the Argonne Privacy-Preserving Federated Learning framework.
APPFL allows users to leverage implemented privacy-preserving algorithms, implement new algorithms, and simulate and deploy various FL algorithms with privacy-preserving techniques.
arXiv Detail & Related papers (2022-02-08T06:23:05Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Private Multi-Task Learning: Formulation and Applications to Federated Learning [44.60519521554582]
Multi-task learning is relevant for privacy-sensitive applications in areas such as healthcare, finance, and IoT computing.
We formalize notions of client-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy for mechanism design and distributed optimization.
We then propose an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning.
arXiv Detail & Related papers (2021-08-30T03:37:36Z) - Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in data processing settings.
arXiv Detail & Related papers (2021-07-09T07:19:23Z) - A Graph Federated Architecture with Privacy Preserving Learning [48.24121036612076]
Federated learning involves a central processor that works with multiple agents to find a global model.
The current architecture of a server connected to multiple clients is highly sensitive to communication failures and computational overloads at the server.
We use cryptographic and differential privacy concepts to privatize the federated learning algorithm that we extend to the graph structure.
arXiv Detail & Related papers (2021-04-26T09:51:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.