SABLE: Secure And Byzantine robust LEarning
- URL: http://arxiv.org/abs/2309.05395v4
- Date: Thu, 14 Dec 2023 10:04:33 GMT
- Title: SABLE: Secure And Byzantine robust LEarning
- Authors: Antoine Choffrut, Rachid Guerraoui, Rafael Pinot, Renaud Sirdey, John Stephan, and Martin Zuber
- Abstract summary: Homomorphic encryption (HE) has emerged as a leading security measure to preserve privacy in distributed learning.
This paper introduces SABLE, the first homomorphic and Byzantine robust distributed learning algorithm.
- Score: 9.455980760111498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the widespread availability of data, machine learning (ML) algorithms
are increasingly being implemented in distributed topologies, wherein various
nodes collaborate to train ML models via the coordination of a central server.
However, distributed learning approaches face significant vulnerabilities,
primarily stemming from two potential threats. Firstly, the presence of
Byzantine nodes poses a risk of corrupting the learning process by transmitting
inaccurate information to the server. Secondly, a curious server may compromise
the privacy of individual nodes, sometimes reconstructing the entirety of the
nodes' data. Homomorphic encryption (HE) has emerged as a leading security
measure to preserve privacy in distributed learning under non-Byzantine
scenarios. However, the extensive computational demands of HE, particularly for
high-dimensional ML models, have deterred attempts to design purely homomorphic
operators for non-linear robust aggregators. This paper introduces SABLE, the
first homomorphic and Byzantine robust distributed learning algorithm. SABLE
leverages HTS, a novel and efficient homomorphic operator implementing the
prominent coordinate-wise trimmed mean robust aggregator. Designing HTS enables
us to implement HMED, a novel homomorphic median aggregator. Extensive
experiments on standard ML tasks demonstrate that SABLE achieves practical
execution times while maintaining an ML accuracy comparable to its non-private
counterpart.
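
As a plaintext point of reference for the aggregation rules SABLE implements under encryption, the sketch below computes the coordinate-wise trimmed mean (the rule HTS realizes homomorphically) and the coordinate-wise median (the rule behind HMED). The trimming parameter b and the toy data are illustrative assumptions, and the homomorphic machinery itself is omitted.

```python
import numpy as np

def coordinate_wise_trimmed_mean(updates, b):
    """Discard the b smallest and b largest values in every coordinate, then average.

    updates: array of shape (n_nodes, dim); b: values trimmed per side, assumed to
    satisfy 2 * b < n_nodes (an illustrative choice, not SABLE's actual parameters).
    """
    updates = np.asarray(updates, dtype=float)
    sorted_updates = np.sort(updates, axis=0)          # sort each coordinate independently
    kept = sorted_updates[b:updates.shape[0] - b]      # drop b extremes on each side
    return kept.mean(axis=0)

def coordinate_wise_median(updates):
    """Coordinate-wise median, i.e. the trimmed mean with maximal trimming."""
    return np.median(np.asarray(updates, dtype=float), axis=0)

# Toy usage: 5 honest updates plus 2 large outliers injected by Byzantine nodes.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(5, 4))
byzantine = np.full((2, 4), 100.0)
all_updates = np.vstack([honest, byzantine])
print(coordinate_wise_trimmed_mean(all_updates, b=2))  # close to the honest mean
print(coordinate_wise_median(all_updates))
```

Choosing b at least as large as the number of Byzantine nodes ensures that, in every coordinate, the values those nodes inject are either discarded or sandwiched between honest values before averaging.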
Related papers
- Lancelot: Towards Efficient and Privacy-Preserving Byzantine-Robust Federated Learning within Fully Homomorphic Encryption [10.685816010576918]
We propose Lancelot, an innovative and computationally efficient Byzantine-robust federated learning (BRFL) framework that employs fully homomorphic encryption (FHE) to safeguard against malicious client activities while preserving data privacy.
Our extensive testing, which includes medical imaging diagnostics and widely-used public image datasets, demonstrates that Lancelot significantly outperforms existing methods, offering more than a twenty-fold increase in processing speed, all while maintaining data privacy.
arXiv Detail & Related papers (2024-08-12T14:48:25Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop PriRoAgg, a general framework utilizing Lagrange coded computing and distributed zero-knowledge proofs, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
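
As background on the Lagrange coded computing primitive mentioned above, and independent of PriRoAgg's actual protocol, here is a minimal sketch over a small prime field. The field modulus, evaluation points, and toy function f are assumptions for illustration; the random padding points that provide privacy and the zero-knowledge layer are omitted.

```python
# Minimal sketch of Lagrange coded computing over a prime field (illustrative parameters).
P = 2_147_483_647  # field modulus, a Mersenne prime (assumption for illustration)

def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs[i], ys[i]) at x, modulo P."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # modular inverse via Fermat
    return total

secrets = [17, 42, 99]          # data blocks held by the encoder
betas = [1, 2, 3]               # encoding points: u(beta_j) = secrets[j]
alphas = [10, 11, 12, 13, 14]   # distinct evaluation points, one per worker

shares = [lagrange_eval(betas, secrets, a) for a in alphas]  # worker i receives u(alpha_i)
f_shares = [s * s % P for s in shares]                       # workers compute f(x) = x^2 locally

# f(u(z)) has degree deg(f) * deg(u) = 4, so 5 worker results determine it; interpolating
# back at the betas recovers f applied to the original data blocks.
print([lagrange_eval(alphas, f_shares, b) for b in betas])   # [289, 1764, 9801]
print([s * s for s in secrets])
```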
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- When approximate design for fast homomorphic computation provides differential privacy guarantees [0.08399688944263842]
Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
Although SHIELD could have other applications, we focus here on one setting and seamlessly integrate it into the SPEED collaborative training framework.
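
SHIELD's approximate argmax is designed for fast homomorphic evaluation and its construction is not reproduced here; as a generic point of comparison, the sketch below shows the classical report-noisy-max (Gumbel-max) mechanism, a randomized argmax whose noise yields differential privacy. The privacy budget and sensitivity values are illustrative assumptions.

```python
import numpy as np

def noisy_argmax(scores, epsilon, sensitivity=1.0, rng=None):
    """Report-noisy-max: add Gumbel noise to each score and return the index of the maximum.

    Via the Gumbel-max trick this samples index i with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity)), i.e. the exponential mechanism for
    argmax selection; epsilon is the privacy budget and sensitivity bounds how much a
    single record can change any score.
    """
    rng = rng or np.random.default_rng()
    noise = rng.gumbel(scale=2.0 * sensitivity / epsilon, size=len(scores))
    return int(np.argmax(np.asarray(scores, dtype=float) + noise))

scores = [3.1, 2.9, 3.05, 1.0]                      # e.g. per-class vote counts
print(noisy_argmax(scores, epsilon=1.0, rng=np.random.default_rng(0)))
```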
arXiv Detail & Related papers (2023-04-06T09:38:01Z)
- HE-MAN -- Homomorphically Encrypted MAchine learning with oNnx models [0.23624125155742057]
Fully homomorphic encryption (FHE) is a promising technique to enable individuals to use ML services without giving up privacy.
We introduce HE-MAN, an open-source machine learning toolset for privacy-preserving inference with ONNX models and homomorphically encrypted data.
Compared to prior work, HE-MAN supports a broad range of ML models in ONNX format out of the box without sacrificing accuracy.
arXiv Detail & Related papers (2023-02-16T12:37:14Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of semi-supervised learning (SSL) and domain adaptation (DA).
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to communication-cost limitations and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data at a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
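
The coded, mini-batch construction of the paper is not reproduced here; as a minimal illustration of the consensus-ADMM update structure it builds on, the sketch below runs plain global-consensus ADMM on a toy least-squares problem split across agents. The penalty parameter rho and the synthetic data are assumptions.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=200):
    """Global-consensus ADMM where agent i holds (A_i, b_i) and minimizes 0.5*||A_i x - b_i||^2.

    The consensus variable z plays the role of the shared model; only x_i + u_i is
    communicated, never the raw local data.
    """
    n, d = len(A_list), A_list[0].shape[1]
    x = [np.zeros(d) for _ in range(n)]
    u = [np.zeros(d) for _ in range(n)]
    z = np.zeros(d)
    for _ in range(iters):
        for i, (A, b) in enumerate(zip(A_list, b_list)):      # local primal updates
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(d),
                                   A.T @ b + rho * (z - u[i]))
        z = np.mean([x[i] + u[i] for i in range(n)], axis=0)  # consensus (averaging) step
        for i in range(n):
            u[i] += x[i] - z                                  # dual updates
    return z

rng = np.random.default_rng(1)
x_true = rng.normal(size=3)
A_list = [rng.normal(size=(20, 3)) for _ in range(4)]
b_list = [A @ x_true for A in A_list]
print(np.linalg.norm(consensus_admm(A_list, b_list) - x_true))  # error shrinks with iterations
```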
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Holdout SGD: Byzantine Tolerant Federated Learning [43.446891082719944]
This work presents HoldOut SGD, a new distributed Byzantine-tolerant federated learning algorithm for Stochastic Gradient Descent (SGD) optimization.
HoldOut SGD uses the well-known machine learning technique of holdout estimation, in a distributed fashion, to select parameter updates that are likely to lead to models with low loss values.
We provide formal guarantees for the HoldOut SGD process in terms of its convergence to the optimal model, and its level of resilience to the fraction of Byzantine workers.
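
The full protocol, including how the voting committee is formed and the formal guarantees, is not reproduced here; the sketch below only illustrates the core holdout idea of scoring proposed updates on held-out data and averaging the best-scoring ones. The keep fraction, toy objective, and helper names are assumptions for illustration.

```python
import numpy as np

def select_by_holdout(current_model, proposed_updates, holdout_loss, keep_fraction=0.5):
    """Score each proposed update by its loss on held-out data and average the best ones.

    Byzantine proposals that would inflate the holdout loss are simply never selected;
    holdout_loss is assumed to be evaluated on data the proposers do not control.
    """
    scores = [holdout_loss(current_model + delta) for delta in proposed_updates]
    k = max(1, int(keep_fraction * len(proposed_updates)))
    best = np.argsort(scores)[:k]                       # lowest holdout loss wins
    return current_model + np.mean([proposed_updates[i] for i in best], axis=0)

# Toy objective: loss(w) = ||w - w_star||^2 with w_star = [1, -2].
w_star = np.array([1.0, -2.0])
loss = lambda w: float(np.sum((w - w_star) ** 2))
w = np.zeros(2)
rng = np.random.default_rng(0)
honest = [0.1 * (w_star - w) + 0.01 * rng.normal(size=2) for _ in range(6)]
byzantine = [np.array([50.0, 50.0]), np.array([-50.0, 50.0])]
print(select_by_holdout(w, honest + byzantine, loss))   # moves toward w_star
```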
arXiv Detail & Related papers (2020-08-11T10:16:37Z)
- Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing [55.012801269326594]
In Byzantine robust distributed learning, a central server wants to train a machine learning model over data distributed across multiple workers.
A fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages.
We propose a simple bucketing scheme that adapts existing robust algorithms to heterogeneous datasets at a negligible computational cost.
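
A minimal sketch of the bucketing idea, assuming a generic downstream robust aggregator (here the coordinate-wise median) and an illustrative bucket size; the paper's analysis prescribes how the bucket size should relate to the fraction of Byzantine workers.

```python
import numpy as np

def bucketed_aggregate(gradients, bucket_size, robust_aggregator, rng=None):
    """Randomly shuffle worker gradients, average within buckets, then apply a robust rule.

    Averaging inside random buckets reduces the heterogeneity seen by the aggregator while
    keeping the fraction of Byzantine-influenced inputs bounded.
    """
    rng = rng or np.random.default_rng()
    gradients = np.asarray(gradients, dtype=float)
    order = rng.permutation(len(gradients))
    buckets = [gradients[order[i:i + bucket_size]]
               for i in range(0, len(gradients), bucket_size)]
    bucket_means = np.stack([bucket.mean(axis=0) for bucket in buckets])
    return robust_aggregator(bucket_means)

grads = np.random.default_rng(0).normal(size=(12, 4))   # 12 workers, 4-dimensional gradients
print(bucketed_aggregate(grads, bucket_size=3,
                         robust_aggregator=lambda g: np.median(g, axis=0)))
```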
arXiv Detail & Related papers (2020-06-16T17:58:53Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
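
The cryptographic protocol itself is not sketched here; to make the object of the computation concrete, the snippet below evaluates a tiny plaintext sum-product network bottom-up (Gaussian leaves, product nodes over independent scopes, weighted sum nodes). The structure and weights are illustrative assumptions.

```python
import numpy as np

# Tiny plaintext SPN: a weighted sum (mixture) of two product nodes over Gaussian leaves.
def leaf(var, mean, std):
    return ("leaf", var, mean, std)

def product(*children):
    return ("prod", children)

def weighted_sum(weights, children):
    return ("sum", weights, children)

def evaluate(node, x):
    """Bottom-up evaluation of the SPN's density at input x."""
    kind = node[0]
    if kind == "leaf":                                  # univariate Gaussian density
        _, var, mean, std = node
        return np.exp(-0.5 * ((x[var] - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))
    if kind == "prod":                                  # product over independent scopes
        return np.prod([evaluate(child, x) for child in node[1]])
    weights, children = node[1], node[2]                # weighted sum (mixture) node
    return sum(w * evaluate(child, x) for w, child in zip(weights, children))

spn = weighted_sum([0.3, 0.7], [
    product(leaf(0, 0.0, 1.0), leaf(1, 0.0, 1.0)),
    product(leaf(0, 2.0, 1.0), leaf(1, 2.0, 1.0)),
])
print(evaluate(spn, x=[0.5, 0.5]))                      # joint density at (0.5, 0.5)
```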
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.