Robust and Verifiable MPC with Applications to Linear Machine Learning Inference
- URL: http://arxiv.org/abs/2506.00518v1
- Date: Sat, 31 May 2025 11:26:57 GMT
- Title: Robust and Verifiable MPC with Applications to Linear Machine Learning Inference
- Authors: Tzu-Shen Wang, Jimmy Dani, Juan Garay, Soamar Homsi, Nitesh Saxena
- Abstract summary: We present an efficient multi-party computation protocol that provides strong security guarantees in settings with a dishonest majority of participants. With complete identifiability, honest parties can detect and unanimously agree on the identity of any malicious party. We benchmark our protocol on an ML-as-a-service scenario in which clients off-load the desired computation to the servers and verify the computation result.
- Score: 1.3612043566819643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present an efficient secure multi-party computation (MPC) protocol that provides strong security guarantees in settings with a dishonest majority of participants who may behave arbitrarily. Unlike the popular MPC implementation known as SPDZ [Crypto '12], which only ensures security with abort, our protocol achieves both complete identifiability and robustness. With complete identifiability, honest parties can detect and unanimously agree on the identity of any malicious party. Robustness allows the protocol to continue the computation without requiring a restart, even when malicious behavior is detected. Additionally, our approach addresses the performance limitations of the protocol by Cunningham et al. [ICITS '17], which, while achieving complete identifiability, is hindered by the costly exponentiation operations required by its choice of commitment scheme. Our protocol follows the approach of Rivinius et al. [S&P '22], using lattice-based commitments for better efficiency, and achieves robustness with the help of a semi-honest trusted third party. We benchmark our robust protocol and show efficient recovery from malicious behavior. Finally, we benchmark our protocol in an ML-as-a-service scenario, in which clients off-load the desired computation to the servers and verify the computation result, running linear ML inference on various datasets. While our efficiency is slightly lower than SPDZ's, we offer stronger security properties that provide distinct advantages.
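To make the outsourced linear-inference setting concrete, here is a minimal sketch, under illustrative assumptions, of why linear layers compose cleanly with additive secret sharing: a client shares its input vector among a few servers, every server evaluates W·x + b on its own shares without interacting, and the client reconstructs the result. The modulus, fixed-point scale, server count, and all function names below are hypothetical; the paper's actual protocol additionally attaches lattice-based commitments and related checks to obtain verifiability, complete identifiability, and robustness, none of which are modeled here.

```python
import random

# Toy parameters (illustrative assumptions, not taken from the paper).
P = 2**61 - 1        # prime modulus for the secret-sharing field
SCALE = 2**16        # fixed-point scale for real-valued inputs
N_SERVERS = 3        # number of outsourcing servers

def encode(vec):
    """Encode real numbers as fixed-point field elements."""
    return [round(v * SCALE) % P for v in vec]

def decode(vec):
    """Map field elements back to (signed) reals."""
    return [((v - P) if v > P // 2 else v) / SCALE for v in vec]

def share(vec):
    """Additively secret-share a vector among N_SERVERS servers."""
    shares = [[random.randrange(P) for _ in vec] for _ in range(N_SERVERS - 1)]
    last = [(v - sum(s[i] for s in shares)) % P for i, v in enumerate(vec)]
    return shares + [last]

def reconstruct(shares):
    """Sum the servers' shares component-wise to recover the secret."""
    return [sum(col) % P for col in zip(*shares)]

def linear_layer_local(W, b_share, x_share):
    """One server's local work: W @ x_share + b_share.
    No interaction is needed because the computation is linear in the shares."""
    return [(sum(W[r][c] * x_share[c] for c in range(len(x_share))) + b_share[r]) % P
            for r in range(len(W))]

# Client side: encode and share the input (and, here, the bias).
W = [[2, 0, 1],
     [1, 3, -1]]                       # public integer weights of a toy model
x_shares = share(encode([0.5, -1.0, 2.0]))
b_shares = share(encode([0.25, -0.5]))

# Server side: each server computes its output share independently.
y_shares = [linear_layer_local(W, b_shares[s], x_shares[s]) for s in range(N_SERVERS)]

# Client side: reconstruct and decode -> approximately [3.25, -5.0].
print(decode(reconstruct(y_shares)))
```

In the full protocol, servers would additionally commit to their shares (the abstract mentions lattice-based commitments), so that a server returning a malformed output share can be identified and the remaining parties can recover and continue.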
Related papers
- BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks [62.897993591443594]
Data poisoning attacks pose one of the biggest threats to modern AI systems.
arXiv Detail & Related papers (2024-12-13T14:56:39Z)
- The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries [14.232901861974819]
Privacy-preserving machine learning (PPML) is an innovative approach that allows for secure data analysis while safeguarding sensitive information.
We introduce an efficient protocol for secure linear function evaluation.
We extend the protocol to handle linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models.
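As a hedged illustration of the standard building block behind secure evaluation once layers are no longer purely linear, the sketch below shows a textbook Beaver-triple multiplication over additive shares; the field, party count, and dealer-style triple generation are toy assumptions and this is not the protocol proposed in the paper above.

```python
import random

P = 2**61 - 1          # toy prime field (illustrative)
N_PARTIES = 2

def share(x):
    """Additively secret-share x among N_PARTIES parties."""
    s = [random.randrange(P) for _ in range(N_PARTIES - 1)]
    return s + [(x - sum(s)) % P]

def reveal(shares):
    return sum(shares) % P

def beaver_triple():
    """Dealer-generated triple a, b, c with c = a*b (preprocessing phase)."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)

def secure_mul(x_sh, y_sh):
    """Multiply two secret-shared values using one Beaver triple.
    Only the masked differences d = x - a and e = y - b are ever opened."""
    a_sh, b_sh, c_sh = beaver_triple()
    d = reveal([(x - a) % P for x, a in zip(x_sh, a_sh)])
    e = reveal([(y - b) % P for y, b in zip(y_sh, b_sh)])
    # z = c + d*b + e*a + d*e reconstructs to x*y.
    z_sh = [(c + d * b + e * a) % P for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P      # the public d*e term is added once
    return z_sh

x_sh, y_sh = share(7), share(6)
print(reveal(secure_mul(x_sh, y_sh)))    # 42
```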
arXiv Detail & Related papers (2024-11-14T08:55:14Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Bicoptor 2.0: Addressing Challenges in Probabilistic Truncation for Enhanced Privacy-Preserving Machine Learning [6.733212399517445]
This paper focuses on analyzing the problems and proposing solutions for the probabilistic truncation protocol in existing PPML works.
In terms of accuracy, we reveal that precision selections recommended in some of the existing works are incorrect.
We propose a solution and a precision selection guideline for future works.
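To illustrate the kind of precision issue at stake, here is a small sketch, under assumed toy parameters, of the widely used local (probabilistic) truncation applied after a fixed-point multiplication of additively shared values; it is the generic SecureML-style construction from the literature, not the Bicoptor 2.0 protocol, and the ring size K, fractional precision F, and names are illustrative.

```python
import random

K = 64                     # shares live in the ring Z_{2^K} (illustrative)
F = 16                     # fractional bits of the fixed-point encoding
MOD = 1 << K

def signed(v):
    """Interpret a ring element as a signed integer."""
    return v - MOD if v >= MOD // 2 else v

def share(x):
    """Two-party additive sharing in Z_{2^K}."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def local_trunc(s0, s1):
    """Each party truncates its own share; the sum reconstructs to the
    truncated value up to a +/-1 error, except with small probability
    that grows as the plaintext approaches the ring boundary."""
    t0 = s0 >> F
    t1 = (MOD - ((MOD - s1) >> F)) % MOD
    return t0, t1

x, y = 3.25, -1.5
ex = round(x * (1 << F)) % MOD
ey = round(y * (1 << F)) % MOD
prod = (ex * ey) % MOD                 # fixed-point scale is now 2^(2F)

s0, s1 = share(prod)
t0, t1 = local_trunc(s0, s1)
result = signed((t0 + t1) % MOD) / (1 << F)
print(result)                          # ~ -4.875, up to a 2^-F truncation error
```

Choosing F (or the allowed plaintext magnitude) too aggressively makes the failure probability of this local truncation non-negligible, which is the kind of precision-selection pitfall the entry above refers to.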
arXiv Detail & Related papers (2023-09-10T01:43:40Z)
- Robust and efficient verification of graph states in blind measurement-based quantum computation [52.70359447203418]
Blind quantum computation (BQC) is a secure quantum computation method that protects the privacy of clients.
It is crucial to verify whether the resource graph states are accurately prepared in the adversarial scenario.
Here, we propose a robust and efficient protocol for verifying arbitrary graph states with any prime local dimension.
arXiv Detail & Related papers (2023-05-18T06:24:45Z)
- Finite-Size Security for Discrete-Modulated Continuous-Variable Quantum Key Distribution Protocols [4.58733012283457]
We present a composable finite-size security proof against independently and identically distributed collective attacks for a general DM CV-QKD protocol.
We extend and apply a numerical security proof technique to calculate tight lower bounds on the secure key rate.
Results show that our security proof method yields secure finite-size key rates under experimentally viable conditions up to at least 72 km of transmission distance.
arXiv Detail & Related papers (2023-01-20T17:16:21Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees [123.0401978870009]
We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates.
We benchmark against competing protocols and show the empirical superiority of the proposed protocols.
Our protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to introduce security against a semi-honest server.
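For intuition about bucketing, here is a toy sketch under assumed parameters (not the paper's exact aggregator): client updates are randomly grouped into buckets, each bucket is averaged, and a robust aggregator such as the coordinate-wise median is applied to the bucket means, so a few Byzantine updates are diluted before aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bucketing_robust_agg(updates, bucket_size, aggregator):
    """Randomly group updates into buckets, average each bucket,
    then apply a robust aggregator to the bucket means."""
    idx = rng.permutation(len(updates))
    buckets = [updates[idx[i:i + bucket_size]]
               for i in range(0, len(updates), bucket_size)]
    bucket_means = np.stack([b.mean(axis=0) for b in buckets])
    return aggregator(bucket_means)

# Toy round: 18 honest clients report gradients near the true value,
# 2 Byzantine clients report large outliers.
true_grad = np.array([1.0, -2.0, 0.5])
honest = true_grad + 0.1 * rng.standard_normal((18, 3))
byzantine = np.full((2, 3), 100.0)
updates = np.concatenate([honest, byzantine])

robust = bucketing_robust_agg(updates, bucket_size=4,
                              aggregator=lambda m: np.median(m, axis=0))
print("plain mean     :", updates.mean(axis=0))   # dragged toward the outliers
print("bucketed median:", robust)                 # close to the true gradient
```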
arXiv Detail & Related papers (2022-05-24T04:03:07Z)
- Data post-processing for the one-way heterodyne protocol under composable finite-size security [62.997667081978825]
We study the performance of a practical continuous-variable (CV) quantum key distribution protocol.
We focus on the Gaussian-modulated coherent-state protocol with heterodyne detection in a high signal-to-noise ratio regime.
This allows us to study the performance of practical implementations of the protocol and to optimize the parameters of the post-processing steps.
arXiv Detail & Related papers (2022-05-20T12:37:09Z)
- An Accurate, Scalable and Verifiable Protocol for Federated Differentially Private Averaging [0.0]
We tackle challenges regarding the privacy guarantees provided to participants and the correctness of the computation in the presence of malicious parties.
Our first contribution is a scalable protocol in which participants exchange correlated Gaussian noise along the edges of a network graph.
Our second contribution enables users to prove the correctness of their computations without compromising the efficiency and privacy guarantees of the protocol.
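A toy sketch of the correlated-noise idea follows, with an assumed complete graph and illustrative noise scales (not the paper's parameters): each pair of neighboring users adds cancelling Gaussian terms +Δ and -Δ, so individual reports are heavily masked while the pairwise noise vanishes from the aggregate, leaving only the small independent noise needed for the differential-privacy guarantee.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 6
values = rng.uniform(0, 1, size=n)       # private values, one per user
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # toy: complete graph

sigma_pair = 10.0   # large pairwise noise that cancels in the sum
sigma_ind = 0.01    # small independent noise kept for the privacy guarantee

noisy = values.copy()
for (u, v) in edges:
    delta = rng.normal(0, sigma_pair)
    noisy[u] += delta                    # user u adds +delta
    noisy[v] -= delta                    # user v adds -delta: the pair cancels
noisy += rng.normal(0, sigma_ind, size=n)

print("true average :", values.mean())
print("noisy average:", noisy.mean())    # pairwise terms cancel; only small noise remains
print("one report   :", noisy[0])        # an individual value stays well hidden
```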
arXiv Detail & Related papers (2020-06-12T14:21:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.