Holding Secrets Accountable: Auditing Privacy-Preserving Machine Learning
- URL: http://arxiv.org/abs/2402.15780v2
- Date: Mon, 23 Sep 2024 12:04:43 GMT
- Title: Holding Secrets Accountable: Auditing Privacy-Preserving Machine Learning
- Authors: Hidde Lycklama, Alexander Viand, Nicolas Küchler, Christian Knabenhans, Anwar Hithnawi
- Abstract summary: We introduce Arc, an MPC framework for auditing privacy-preserving machine learning.
At the core of our framework is a new protocol for efficiently verifying MPC inputs against succinct commitments at scale.
We evaluate the performance of our framework when instantiated with our consistency protocol and compare it to hashing-based and homomorphic-commitment-based approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in privacy-preserving machine learning are paving the way to extend the benefits of ML to highly sensitive data that, until now, have been hard to utilize due to privacy concerns and regulatory constraints. Simultaneously, there is a growing emphasis on enhancing the transparency and accountability of machine learning, including the ability to audit ML deployments. While ML auditing and PPML have both been the subjects of intensive research, they have predominantly been examined in isolation. However, their combination is becoming increasingly important. In this work, we introduce Arc, an MPC framework for auditing privacy-preserving machine learning. At the core of our framework is a new protocol for efficiently verifying MPC inputs against succinct commitments at scale. We evaluate the performance of our framework when instantiated with our consistency protocol and compare it to hashing-based and homomorphic-commitment-based approaches, demonstrating that it is up to 10^4x faster and up to 10^6x more concise.
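The abstract compares Arc's consistency protocol to hashing-based commitment approaches. As a minimal sketch of the hashing-based baseline idea only, not Arc's actual protocol, the following Python snippet shows a salted-hash commitment that binds a party to its MPC input before the computation and lets an auditor check consistency afterwards (the function names are our own):

```python
import hashlib
import os

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Create a hiding, binding commitment to `data` via a salted SHA-256 hash."""
    salt = os.urandom(32)
    digest = hashlib.sha256(salt + data).digest()
    return digest, salt  # publish the digest; keep the salt until opening

def verify(commitment: bytes, salt: bytes, data: bytes) -> bool:
    """Check that `data` matches a previously published commitment."""
    return hashlib.sha256(salt + data).digest() == commitment

# A party commits to its input before the MPC runs...
c, s = commit(b"model-weights-v1")
# ...and can later prove it used that same input, but not a different one.
assert verify(c, s, b"model-weights-v1")
assert not verify(c, s, b"tampered-weights")
```

Arc's contribution is precisely that its consistency checks avoid the cost of re-hashing or re-opening large inputs inside MPC, which is what makes the hashing-based baseline expensive at scale.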
Related papers
- Towards Efficient Privacy-Preserving Machine Learning: A Systematic Review from Protocol, Model, and System Perspectives [11.859194469912083]
Privacy-preserving machine learning (PPML) based on cryptographic protocols has emerged as a promising paradigm to protect user data privacy in cloud-based machine learning services.
However, PPML often incurs significant efficiency and scalability costs due to orders-of-magnitude overhead compared to its plaintext counterpart.
arXiv Detail & Related papers (2025-07-19T07:45:39Z)
- Urania: Differentially Private Insights into AI Use [104.7449031243196]
Urania provides end-to-end privacy protection by leveraging DP tools such as clustering, partition selection, and histogram-based summarization.
Results show the framework's ability to extract meaningful conversational insights while maintaining stringent user privacy.
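Urania builds on standard DP primitives. As a generic illustration of one of them, histogram-based summarization via the Laplace mechanism, and not Urania's implementation, a minimal stdlib-only sketch:

```python
import random

def dp_histogram(values, edges, epsilon, rnd=random):
    """Release an epsilon-DP histogram via the Laplace mechanism.

    Each user contributes one value, so each count has L1 sensitivity 1
    and Laplace noise of scale 1/epsilon suffices. Laplace(0, b) is
    sampled as the difference of two Exponential(mean=b) draws.
    """
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    b = 1.0 / epsilon
    return [c + rnd.expovariate(1 / b) - rnd.expovariate(1 / b) for c in counts]
```

Smaller epsilon means more noise per bin; in a system like Urania this primitive would be combined with clustering and private partition selection before any summary is released.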
arXiv Detail & Related papers (2025-06-05T07:00:31Z)
- Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs [58.24692529185971]
We introduce a comprehensive auditing framework for unlearning evaluation comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods.
We evaluate the effectiveness and robustness of different unlearning strategies.
arXiv Detail & Related papers (2025-05-29T09:19:07Z)
- A Survey On Secure Machine Learning [0.0]
Recent advances in secure multiparty computation (MPC) have significantly improved its applicability in the realm of machine learning (ML).
This review explores key contributions that leverage MPC to enable multiple parties to engage in ML tasks without compromising the privacy of their data.
Innovations such as specialized software frameworks and domain-specific languages streamline the adoption of MPC in ML.
arXiv Detail & Related papers (2025-05-21T05:19:45Z)
- Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM).
CEPAM achieves communication efficiency and privacy protection simultaneously.
We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z)
- The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries [14.232901861974819]
Privacy-preserving machine learning (PPML) is an innovative approach that allows for secure data analysis while safeguarding sensitive information.
We introduce an efficient protocol for secure linear function evaluation.
We extend the protocol to handle linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models.
arXiv Detail & Related papers (2024-11-14T08:55:14Z)
- A Pervasive, Efficient and Private Future: Realizing Privacy-Preserving Machine Learning Through Hybrid Homomorphic Encryption [2.434439232485276]
Privacy-Preserving Machine Learning (PPML) methods have been proposed to mitigate the privacy and security risks of ML models.
A modern encryption scheme that combines symmetric cryptography with HE has been introduced to overcome these challenges.
This work introduces HHE to the ML field by proposing resource-friendly PPML protocols for edge devices.
arXiv Detail & Related papers (2024-09-10T11:04:14Z)
- Bridging Privacy and Robustness for Trustworthy Machine Learning [6.318638597489423]
Machine learning systems require inherent robustness against data perturbations and adversarial manipulations.
This paper systematically investigates the intricate theoretical relationship between Local Differential Privacy (LDP) and Maximum Bayesian Privacy (MBP).
We bridge these privacy concepts with algorithmic robustness, particularly within the Probably Approximately Correct (PAC) learning framework.
arXiv Detail & Related papers (2024-03-25T10:06:45Z)
- Wildest Dreams: Reproducible Research in Privacy-preserving Neural Network Training [2.853180143237022]
This work focuses on the ML model's training phase, where maintaining user data privacy is of utmost importance.
We provide a solid theoretical background that eases the understanding of current approaches.
We reproduce results for some of the papers and examine the extent to which existing works in the field support open science.
arXiv Detail & Related papers (2024-03-06T10:25:36Z)
- GuardML: Efficient Privacy-Preserving Machine Learning Services Through Hybrid Homomorphic Encryption [2.611778281107039]
Privacy-Preserving Machine Learning (PPML) methods have been introduced to safeguard the privacy and security of Machine Learning models.
A modern cryptographic scheme, Hybrid Homomorphic Encryption (HHE), has recently emerged.
We develop and evaluate an HHE-based PPML application for classifying heart disease based on sensitive ECG data.
arXiv Detail & Related papers (2024-01-26T13:12:52Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
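The estimate-verify-release control flow can be caricatured in a few lines; this is an illustrative sketch of the gating logic only (names and the verifier interface are our assumptions, not the paper's API):

```python
def estimate_verify_release(mechanism_output, estimated_epsilon, verifier):
    """Toy estimate-verify-release (EVR) flow.

    `verifier` is any procedure that accepts or rejects the claim that the
    mechanism satisfies `estimated_epsilon`-DP; the output is released only
    on acceptance, so an underestimated privacy parameter is never exposed.
    """
    if not verifier(estimated_epsilon):
        raise RuntimeError("privacy estimate rejected; output withheld")
    return mechanism_output

# Release succeeds when the verifier confirms the estimate.
result = estimate_verify_release(42, 1.0, lambda eps: eps >= 0.5)
```

The paper's contribution lies in making the verification step itself a randomized, privacy-preserving test; the sketch above only shows why a rejected estimate blocks the release.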
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are not tight: they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
- Antipodes of Label Differential Privacy: PATE and ALIBI [2.2761657094500682]
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP).
We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework.
We show how to achieve very strong privacy levels in some regimes with our adaptation of the PATE framework.
arXiv Detail & Related papers (2021-06-07T08:14:32Z)
- Machine Learning Force Fields [54.48599172620472]
Machine Learning (ML) has enabled numerous advances in computational chemistry.
One of the most promising applications is the construction of ML-based force fields (FFs).
This review gives an overview of applications of ML-FFs and the chemical insights that can be obtained from them.
arXiv Detail & Related papers (2020-10-14T13:14:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.