PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party
Setting
- URL: http://arxiv.org/abs/2102.09751v1
- Date: Fri, 19 Feb 2021 05:55:53 GMT
- Title: PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party
Setting
- Authors: Ismat Jarin, Birhanu Eshete
- Abstract summary: This paper presents PRICURE, a system that combines complementary strengths of secure multi-party computation and differential privacy.
PRICURE enables privacy-preserving collaborative prediction among multiple model owners.
We evaluate PRICURE on neural networks across four datasets including benchmark medical image classification datasets.
- Score: 3.822543555265593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When multiple parties that deal with private data aim for a collaborative
prediction task such as medical image classification, they are often
constrained by data protection regulations and lack of trust among
collaborating parties. If done in a privacy-preserving manner, predictive
analytics can benefit from the collective prediction capability of multiple
parties holding complementary datasets on the same machine learning task. This
paper presents PRICURE, a system that combines complementary strengths of
secure multi-party computation (SMPC) and differential privacy (DP) to enable
privacy-preserving collaborative prediction among multiple model owners. SMPC
enables secret-sharing of private models and client inputs with non-colluding
secure servers to compute predictions without leaking model parameters and
inputs. DP masks true prediction results via noisy aggregation so as to deter a
semi-honest client who may mount membership inference attacks. We evaluate
PRICURE on neural networks across four datasets including benchmark medical
image classification datasets. Our results suggest PRICURE guarantees privacy
for tens of model owners and clients with acceptable accuracy loss. We also
show that DP reduces membership inference attack exposure without hurting
accuracy.
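To make the interplay between the two building blocks concrete, the following is a minimal Python sketch of the general idea, not the authors' implementation: each model owner additively secret-shares its fixed-point-encoded prediction vector between two non-colluding servers, the servers aggregate the shares locally, and Laplace noise is added to the reconstructed aggregate before the predicted label is released to the client. The two-server setup, fixed-point encoding, noise scale, and all function names are illustrative assumptions.

```python
import numpy as np

MODULUS = 2**32   # ring for additive secret sharing (illustrative choice)
SCALE = 10**6     # fixed-point scaling factor

def encode(float_vec):
    """Fixed-point encode a float vector into the ring Z_MODULUS."""
    return (np.round(float_vec * SCALE).astype(np.int64) % MODULUS).astype(np.uint64)

def decode(ring_vec):
    """Map ring elements back to signed floats."""
    signed = np.where(ring_vec > MODULUS // 2,
                      ring_vec.astype(np.int64) - MODULUS,
                      ring_vec.astype(np.int64))
    return signed / SCALE

def secret_share(ring_vec):
    """Split a ring vector into two additive shares, one per non-colluding server."""
    share0 = np.random.randint(0, MODULUS, size=ring_vec.shape, dtype=np.uint64)
    share1 = (ring_vec - share0) % MODULUS
    return share0, share1

# Hypothetical per-owner prediction vectors (softmax outputs) for one client input.
predictions = [np.array([0.1, 0.7, 0.2]),
               np.array([0.2, 0.6, 0.2]),
               np.array([0.3, 0.5, 0.2])]

server0_acc = np.zeros(3, dtype=np.uint64)
server1_acc = np.zeros(3, dtype=np.uint64)

# SMPC step: each owner secret-shares its prediction, so neither server sees plaintext.
for p in predictions:
    s0, s1 = secret_share(encode(p))
    server0_acc = (server0_acc + s0) % MODULUS
    server1_acc = (server1_acc + s1) % MODULUS

# DP step: reconstruct the aggregate and add Laplace noise before releasing the label.
epsilon = 1.0                                        # assumed privacy budget
aggregate = decode((server0_acc + server1_acc) % MODULUS)
noisy = aggregate + np.random.laplace(scale=1.0 / epsilon, size=aggregate.shape)
print("label released to client:", int(np.argmax(noisy)))
```

The point of the sketch is that neither server ever holds a plaintext model output, and the client only ever observes a noised aggregate rather than any owner's true prediction.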
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Segmented Private Data Aggregation in the Multi-message Shuffle Model [6.436165623346879]
We pioneer the study of segmented private data aggregation within the multi-message shuffle model of differential privacy.
Our framework introduces flexible privacy protection for users and enhanced utility for the aggregation server.
Our framework achieves a reduction of about 50% in estimation error compared to existing approaches.
arXiv Detail & Related papers (2024-07-29T01:46:44Z)
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted shuffler between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the locally randomized data.
This work derives the central privacy bound for a more practical setting in which each user requires a personalized local privacy level (a minimal sketch of the shuffle model itself appears after this list).
arXiv Detail & Related papers (2024-07-25T16:11:56Z)
- Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting (a minimal sketch of one bounded-support variant appears after this list).
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Federated Experiment Design under Distributed Differential Privacy [31.06808163362162]
We focus on the rigorous protection of users' privacy while minimizing the trust toward service providers.
Although a vital component in modern A/B testing, private distributed experimentation has not previously been studied.
We show how these mechanisms can be scaled up to handle the very large number of participants commonly found in practice.
arXiv Detail & Related papers (2023-11-07T22:38:56Z)
- Privacy-Engineered Value Decomposition Networks for Cooperative Multi-Agent Reinforcement Learning [19.504842607744457]
In cooperative multi-agent reinforcement learning, a team of agents must jointly optimize the team's long-term rewards to learn a designated task.
Privacy-Engineered Value Decomposition Networks (PE-VDN) models multi-agent coordination while safeguarding the confidentiality of the agents' environment interaction data.
We implement PE-VDN in the StarCraft Multi-Agent Challenge (SMAC) and show that it achieves 80% of Vanilla VDN's win rate.
arXiv Detail & Related papers (2023-09-13T02:50:57Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies that the mechanism meets the estimated guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
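As referenced in the shuffle-model entry above, here is a minimal Python sketch of the shuffle model of DP itself (not the personalized privacy bound derived in that paper): each user perturbs their own bit locally with randomized response, a trusted shuffler randomly permutes the reports, and the curator debiases the shuffled counts. The local budget, population size, and binary data are illustrative assumptions.

```python
import numpy as np

def randomized_response(bit, epsilon):
    """Local randomizer: report the true bit with probability e^eps / (e^eps + 1)."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if np.random.rand() < p_truth else 1 - bit

def shuffle(reports):
    """Trusted shuffler: a uniformly random permutation hides who sent which report."""
    reports = list(reports)
    np.random.shuffle(reports)
    return reports

# Illustrative run: 1,000 users, each holding one private bit.
true_bits = (np.random.rand(1000) < 0.3).astype(int)   # true data (about 30% ones)
epsilon_local = 1.0                                     # assumed local budget per user

local_reports = [randomized_response(b, epsilon_local) for b in true_bits]
shuffled = shuffle(local_reports)                       # curator only sees this multiset

# Curator debiases the randomized-response counts to estimate the true frequency.
p = np.exp(epsilon_local) / (np.exp(epsilon_local) + 1.0)
observed = np.mean(shuffled)
estimate = (observed - (1 - p)) / (2 * p - 1)
print(f"true frequency {true_bits.mean():.3f}, private estimate {estimate:.3f}")
```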
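For the bounded-support entry above, the following sketch shows one plausible way to bound the support of the Gaussian mechanism, by rectifying (clamping) its output to a known interval. The interval, sensitivity, and noise multiplier are illustrative assumptions, and the data-dependent accounting analysis from that paper is not reproduced here.

```python
import numpy as np

def rectified_gaussian_mechanism(value, sensitivity, sigma, lower, upper):
    """Gaussian mechanism whose output is clamped to [lower, upper] (bounded support)."""
    noisy = value + np.random.normal(scale=sigma * sensitivity)
    return float(np.clip(noisy, lower, upper))

# Illustrative query: a mean statistic over 100 records, known to lie in [0, 1].
true_mean = 0.42
release = rectified_gaussian_mechanism(true_mean, sensitivity=1.0 / 100,
                                        sigma=2.0, lower=0.0, upper=1.0)
print("released value:", release)
```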