Privacy in Practice: Private COVID-19 Detection in X-Ray Images
(Extended Version)
- URL: http://arxiv.org/abs/2211.11434v4
- Date: Wed, 26 Apr 2023 08:49:55 GMT
- Authors: Lucas Lange, Maja Schneider, Peter Christen, Erhard Rahm
- Abstract summary: We create machine learning models that satisfy Differential Privacy (DP).
We evaluate the utility-privacy trade-off more extensively and over stricter privacy budgets.
Our results indicate that needed privacy levels might differ based on the task-dependent practical threat from MIAs.
- Score: 3.750713193320627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) can help fight pandemics like COVID-19 by enabling
rapid screening of large volumes of images. To perform data analysis while
maintaining patient privacy, we create ML models that satisfy Differential
Privacy (DP). Previous works exploring private COVID-19 models are in part
based on small datasets, provide weaker or unclear privacy guarantees, and do
not investigate practical privacy. We suggest improvements to address these
open gaps. We account for inherent class imbalances and evaluate the
utility-privacy trade-off more extensively and over stricter privacy budgets.
Our evaluation is supported by empirically estimating practical privacy through
black-box Membership Inference Attacks (MIAs). The introduced DP should help
limit leakage threats posed by MIAs, and our practical analysis is the first to
test this hypothesis on the COVID-19 classification task. Our results indicate
that needed privacy levels might differ based on the task-dependent practical
threat from MIAs. The results further suggest that with increasing DP
guarantees, empirical privacy leakage only improves marginally, and DP
therefore appears to have a limited impact on practical MIA defense. Our
findings identify possibilities for better utility-privacy trade-offs, and we
believe that empirical attack-specific privacy estimation can play a vital role
in tuning for practical privacy.
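As a concrete illustration of the practical-privacy evaluation, the sketch below implements a loss-threshold black-box MIA, the standard attack family the abstract refers to. The model outputs, array names, and AUC-based scoring are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal loss-threshold black-box membership inference attack (MIA):
# the adversary sees only model confidences and guesses "member" when the
# per-example loss is low. All names here are illustrative placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_example_loss(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Cross-entropy of each sample given predicted class probabilities."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def mia_auc(member_probs, member_labels, nonmember_probs, nonmember_labels):
    """Attack AUC over member/non-member pools: 0.5 = no measurable leakage."""
    losses = np.concatenate([
        per_example_loss(member_probs, member_labels),
        per_example_loss(nonmember_probs, nonmember_labels),
    ])
    is_member = np.concatenate([
        np.ones(len(member_labels)), np.zeros(len(nonmember_labels)),
    ])
    # Lower loss suggests a training member, so negate losses as attack scores.
    return roc_auc_score(is_member, -losses)
```

Running such an attack against models trained under decreasing privacy budgets yields the kind of curve the paper studies: if the attack AUC barely drops as the budget tightens, stronger DP guarantees are buying little additional practical MIA defense on the task.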
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
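The core mechanism behind user-level guarantees can be sketched generically: clip each user's entire contribution rather than each example's, then add noise calibrated to that clip norm. This is a minimal sketch of that standard idea under assumed names and constants, not the paper's fine-tuning algorithm.

```python
# User-level DP aggregation sketch: clip each user's whole update so the
# privacy unit is a user, then add Gaussian noise scaled to the clip norm.
import numpy as np

def user_level_dp_mean(user_updates, clip_norm=1.0, noise_multiplier=1.0,
                       rng=np.random.default_rng(0)):
    """Noisy mean of per-user model updates (one flat vector per user)."""
    clipped = [
        u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
        for u in user_updates
    ]
    total = np.sum(clipped, axis=0)
    # One user changes the sum by at most clip_norm, so noise is calibrated to it.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(user_updates)
```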
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Evaluations of Machine Learning Privacy Defenses are Misleading [25.007083740549845]
Empirical defenses for machine learning privacy forgo the provable guarantees of differential privacy.
We show that prior evaluations fail to characterize the privacy leakage of the most vulnerable samples.
arXiv Detail & Related papers (2024-04-26T13:21:30Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible.
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
- PAC Privacy Preserving Diffusion Models [6.299952353968428]
Diffusion models can produce images with both high privacy and visual quality.
However, challenges remain, such as ensuring robust protection when privatizing specific data attributes.
We introduce the PAC Privacy Preserving Diffusion Model, which leverages diffusion principles to ensure Probably Approximately Correct (PAC) privacy.
arXiv Detail & Related papers (2023-12-02T18:42:52Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
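The three-step flow translates directly into a small control-flow skeleton. In the sketch below, the Laplace mechanism, closed-form estimator, and verifier are toy stand-ins chosen to keep the example self-contained; the paper's actual estimator and verifier are randomized and more sophisticated.

```python
# Schematic estimate-verify-release (EVR) flow with toy stand-in components.
import numpy as np

def estimate_epsilon(sensitivity, scale):
    # Toy estimator: the Laplace mechanism's epsilon is sensitivity / scale.
    return sensitivity / scale

def verify(eps_hat, sensitivity, scale, budget):
    # Toy verifier: check the estimate against the mechanism and the budget.
    # (The real EVR verifier is a statistical test, not a closed form.)
    return sensitivity / scale <= eps_hat <= budget

def evr_release(value, sensitivity=1.0, scale=2.0, budget=1.0,
                rng=np.random.default_rng(0)):
    eps_hat = estimate_epsilon(sensitivity, scale)        # 1. estimate
    if not verify(eps_hat, sensitivity, scale, budget):   # 2. verify
        raise RuntimeError("guarantee not verified; refusing to release")
    return value + rng.laplace(0.0, scale)                # 3. release
```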
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
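The $O(\log M)$ scaling can be made concrete with a standard guessing bound for $\epsilon$-DP mechanisms; the display below is a textbook-style restatement under a uniform prior, not the paper's exact theorem.

```latex
% If the secret X is uniform over M values and the candidate datasets are
% pairwise neighboring, then for any adversary guess \hat{X} computed from
% an \epsilon-DP output:
\Pr[\hat{X} = X] \;\le\; \frac{e^{\epsilon}}{M}.
% Random guessing already achieves 1/M, so the adversary gains significant
% inferential power only once e^{\epsilon} approaches M, i.e. once
% \epsilon \approx \log M, matching the O(\log M) scaling above.
```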
arXiv Detail & Related papers (2022-10-24T23:50:12Z)
- On the Statistical Complexity of Estimation and Testing under Privacy Constraints [17.04261371990489]
We show how to characterize the power of a statistical test under differential privacy in a plug-and-play fashion.
We show that maintaining privacy results in a noticeable reduction in performance only when the level of privacy protection is very high.
Finally, we demonstrate that the DP-SGLD algorithm, a private convex solver, can be employed for maximum likelihood estimation with a high degree of confidence.
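As a rough picture of DP-SGLD, the sketch below shows one clipped Langevin step on a logistic-likelihood objective: clipping bounds per-example sensitivity, and the injected Gaussian noise is the Langevin term. This is a generic formulation under common assumptions, not necessarily the paper's exact variant, and the data and hyperparameters are placeholders.

```python
# One DP-SGLD step: clipped per-example gradients (bounded sensitivity)
# plus Langevin noise, here for logistic-regression maximum likelihood.
import numpy as np

def logistic_grad(theta, x, y):
    """Per-example gradient of the logistic negative log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    return (p - y) * x

def dp_sgld_step(theta, X, y, lr=0.05, clip=1.0,
                 rng=np.random.default_rng(0)):
    grads = []
    for xi, yi in zip(X, y):
        g = logistic_grad(theta, xi, yi)
        g *= min(1.0, clip / max(np.linalg.norm(g), 1e-12))  # clip sensitivity
        grads.append(g)
    grad = np.mean(grads, axis=0)
    # Langevin discretization: theta - lr * grad + sqrt(2 * lr) * noise.
    noise = rng.normal(0.0, np.sqrt(2.0 * lr), size=np.shape(theta))
    return theta - lr * grad + noise
```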
arXiv Detail & Related papers (2022-10-05T12:55:53Z)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet it can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- Privacy-Preserving Distributed Expectation Maximization for Gaussian Mixture Model using Subspace Perturbation [4.2698418800007865]
Federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than private data.
We propose a fully decentralized privacy-preserving solution, which is able to securely compute the updates in each step.
Numerical validation shows that the proposed approach has superior performance compared to the existing approach in terms of both the accuracy and privacy level.
arXiv Detail & Related papers (2022-09-16T09:58:03Z)
- A Critical Review on the Use (and Misuse) of Differential Privacy in Machine Learning [5.769445676575767]
We review the use of differential privacy (DP) for privacy protection in machine learning (ML).
We show that, driven by the aim of preserving the accuracy of the learned models, DP-based ML implementations are so loose that they do not offer the ex ante privacy guarantees of DP.
arXiv Detail & Related papers (2022-06-09T17:13:10Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)