Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models
- URL: http://arxiv.org/abs/2001.00493v1
- Date: Tue, 31 Dec 2019 15:55:03 GMT
- Title: Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models
- Authors: Ruiyuan Gao, Ming Dun, Hailong Yang, Zhongzhi Luan, Depei Qian
- Abstract summary: We present a formal definition of the privacy protection problem in the edge-cloud system running models.
We analyze state-of-the-art methods and point out their drawbacks.
We propose two new metrics that measure the effectiveness of privacy protection methods more accurately.
- Score: 6.902994369582068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The huge computation demand of deep learning models and the
limited computation resources on edge devices call for cooperation between
the edge device and the cloud service by splitting the deep models into two
halves. However, transferring the intermediate results of the partial models
between the edge device and the cloud service makes user privacy vulnerable,
since an attacker can intercept the intermediate results and extract private
information from them. Existing research relies on metrics that are either
impractical or insufficient to measure the effectiveness of privacy
protection methods in the above scenario, especially from the perspective of
a single user. In this paper, we first present a formal definition of the
privacy protection problem in the edge-cloud system running DNN models.
Then, we analyze the state-of-the-art methods and point out their drawbacks,
especially in the evaluation metrics such as Mutual Information (MI). In
addition, we perform several experiments to demonstrate that although
existing methods perform well under MI, they are not effective enough to
protect the privacy of a single user. To address the drawbacks of the
evaluation metrics, we propose two new metrics that measure the
effectiveness of privacy protection methods more accurately. Finally, we
highlight several potential research directions to encourage future efforts
addressing the privacy protection problem.
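The threat model above can be made concrete with a small sketch. The code below is an illustrative toy, not the paper's method: it splits a hypothetical two-layer model (random NumPy weights) at the ReLU into an "edge half" and a "cloud half", marks the intermediate result an eavesdropper could intercept, and estimates the mutual information between one input feature and one intercepted activation with a simple histogram (plug-in) estimator — the kind of MI metric the paper argues is insufficient on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer model split at the ReLU; the weights are random
# placeholders, not the models studied in the paper.
W1 = rng.standard_normal((8, 4))  # edge half
W2 = rng.standard_normal((4, 2))  # cloud half

def edge_half(x):
    """Runs on the edge device; its output crosses the network."""
    return np.maximum(x @ W1, 0.0)

def cloud_half(h):
    """Runs in the cloud on the received intermediate result."""
    return h @ W2

x = rng.standard_normal((1, 8))  # user's private input
intermediate = edge_half(x)      # what an eavesdropper can intercept
y = cloud_half(intermediate)

def mutual_information(a, b, bins=8):
    """Plug-in (histogram) MI estimate in nats between two 1-D samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)  # marginal of a
    py = p.sum(axis=0, keepdims=True)  # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# A positive MI between an input feature and an intercepted activation
# means the intermediate still carries information about the raw input.
X = rng.standard_normal((1000, 8))
mi = mutual_information(X[:, 0], edge_half(X)[:, 0])
```

A low population-level MI like this can still coexist with severe leakage for an individual input, which is the gap the paper's proposed per-user metrics aim to close.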
Related papers
- FT-PrivacyScore: Personalized Privacy Scoring Service for Machine Learning Participation [4.772368796656325]
In practice, controlled data access remains a mainstream method for protecting data privacy in many industrial and research environments.
We developed the demo prototype FT-PrivacyScore to show that it's possible to efficiently and quantitatively estimate the privacy risk of participating in a model fine-tuning task.
arXiv Detail & Related papers (2024-10-30T02:41:26Z)
- Investigating Privacy Attacks in the Gray-Box Setting to Enhance Collaborative Learning Schemes [7.651569149118461]
We study privacy attacks in the gray-box setting, where the attacker has only limited access to the model.
We deploy SmartNNCrypt, a framework that tailors homomorphic encryption to protect the portions of the model posing higher privacy risks.
arXiv Detail & Related papers (2024-09-25T18:49:21Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Hide and Seek (HaS): A Lightweight Framework for Prompt Privacy Protection [6.201275002179716]
We introduce the HaS framework, where "H(ide)" and "S(eek)" represent its two core processes: hiding private entities for anonymization and seeking private entities for de-anonymization.
To quantitatively assess HaS's privacy protection performance, we propose both black-box and white-box adversarial models.
arXiv Detail & Related papers (2023-09-06T14:54:11Z)
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version) [3.750713193320627]
We create machine learning models that satisfy Differential Privacy (DP).
We evaluate the utility-privacy trade-off more extensively and over stricter privacy budgets.
Our results indicate that needed privacy levels might differ based on the task-dependent practical threat from MIAs.
arXiv Detail & Related papers (2022-11-21T13:22:29Z)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy may, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.