Security and Privacy Preserving Deep Learning
- URL: http://arxiv.org/abs/2006.12698v2
- Date: Mon, 29 Jun 2020 09:34:12 GMT
- Title: Security and Privacy Preserving Deep Learning
- Authors: Saichethan Miriyala Reddy and Saisree Miriyala
- Abstract summary: Massive data collection required for deep learning presents obvious privacy issues.
Users' personal, highly sensitive data, such as photos and voice recordings, is kept indefinitely by the companies that collect it.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
- Score: 2.322461721824713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commercial companies that collect user data on a large scale have been the
main beneficiaries of this trend, since the success of deep learning techniques
is directly proportional to the amount of data available for training. The
massive data collection required for deep learning presents obvious privacy
issues. Users' personal, highly sensitive data, such as photos and voice
recordings, is kept indefinitely by the companies that collect it; users can
neither delete it nor restrict the purposes for which it is used. Data privacy
has therefore become a central concern for governments and companies alike.
This raises an interesting tension: on the one hand, we keep pushing for
high-quality models and accessible data, while on the other hand, we must keep
data safe from both intentional and accidental leakage. The more personal the
data, the more restricted its use, which means some of the most important
social problems cannot be addressed with machine learning because researchers
lack access to proper training data. By learning how to do machine learning in
a privacy-preserving way, we can make a real difference on many of these
problems, such as curing disease. Deep neural networks are susceptible to
various inference attacks because they memorize information about their
training data. In this chapter, we introduce differential privacy, which
ensures that different kinds of statistical analyses don't compromise privacy,
and federated learning, which trains a machine learning model on data to which
we do not have access.
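To make the differential privacy idea concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block for answering a numeric query with a formal privacy guarantee. It is our own illustration, not code from the chapter; the function name and toy dataset are hypothetical.

```python
# Minimal sketch of the Laplace mechanism (illustrative only).
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Answer a numeric query with epsilon-differential privacy.

    `sensitivity` is the most the query answer can change when one
    person's record is added or removed; smaller `epsilon` means
    stronger privacy and a noisier answer.
    """
    return true_value + np.random.laplace(scale=sensitivity / epsilon)

# Example: privately release a counting query over a toy dataset.
ages = np.array([23, 35, 41, 29, 52, 60, 31])
true_count = float(np.sum(ages > 30))  # query: how many users are over 30?
print("private release:", laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```

For a counting query, adding or removing one person changes the answer by at most 1, so the sensitivity is 1 and the noise scale is simply 1/epsilon.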
Related papers
- FT-PrivacyScore: Personalized Privacy Scoring Service for Machine Learning Participation [4.772368796656325]
In practice, controlled data access remains a mainstream method for protecting data privacy in many industrial and research environments.
We developed the demo prototype FT-PrivacyScore to show that it's possible to efficiently and quantitatively estimate the privacy risk of participating in a model fine-tuning task.
arXiv Detail & Related papers (2024-10-30T02:41:26Z)
- Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey [27.859861825159342]
Deep learning has shown incredible potential across a vast array of tasks.
Recent privacy concerns have further highlighted the challenges of accessing such data.
Federated learning has emerged as an important privacy-preserving technology.
arXiv Detail & Related papers (2024-05-06T16:55:20Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and sharing.
Overcoming these obstacles is key to technological progress in many real-world application scenarios that involve privacy-sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released; a toy sketch of such a sanitized release follows this entry.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
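As a toy illustration of DP data publishing (our own sketch, not code from the paper above), the snippet below releases a sanitized histogram of a sensitive attribute: Laplace noise calibrated to the count sensitivity is added to every bin before release. The function name and data are hypothetical.

```python
# Toy differentially private data publishing: release a noisy histogram
# instead of the raw records. Illustrative only.
import numpy as np

def dp_histogram(values, bins, epsilon):
    """Publish a sanitized histogram of `values`.

    Adding or removing one record changes one bin count by at most 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP.
    """
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
    # Rounding and clipping are post-processing, which costs no extra privacy.
    return np.clip(np.round(noisy), 0, None), edges

incomes = np.random.default_rng(1).lognormal(mean=10, sigma=0.5, size=1000)
sanitized, edges = dp_histogram(incomes, bins=10, epsilon=0.5)
print("published bin counts:", sanitized)
```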
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- Certified Data Removal in Sum-Product Networks [78.27542864367821]
Deleting the collected data is often insufficient to guarantee data privacy.
UnlearnSPN is an algorithm that removes the influence of single data points from a trained sum-product network.
arXiv Detail & Related papers (2022-10-04T08:22:37Z)
- The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable to membership inference exposes a new layer of previously-safe points to the same attack.
This suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users; a toy membership-inference sketch follows this entry.
arXiv Detail & Related papers (2022-06-21T15:25:56Z)
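To make the memorization attack concrete, here is a minimal loss-threshold membership inference sketch, a simple stand-in for the stronger attacks studied in the paper above; the code, model, and data are our own illustration. Points the model fits unusually well are guessed to be training members.

```python
# Minimal loss-threshold membership inference sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the trained model."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

# Members (training points) tend to have lower loss than non-members.
train_loss = per_example_loss(model, X_train, y_train)
test_loss = per_example_loss(model, X_test, y_test)
threshold = np.median(np.concatenate([train_loss, test_loss]))

# Balanced attack accuracy: guess "member" whenever loss is below threshold.
acc = 0.5 * (np.mean(train_loss < threshold) + np.mean(test_loss >= threshold))
print(f"attack accuracy (0.5 = chance): {acc:.2f}")
```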
- Privacy in Deep Learning: A Survey [16.278779275923448]
The ever-growing advances of deep learning in many areas have led to the adoption of Deep Neural Networks (DNNs) in production systems.
The availability of large datasets and high computational power are the main contributors to these advances.
This poses serious privacy concerns as this data can be misused or leaked through various vulnerabilities.
arXiv Detail & Related papers (2020-04-25T23:47:25Z)
- A Review of Privacy-preserving Federated Learning for the Internet-of-Things [3.3517146652431378]
This work reviews federated learning as an approach for performing machine learning on distributed data.
We aim to protect the privacy of user-generated data and to reduce the communication costs associated with data transfer; a minimal federated averaging sketch follows this entry.
We identify the strengths and weaknesses of different methods applied to federated learning.
arXiv Detail & Related papers (2020-04-24T15:27:23Z)
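As a concrete complement to the federated learning entries above, here is a minimal federated averaging (FedAvg) sketch on a toy linear regression task. It is our own illustration, not code from any paper listed: each client takes a few local gradient steps on its private data, and the server only ever averages model weights.

```python
# Minimal FedAvg sketch on toy linear regression (illustrative only).
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's private data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w_global, clients, lr=0.1, local_steps=5):
    """Each client trains locally; only model weights leave the device."""
    local_weights = []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(local_steps):
            w = local_sgd_step(w, X, y, lr)
        local_weights.append(w)
    # The server averages the clients' weights; it never sees raw data.
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # four clients, each with a small private dataset
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # 20 communication rounds
    w = federated_round(w, clients)
print("learned weights:", w, " true weights:", w_true)
```

Note that plain FedAvg reduces data exposure but is not differentially private on its own; the weight updates can still leak information unless noise is added, as in DP-SGD.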
This list is automatically generated from the titles and abstracts of the papers in this site.