Learning With Differential Privacy
- URL: http://arxiv.org/abs/2006.05609v2
- Date: Thu, 11 Jun 2020 14:11:44 GMT
- Title: Learning With Differential Privacy
- Authors: Poushali Sengupta, Sudipta Paul, Subhankar Mishra
- Abstract summary: Differential privacy comes to the rescue with a formal promise of protection against leakage.
It applies a randomized response technique at the time of data collection, which promises strong privacy with better utility.
- Score: 3.618133010429131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The leakage of data can have an extreme effect at the personal level
if it contains sensitive information. Common prevention methods such as
encryption-decryption, endpoint protection, and intrusion detection systems are
prone to leakage. Differential privacy comes to the rescue with a proper
promise of protection against leakage, as it uses a randomized response
technique at the time of data collection, which promises strong privacy
with better utility. Differential privacy allows one to access the forest of
data by describing its pattern of groups without disclosing any individual
trees. The current adoption of differential privacy by leading tech companies
and academia encourages the authors to explore the topic in detail. The
different aspects of differential privacy, its application in privacy
protection and leakage of information, a comparative discussion of current
research approaches in this field, and its utility in the real world as well
as the trade-offs will be discussed.
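The randomized response technique the abstract refers to can be illustrated with a minimal sketch. This is not the paper's own implementation; the function names and the `p_truth` parameter are illustrative, and the classic coin-flip variant (Warner, 1965) is assumed:

```python
import random

def randomized_response(true_answer, p_truth=0.75):
    """Report the true answer with probability p_truth; otherwise report
    a uniformly random answer. The effective truth-telling probability is
    p = p_truth + (1 - p_truth) / 2, giving epsilon-DP with
    epsilon = ln(p / (1 - p))."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_proportion(responses, p_truth=0.75):
    """Debias the noisy responses to recover the population proportion pi.
    With p = p_truth + (1 - p_truth)/2, E[observed] = pi*p + (1 - pi)*(1 - p),
    so pi = (observed - (1 - p)) / (2p - 1)."""
    p = p_truth + (1 - p_truth) / 2
    observed = sum(responses) / len(responses)
    return (observed - (1 - p)) / (2 * p - 1)
```

This shows the "strong privacy with better utility" trade-off in miniature: each individual's reported answer is plausibly deniable, yet the aggregate proportion remains recoverable from a large enough sample.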
Related papers
- $\alpha$-Mutual Information: A Tunable Privacy Measure for Privacy Protection in Data Sharing [4.475091558538915]
This paper adopts Arimoto's $\alpha$-Mutual Information as a tunable privacy measure.
We formulate a general distortion-based mechanism that manipulates the original data to offer privacy protection.
arXiv Detail & Related papers (2023-10-27T16:26:14Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
It suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
arXiv Detail & Related papers (2022-06-21T15:25:56Z)
- HyObscure: Hybrid Obscuring for Privacy-Preserving Data Publishing [7.554593344695387]
Minimizing privacy leakage while ensuring data utility is a critical problem to data holders in a privacy-preserving data publishing task.
Most prior research concerns only one type of data and resorts to a single obscuring method.
This work takes a pilot study on privacy-preserving data publishing when both generalization and obfuscation operations are employed.
arXiv Detail & Related papers (2021-12-15T03:04:00Z)
- "I need a better description": An Investigation Into User Expectations For Differential Privacy [31.352325485393074]
We explore users' privacy expectations related to differential privacy.
We find that users care about the kinds of information leaks against which differential privacy protects.
We find that the ways in which differential privacy is described in-the-wild haphazardly set users' privacy expectations.
arXiv Detail & Related papers (2021-10-13T02:36:37Z)
- Swarm Differential Privacy for Purpose Driven Data-Information-Knowledge-Wisdom Architecture [2.38142799291692]
We will explore the privacy protection of the broad Data-Information-Knowledge-Wisdom (DIKW) landscape.
As differential privacy proved to be an effective data privacy approach, we will look at it from a DIKW domain perspective.
Swarm Intelligence could effectively optimize and reduce the number of items in DIKW used in differential privacy.
arXiv Detail & Related papers (2021-05-09T23:09:07Z)
- Privacy Information Classification: A Hybrid Approach [9.642559585173517]
This study proposes and develops a hybrid privacy classification approach to detect and classify privacy information from online social networks (OSNs).
The proposed hybrid approach employs both deep learning models and ontology-based models for privacy-related information extraction.
arXiv Detail & Related papers (2021-01-27T18:03:18Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
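The two ingredients named above, gradient clipping and noise addition, can be sketched as a single noisy aggregation step. This is a simplified illustration in the spirit of DP-SGD, not the paper's code; the function and parameter names are our own:

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Aggregate a batch of per-example gradients with the two DP-SGD
    ingredients: clip each example's gradient to bound its influence
    (sensitivity), then add Gaussian noise calibrated to that bound."""
    rng = np.random.default_rng() if rng is None else rng
    n = per_example_grads.shape[0]
    # 1) Clip: rescale any per-example gradient whose L2 norm exceeds clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors
    # 2) Noise: Gaussian noise scaled to the sensitivity of the sum, clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    # 3) Average the noisy sum over the batch.
    return (clipped.sum(axis=0) + noise) / n
```

Clipping distorts large (often outlier) gradients and the added noise perturbs every update, which is one intuition for why such training can trade robustness for privacy.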
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.