Local Privacy-preserving Mechanisms and Applications in Machine Learning
- URL: http://arxiv.org/abs/2401.13692v1
- Date: Mon, 8 Jan 2024 22:29:00 GMT
- Title: Local Privacy-preserving Mechanisms and Applications in Machine Learning
- Authors: Likun Qin, Tianshuo Qiu
- Abstract summary: Local Differential Privacy (LDP) provides strong privacy protection for individual users during the stages of data collection and processing.
One of the major applications of the privacy-preserving mechanisms is machine learning.
- Score: 0.21268495173320798
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence and evolution of Local Differential Privacy (LDP) and its various adaptations play a pivotal role in tackling privacy issues related to the vast amounts of data generated by intelligent devices, which are crucial for data-informed decision-making in the realm of crowdsensing. Utilizing these extensive datasets can provide critical insights but also introduces substantial privacy concerns for the individuals involved. LDP, noted for its decentralized framework, excels in providing strong privacy protection for individual users during the stages of data collection and processing. The core principle of LDP lies in altering each user's data locally on the client side before it is sent to the server, thus preventing privacy violations at both stages. Many LDP variants have been proposed in the privacy research community to improve the utility-privacy tradeoff. On the other hand, one of the major applications of privacy-preserving mechanisms is machine learning. In this paper, we first present a comprehensive analysis of LDP and its variants, focusing on their models, the diverse range of adaptations, and the underlying structure of the privacy mechanisms; we then discuss state-of-the-art applications of privacy mechanisms in machine learning.
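The perturb-locally-before-sending principle described in the abstract is easiest to see in Warner's randomized response, the classic LDP mechanism for a single binary attribute. The sketch below is a minimal illustration of that mechanism in Python, not code from the paper:

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report the true bit with probability p = e^eps / (e^eps + 1),
    otherwise flip it; this randomizer satisfies epsilon-LDP."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p else 1 - true_bit

# Each client perturbs locally; the server only ever sees noisy reports.
epsilon = 1.0
reports = [randomized_response(bit, epsilon) for bit in (0, 1, 1, 0, 1)]

# Server-side debiasing: an unbiased estimate of the true frequency of 1s.
p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
freq_estimate = (sum(reports) / len(reports) + p - 1.0) / (2.0 * p - 1.0)
```

Because the guarantee is enforced on the client, it holds regardless of what the server does, which is the decentralized strength the abstract highlights.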
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy, which allows for controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
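As a rough illustration of this selective idea (an assumed reading, not the paper's actual mechanism), one could confine Laplace noise to a sensitivity mask:

```python
import numpy as np

def masked_laplace(x: np.ndarray, sensitive_mask: np.ndarray,
                   epsilon: float, sensitivity: float = 1.0) -> np.ndarray:
    """Perturb only the entries flagged as sensitive; non-sensitive
    regions pass through untouched (hypothetical 'masked DP' sketch)."""
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=x.shape)
    return np.where(sensitive_mask, x + noise, x)
```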
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
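For reference, the 'almost indistinguishable' guarantee is the standard (epsilon, delta)-DP inequality; under user-level DP, the neighboring datasets differ in all records contributed by one user rather than in a single record (standard notation, not taken from the paper):

```latex
% (epsilon, delta)-DP: for all neighboring datasets D, D' and all
% measurable output sets S of the mechanism M,
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```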
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - A Learning-based Declarative Privacy-Preserving Framework for Federated Data Management [23.847568516724937]
We introduce a new privacy-preserving technique that uses a deep learning model trained with the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm.
We then demonstrate a novel declarative privacy-preserving workflow that allows users to specify "what private information to protect" rather than "how to protect" it.
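Since the entry above trains its model with DP-SGD, a minimal sketch of the core clip-and-noise update may help; the function and its hyperparameters are illustrative, not the paper's implementation:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clipping bound."""
    batch = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch,
                             size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```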
arXiv Detail & Related papers (2024-01-22T22:50:59Z) - Using Decentralized Aggregation for Federated Learning with Differential Privacy [0.32985979395737774]
Federated Learning (FL) provides some level of privacy by retaining the data at the local node.
This research deploys an experimental environment for FL with Differential Privacy (DP) using benchmark datasets.
arXiv Detail & Related papers (2023-11-27T17:02:56Z) - Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework [6.828884629694705]
This article proposes the conceptual model called PrivChatGPT, a privacy-generative model for LLMs.
PrivChatGPT consists of two main components, i.e., preserving user privacy during data curation/pre-processing together with preserving private context, and a private training process for large-scale data.
arXiv Detail & Related papers (2023-10-19T06:55:13Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - Production of Categorical Data Verifying Differential Privacy: Conception and Applications to Machine Learning [0.0]
Differential privacy is a formal definition that allows quantifying the privacy-utility trade-off.
With the local DP (LDP) model, users can sanitize their data locally before transmitting it to the server.
In all cases, we concluded that differentially private ML models achieve nearly the same utility metrics as non-private ones.
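In the local model mentioned above, the DP constraint is placed on each user's randomizer R rather than on a central curator (standard notation, not taken from the paper):

```latex
% epsilon-LDP: for every pair of inputs v, v' and every output y of the
% local randomizer R,
\Pr[R(v) = y] \le e^{\varepsilon} \, \Pr[R(v') = y]
```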
arXiv Detail & Related papers (2022-04-02T12:50:14Z) - PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework of Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
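A generic adversarial representation-learning objective of the kind PCAL describes can be written as follows, where E is the encoder, T the task predictor, A the adversary trying to recover the sensitive attribute s, and lambda trades utility against privacy (a standard form, not necessarily the paper's exact loss):

```latex
\min_{E,\,T} \; \max_{A} \;\;
  \mathcal{L}_{\text{task}}\bigl(T(E(x)),\, y\bigr)
  \;-\; \lambda \, \mathcal{L}_{\text{priv}}\bigl(A(E(x)),\, s\bigr)
```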
arXiv Detail & Related papers (2020-10-06T07:04:59Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
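For orientation, JDP roughly requires that the joint output delivered to all agents other than i be differentially private with respect to agent i's data (an informal statement in standard notation, not taken from the paper):

```latex
% D and D'_i differ only in agent i's data; M(D)_{-i} denotes the
% outputs sent to all agents other than i.
\Pr\bigl[M(D)_{-i} \in S\bigr] \le e^{\varepsilon} \,
\Pr\bigl[M(D'_i)_{-i} \in S\bigr]
```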
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.