More Than Privacy: Applying Differential Privacy in Key Areas of
Artificial Intelligence
- URL: http://arxiv.org/abs/2008.01916v1
- Date: Wed, 5 Aug 2020 03:07:36 GMT
- Title: More Than Privacy: Applying Differential Privacy in Key Areas of
Artificial Intelligence
- Authors: Tianqing Zhu and Dayong Ye and Wei Wang and Wanlei Zhou and Philip S.
Yu
- Abstract summary: We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
- Score: 62.3133247463974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) has attracted a great deal of attention in
recent years. However, alongside all its advancements, problems have also
emerged, such as privacy violations, security issues and model fairness.
Differential privacy, as a promising mathematical model, has several attractive
properties that can help solve these problems, making it quite a valuable tool.
For this reason, differential privacy has been broadly applied in AI. To date,
however, no study has documented which differential privacy mechanisms can, or
have, been leveraged to overcome these issues, or the properties that make this
possible. In this paper, we show that differential privacy can do more than
just privacy preservation. It can also be used to improve security, stabilize
learning, build fair models, and impose composition in selected areas of AI.
With a focus on regular machine learning, distributed machine learning, deep
learning, and multi-agent systems, the purpose of this article is to deliver a
new view on many possibilities for improving AI performance with differential
privacy techniques.
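The two properties this abstract leans on most, noise calibrated to a privacy budget and the composition of budgets across releases, can be made concrete with a short sketch. The following Python example is ours, not the paper's: the counting query, toy dataset, and epsilon values are purely illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    rng = np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
records = [1, 0, 1, 1, 0, 1]  # toy dataset
true_count = sum(records)

# Two releases at epsilon = 0.5 each; by sequential composition the pair
# together satisfies (0.5 + 0.5) = 1.0-differential privacy.
release_1 = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
release_2 = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Composition is what lets the surveyed methods account for many private steps (for example, many noisy gradient updates) under one overall budget.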
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data, allowing non-sensitive spatio-temporal regions to be defined without DP application, or differential privacy to be combined with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
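The selective noising that masked DP describes can be pictured roughly as follows. This sketch is our illustration, not the paper's method: the boolean mask, the Laplace mechanism, and all parameter values are assumptions.

```python
import numpy as np

def masked_dp_release(image, sensitive_mask, sensitivity, epsilon):
    """Add Laplace noise only where sensitive_mask is True; regions marked
    non-sensitive are released unchanged (illustrative sketch only)."""
    rng = np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=image.shape)
    return np.where(sensitive_mask, image + noise, image)

# Toy usage: protect a hypothetical sensitive patch of a 4x4 "image".
image = np.arange(16.0).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # hypothetical sensitive region
private_image = masked_dp_release(image, mask, sensitivity=1.0, epsilon=1.0)
```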
- A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Recent research has found potential privacy leakage in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z)
- Privacy-Preserving Distributed Optimization and Learning [2.1271873498506038]
We discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation.
We introduce several differential-privacy algorithms that can simultaneously ensure privacy and optimization accuracy.
We provide example applications in several machine learning problems to confirm the real-world effectiveness of these algorithms.
arXiv Detail & Related papers (2024-02-29T22:18:05Z)
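The entry above pairs differential privacy with distributed optimization. A generic noisy-gradient step of the kind such algorithms build on might look like the sketch below; the clipping bound, Gaussian noise scale, and learning rate are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def dp_gradient_step(params, per_worker_grads, lr=0.1,
                     clip_norm=1.0, noise_std=0.5):
    """One illustrative differentially private update: clip each worker's
    gradient to bound its influence, average, add Gaussian noise, and take
    a descent step. (Sketch only; not the paper's specific algorithms.)"""
    rng = np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_worker_grads]
    noise = rng.normal(0.0, noise_std * clip_norm / len(clipped),
                       size=np.shape(params))
    return params - lr * (np.mean(clipped, axis=0) + noise)

# Toy usage with two workers and a 3-parameter model.
params = np.zeros(3)
grads = [np.array([0.4, -0.2, 0.1]), np.array([2.0, 1.0, -1.0])]
params = dp_gradient_step(params, grads)
```

Clipping is what makes the noise scale meaningful: it caps how much any one worker's data can move the average, so calibrated noise can hide that influence.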
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, 39% and 57% of the time for the two models evaluated.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- A Survey on Differential Privacy with Machine Learning and Future Outlook [0.0]
Differential privacy is used to protect machine learning models from attacks and vulnerabilities.
This survey paper presents differentially private machine learning algorithms grouped into two main categories.
arXiv Detail & Related papers (2022-11-19T14:20:53Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
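Per-attribute guarantees, as in the entry above, can be pictured as spending a separate privacy budget on each field of a record. A minimal sketch, with the record layout, sensitivities, and budgets chosen purely for illustration:

```python
import numpy as np

def per_attribute_release(record, sensitivities, epsilons):
    """Noise each attribute with its own Laplace scale, so attribute i
    carries its own epsilons[i] privacy parameter (illustrative sketch)."""
    rng = np.random.default_rng()
    return [value + rng.laplace(0.0, s / eps)
            for value, s, eps in zip(record, sensitivities, epsilons)]

# Hypothetical record (age, income); income gets the tighter budget.
noisy_record = per_attribute_release(record=[34.0, 52000.0],
                                     sensitivities=[1.0, 1000.0],
                                     epsilons=[1.0, 0.2])
```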
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are among the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy [33.574715000662316]
We show that the choice of activation function is central to bounding the sensitivity of privacy-preserving deep learning.
We achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the learning procedure fundamentals.
arXiv Detail & Related papers (2020-07-28T13:19:45Z)
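The bounded activations in the entry above come from the tempered sigmoid family, commonly written as phi(x) = s * sigmoid(T * x) - o with scale s, inverse temperature T, and offset o; tanh is the special case s=2, T=2, o=1. A short sketch (the parameter names are ours):

```python
import numpy as np

def tempered_sigmoid(x, s=2.0, T=2.0, o=1.0):
    """Tempered sigmoid: s * sigmoid(T * x) - o. Outputs are bounded in
    (-o, s - o), which keeps activation magnitudes, and hence the gradient
    sensitivity that DP noise must cover, under control."""
    return s / (1.0 + np.exp(-T * x)) - o

x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(tempered_sigmoid(x), np.tanh(x))  # tanh special case
```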