UN Handbook on Privacy-Preserving Computation Techniques
- URL: http://arxiv.org/abs/2301.06167v1
- Date: Sun, 15 Jan 2023 19:43:40 GMT
- Title: UN Handbook on Privacy-Preserving Computation Techniques
- Authors: David W. Archer, Borja de Balle Pigem, Dan Bogdanov, Mark Craddock,
Adria Gascon, Ronald Jansen, Matjaž Jug, Kim Laine, Robert McLellan, Olga
Ohrimenko, Mariana Raykova, Andrew Trask, Simon Wardley
- Abstract summary: This paper describes privacy-preserving approaches for the statistical analysis of sensitive data.
The information in this document is intended for use by statisticians and data scientists, data curators and architects, IT specialists, and security and information assurance specialists.
- Score: 14.63213847614646
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper describes privacy-preserving approaches for the statistical
analysis of sensitive data. It describes motivations for such approaches, presents
examples of use cases where these methods may apply, and describes relevant
technical capabilities that assure privacy preservation while still allowing
analysis of the sensitive data. Our focus
is on methods that enable protecting privacy of data while it is being
processed, not only while it is at rest on a system or in transit between
systems. The information in this document is intended for use by statisticians
and data scientists, data curators and architects, IT specialists, and security
and information assurance specialists, so we explicitly avoid cryptographic
technical details of the technologies we describe.
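As a concrete, if simplified, illustration of what "protecting privacy of data while it is being processed" can mean in practice, the sketch below shows additive secret sharing, one common building block of secure multiparty computation. It is not taken from the handbook; the field size, party count, and toy income values are assumptions chosen for the example.

```python
# Illustrative sketch only: additive secret sharing over a prime field.
# PRIME, num_parties, and the toy incomes are assumptions for this example.
import secrets

PRIME = 2**61 - 1  # a Mersenne prime large enough for the toy values below


def share(value: int, num_parties: int) -> list[int]:
    """Split `value` into additive shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(num_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]


def reconstruct(shares: list[int]) -> int:
    """Recover a shared value by summing all of its shares mod PRIME."""
    return sum(shares) % PRIME


# Three data owners secret-share their values; each computing party only ever
# holds one share per owner, so no single party learns an individual value.
incomes = [52_000, 61_500, 47_250]
shares_per_owner = [share(v, num_parties=3) for v in incomes]

# Each party locally adds the shares it holds (a column-wise sum) ...
party_sums = [sum(col) % PRIME for col in zip(*shares_per_owner)]

# ... and only the combined result is opened: the total, not the inputs.
total = reconstruct(party_sums)
assert total == sum(incomes)
print("aggregate income:", total)
```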
Related papers
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice"
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
arXiv Detail & Related papers (2024-11-07T13:52:11Z)
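As a concrete companion to the differential privacy overview above, here is a minimal sketch of the Laplace mechanism for a counting query. It is an assumed illustration, not material from the chapter; the epsilon, seed, and toy records are invented.

```python
# Minimal Laplace-mechanism sketch (assumed example, not from the chapter):
# a counting query has L1 sensitivity 1, so adding Laplace(1/epsilon) noise
# to the released count satisfies epsilon-differential privacy.
import numpy as np


def dp_count(records: list[bool], epsilon: float, rng: np.random.Generator) -> float:
    """Return a noisy count of True records under epsilon-DP."""
    sensitivity = 1.0  # adding or removing one person changes the count by at most 1
    true_count = sum(records)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise


rng = np.random.default_rng(seed=0)
has_condition = [True, False, True, True, False, False, True]  # toy sensitive data
print(dp_count(has_condition, epsilon=1.0, rng=rng))
```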
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
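The idea of applying noise only where data is sensitive can be sketched as a toy example. This is not the paper's masked-DP method; the signal, mask, and epsilon below are assumptions, and the sketch simply restricts Laplace noise to the entries flagged as sensitive.

```python
# Toy illustration (not the paper's method): add Laplace noise only to entries
# flagged as sensitive, leaving non-sensitive regions untouched. Note that the
# formal privacy guarantee then only covers the masked (sensitive) entries.
import numpy as np


def selective_laplace(signal: np.ndarray, sensitive_mask: np.ndarray,
                      epsilon: float, sensitivity: float = 1.0) -> np.ndarray:
    """Perturb only the entries where sensitive_mask is True."""
    rng = np.random.default_rng()
    noisy = signal.astype(float)
    noise = rng.laplace(0.0, sensitivity / epsilon, size=int(sensitive_mask.sum()))
    noisy[sensitive_mask] += noise
    return noisy


signal = np.array([3.0, 5.0, 2.0, 8.0, 1.0, 4.0])            # assumed values
sensitive_mask = np.array([False, True, True, False, False, True])
print(selective_laplace(signal, sensitive_mask, epsilon=0.5))
```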
- Balancing Innovation and Privacy: Data Security Strategies in Natural Language Processing Applications [3.380276187928269]
This research addresses privacy protection in Natural Language Processing (NLP) by introducing a novel algorithm based on differential privacy.
By introducing a differential privacy mechanism, our model adds random noise while still ensuring the accuracy and reliability of data analysis results.
The proposed algorithm's efficacy is demonstrated through performance metrics such as accuracy (0.89), precision (0.85), and recall (0.88).
arXiv Detail & Related papers (2024-10-11T06:05:10Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Synergizing Privacy and Utility in Data Analytics Through Advanced Information Theorization [2.28438857884398]
We introduce three sophisticated algorithms: a Noise-Infusion Technique tailored for high-dimensional image data, a Variational Autoencoder (VAE) for robust feature extraction and an Expectation Maximization (EM) approach optimized for structured data privacy.
Our methods significantly reduce mutual information between sensitive attributes and transformed data, thereby enhancing privacy.
The research contributes to the field by providing a flexible and effective strategy for deploying privacy-preserving algorithms across various data types.
arXiv Detail & Related papers (2024-04-24T22:58:42Z)
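The mutual-information framing above can be illustrated with a small assumed example: measure the mutual information between a sensitive attribute and a release, before and after randomization. The data and flip probability are invented, and the use of scikit-learn's mutual_info_score is an assumption for the illustration, not one of the paper's three algorithms.

```python
# Toy sketch (assumed example): quantify how randomizing a release reduces
# the mutual information it carries about a sensitive attribute.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(seed=1)

# A binary sensitive attribute and a release that initially copies it exactly.
sensitive = rng.integers(0, 2, size=10_000)     # e.g. a health flag
released_raw = sensitive.copy()                 # worst case: full disclosure

# Randomize before publishing: flip each bit with probability 0.3.
flips = rng.random(10_000) < 0.3
released_noisy = np.where(flips, 1 - released_raw, released_raw)

print("MI, raw release:  ", mutual_info_score(sensitive, released_raw))
print("MI, noisy release:", mutual_info_score(sensitive, released_noisy))
```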
- A Summary of Privacy-Preserving Data Publishing in the Local Setting [0.6749750044497732]
Statistical Disclosure Control aims to minimize the risk of exposing confidential information by de-identifying it.
We outline the current privacy-preserving techniques employed in microdata de-identification, delve into privacy measures tailored for various disclosure scenarios, and assess metrics for information loss and predictive performance.
arXiv Detail & Related papers (2023-12-19T04:23:23Z)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- An Overview of Privacy-enhancing Technologies in Biometric Recognition [12.554656658516262]
This work provides an overview of concepts of privacy-enhancing technologies for biometrics in a unified framework.
Fundamental properties and limitations of existing approaches are discussed and related to data protection techniques and principles.
This paper is meant as a point of entry to the field of biometric data protection and is directed towards experienced researchers as well as non-experts.
arXiv Detail & Related papers (2022-06-21T15:21:29Z)
- Semantics-Preserved Distortion for Personal Privacy Protection in Information Management [65.08939490413037]
This paper suggests a linguistically-grounded approach to distort texts while maintaining semantic integrity.
We present two distinct frameworks for semantic-preserving distortion: a generative approach and a substitutive approach.
We also explore privacy protection in a specific medical information management scenario, showing our method effectively limits sensitive data memorization.
arXiv Detail & Related papers (2022-01-04T04:01:05Z)
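As a rough, assumed illustration of a substitutive distortion (not the paper's generative or substitutive frameworks), the sketch below swaps invented sensitive terms for more generic phrases while keeping the sentence readable. The substitution table is made up for this example.

```python
# Toy illustration only: replace assumed sensitive terms with more generic
# phrases so the text stays readable while concrete personal details are
# blurred. The generalization table below is invented for this example.
import re

GENERALIZATIONS = {
    r"\bdiabetes\b": "a chronic condition",
    r"\bLondon\b": "a large city",
    r"\b47-year-old\b": "middle-aged",
}


def substitutive_distortion(text: str) -> str:
    """Apply the generalization table to blur sensitive details in the text."""
    for pattern, replacement in GENERALIZATIONS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text


note = "The 47-year-old patient from London was treated for diabetes."
print(substitutive_distortion(note))
# -> "The middle-aged patient from a large city was treated for a chronic condition."
```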
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
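To make the federated-learning setting in the last entry concrete, here is a minimal, assumed sketch of federated averaging for a one-dimensional linear model: clients share only model weights, never raw data. The client data, learning rate, and round count are invented for illustration, and, as the entry cautions, weight sharing alone carries no formal privacy guarantee.

```python
# Minimal federated-averaging sketch (assumed example): clients fit a shared
# linear model on local data and send only model weights, never raw records.
# Note the entry's caveat: this alone is not a formal privacy guarantee.
import numpy as np

rng = np.random.default_rng(seed=2)


def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One client's local gradient-descent steps on least squares y ~ X @ w."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


# Three clients with private data drawn from the same model y = 3x + 1.
clients = []
for _ in range(3):
    x = rng.normal(size=(50, 1))
    X = np.hstack([x, np.ones((50, 1))])          # features: [x, bias]
    y = 3 * x[:, 0] + 1 + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

weights = np.zeros(2)
for _ in range(20):                               # communication rounds
    local_weights = [local_update(weights, X, y) for X, y in clients]
    weights = np.mean(local_weights, axis=0)      # server averages the updates

print("learned [slope, intercept]:", np.round(weights, 2))  # close to [3, 1]
```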
This list is automatically generated from the titles and abstracts of the papers indexed on this site.