Differentially Private M-band Wavelet-Based Mechanisms in Machine
Learning Environments
- URL: http://arxiv.org/abs/2001.00012v2
- Date: Tue, 18 Feb 2020 20:28:25 GMT
- Title: Differentially Private M-band Wavelet-Based Mechanisms in Machine
Learning Environments
- Authors: Kenneth Choi and Tony Lee
- Abstract summary: We develop three privacy-preserving mechanisms with the discrete M-band wavelet transform that embed noise into data.
We show that our mechanisms successfully retain both differential privacy and learnability through statistical analysis in various machine learning environments.
- Score: 4.629162607975834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the post-industrial world, data science and analytics have made
digital data privacy a concern of paramount importance. Improper methods of
establishing privacy for accessible datasets can compromise large amounts of
user data even if the adversary has a small amount of preliminary knowledge of
a user. Many researchers have been developing high-level privacy-preserving
mechanisms that also retain the statistical integrity of the data to apply to
machine learning. Recent developments in differential privacy, such as the
Laplace and Privelet mechanisms, drastically decrease the probability that an
adversary can distinguish the elements in a data set and thus extract user
information. In this paper, we develop three privacy-preserving mechanisms with
the discrete M-band wavelet transform that embed noise into data. The first two
methods (LS and LS+) add noise through a Laplace-Sigmoid distribution that
multiplies Laplace-distributed values by the sigmoid function, and the third
method utilizes pseudo-quantum steganography to embed noise into the data. We
then show that our mechanisms successfully retain both differential privacy and
learnability through statistical analysis in various machine learning
environments.
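To make the description above concrete, the sketch below perturbs wavelet-domain coefficients with noise formed by multiplying Laplace-distributed samples by sigmoid values. It is a minimal illustration, not the paper's construction: a 2-band Haar transform from PyWavelets stands in for the discrete M-band wavelet transform, and the exact Laplace-Sigmoid parameterization (feeding the Laplace samples into the sigmoid, the scale value) is an assumption made only for this sketch.

```python
# Hypothetical sketch of a Laplace-Sigmoid style perturbation in a wavelet
# domain. Assumptions: a 2-band Haar transform (PyWavelets) stands in for the
# paper's discrete M-band wavelet transform, and the noise multiplies Laplace
# samples by sigmoid(those same samples); the paper's actual Laplace-Sigmoid
# distribution and calibration may differ.
import numpy as np
import pywt


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def laplace_sigmoid_noise(shape, scale, rng):
    # Laplace-distributed values multiplied by sigmoid values, as the abstract
    # describes; the choice of sigmoid argument and scale is illustrative.
    lap = rng.laplace(loc=0.0, scale=scale, size=shape)
    return lap * sigmoid(lap)


def perturb(signal, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec(signal, "haar")          # forward wavelet transform
    noisy = [c + laplace_sigmoid_noise(c.shape, scale, rng) for c in coeffs]
    return pywt.waverec(noisy, "haar")             # back to the data domain


if __name__ == "__main__":
    x = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 64))
    print(perturb(x)[:5])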
Related papers
- Leveraging Internal Representations of Model for Magnetic Image
Classification [0.13654846342364302]
This paper introduces a potentially groundbreaking paradigm for machine learning model training, specifically designed for scenarios in which only a single magnetic image and its corresponding label image are available.
We harness the capabilities of Deep Learning to generate concise yet informative samples, aiming to overcome data scarcity.
arXiv Detail & Related papers (2024-03-11T15:15:50Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as the scarcity of instruction data and the risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present a novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution (a minimal sampling sketch for this noise family follows the list).
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Summary Statistic Privacy in Data Sharing [23.50797952699759]
We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution.
We propose summary statistic privacy, a metric for quantifying the privacy risk of such a mechanism.
We show that the proposed quantization mechanisms achieve better privacy-distortion tradeoffs than alternative privacy mechanisms.
arXiv Detail & Related papers (2023-03-03T15:29:19Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- DP2-Pub: Differentially Private High-Dimensional Data Publication with Invariant Post Randomization [58.155151571362914]
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget.
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
arXiv Detail & Related papers (2022-08-24T17:52:43Z)
- FedHarmony: Unlearning Scanner Bias with Distributed Data [2.371982686172067]
FedHarmony is a harmonisation framework operating in the federated learning paradigm.
We show that to remove the scanner-specific effects, we only need to share the mean and standard deviation of the learned features.
arXiv Detail & Related papers (2022-05-31T17:19:47Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows adversaries to infer the private data on which those estimates are based.
A common defense adds perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- P3GM: Private High-Dimensional Data Release via Privacy Preserving Phased Generative Model [23.91327154831855]
This paper proposes the privacy-preserving phased generative model (P3GM) for releasing sensitive data.
P3GM employs a two-phase learning process to make it robust against noise and to increase learning efficiency.
Compared with state-of-the-art methods, our generated samples contain less noise and are closer to the original data in terms of data diversity.
arXiv Detail & Related papers (2020-06-22T09:47:54Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
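As noted in the Symmetric alpha-Stable entry above, here is a minimal sketch of perturbing a numeric query answer with symmetric alpha-stable noise. The stability parameter, the scale, and the use of scipy.stats.levy_stable are illustrative assumptions rather than that paper's calibration.

```python
# Minimal sketch: add symmetric alpha-stable noise to a query answer.
# beta=0 makes the stable distribution symmetric; alpha and scale below are
# illustrative choices, not values taken from the SaS paper.
import numpy as np
from scipy.stats import levy_stable


def sas_perturb(values, alpha=1.5, scale=1.0, seed=0):
    values = np.asarray(values, dtype=float)
    noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale,
                            size=values.shape, random_state=seed)
    return values + noise


if __name__ == "__main__":
    print(sas_perturb([10.0, 12.5, 7.0]))
```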