Privacy-Preserving Adversarial Network (PPAN) for Continuous
non-Gaussian Attributes
- URL: http://arxiv.org/abs/2003.05362v1
- Date: Wed, 11 Mar 2020 15:29:35 GMT
- Title: Privacy-Preserving Adversarial Network (PPAN) for Continuous
non-Gaussian Attributes
- Authors: Mohammadhadi Shateri, Fabrice Labeau
- Abstract summary: A privacy-preserving adversarial network (PPAN) was recently proposed to address the issue of privacy in data sharing.
In this study, we evaluate the PPAN model for continuous non-Gaussian data where lower and upper bounds of the privacy-preserving problem are used.
- Score: 6.657723602564176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A privacy-preserving adversarial network (PPAN) was recently proposed as an
information-theoretical framework to address the issue of privacy in data
sharing. The main idea of this model was using mutual information as the
privacy measure and adversarial training of two deep neural networks, one as
the mechanism and another as the adversary. The performance of the PPAN model
for the discrete synthetic data, MNIST handwritten digits, and continuous
Gaussian data was evaluated compared to the analytically optimal trade-off. In
this study, we evaluate the PPAN model for continuous non-Gaussian data where
lower and upper bounds of the privacy-preserving problem are used. These bounds
include the Kraskov (KSG) estimators of entropy and mutual information, which
are based on k-th nearest-neighbor distances. In addition to the synthetic data sets, a
practical case for hiding the actual electricity consumption from smart meter
readings is examined. The results show that for continuous non-Gaussian data,
the PPAN model performs within the determined optimal ranges and close to the
lower bound.
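The KSG estimator referenced in the abstract can be sketched as follows. This is a minimal NumPy-only implementation of the first KSG algorithm (digamma implemented inline so the snippet is self-contained); it is an illustration of the estimator family the paper uses for its bounds, not the authors' code.

```python
import numpy as np

def _digamma(x):
    """Scalar digamma via recurrence plus an asymptotic series (adequate here)."""
    r, x = 0.0, float(x)
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + np.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def ksg_mutual_information(x, y, k=3):
    """KSG estimator (algorithm 1) of I(X;Y) from samples:
    I ~ psi(k) + psi(n) - <psi(n_x + 1) + psi(n_y + 1)>,
    with neighbor counts taken inside the k-th nearest-neighbor
    distance of each point in the joint (max-norm) space."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = len(x)
    # pairwise Chebyshev (max-norm) distances in each marginal space
    dx = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)
    dy = np.max(np.abs(y[:, None, :] - y[None, :, :]), axis=-1)
    dz = np.maximum(dx, dy)                # joint-space distance
    np.fill_diagonal(dz, np.inf)
    eps = np.sort(dz, axis=1)[:, k - 1]    # distance to each point's k-th neighbor
    np.fill_diagonal(dx, np.inf)
    np.fill_diagonal(dy, np.inf)
    # marginal neighbors strictly inside eps_i
    nx = np.sum(dx < eps[:, None], axis=1)
    ny = np.sum(dy < eps[:, None], axis=1)
    avg = np.mean([_digamma(a + 1) + _digamma(b + 1) for a, b in zip(nx, ny)])
    return _digamma(k) + _digamma(n) - avg
```

For correlated Gaussian inputs the estimate approaches the closed-form mutual information, which is how such an estimator can bracket the privacy-utility trade-off for non-Gaussian data where no closed form exists.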
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Differentially Private Graph Diffusion with Applications in Personalized PageRanks [15.529891375174165]
This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees by using noisy diffusion iterates.
The algorithm injects Laplace noise per diffusion and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes.
Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), which to our best knowledge, is the first effort that analyzes PABI with Laplace noise.
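The per-iterate Laplace noise and degree-based thresholding described above can be sketched as follows. Function and parameter names are assumptions for illustration, not the paper's exact algorithm: a personalized-PageRank-style diffusion where every iterate is perturbed with Laplace noise and low-degree nodes, whose contributions have high sensitivity, are masked out.

```python
import numpy as np

def noisy_pagerank_diffusion(adj, seed_vec, alpha=0.15, num_iters=10,
                             noise_scale=0.05, deg_threshold=3, rng_seed=0):
    """Illustrative noisy diffusion: row-normalize the adjacency matrix,
    zero out low-degree nodes (degree-based thresholding), then run
    power iterations with Laplace noise injected at every iterate."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    keep = deg >= deg_threshold                    # degree-based thresholding
    safe_deg = np.where(deg > 0, deg, 1.0)
    P = (adj / safe_deg[:, None]) * keep[:, None]  # masked transition matrix
    rng = np.random.default_rng(rng_seed)
    p = seed_vec.astype(float).copy()
    for _ in range(num_iters):
        p = alpha * seed_vec + (1 - alpha) * (p @ P)
        p += rng.laplace(scale=noise_scale, size=n)  # per-iterate Laplace noise
    return p
```

The thresholding step is what bounds sensitivity: a low-degree node spreads its entire mass over few neighbors, so masking it keeps any single edge's influence on each iterate small.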
arXiv Detail & Related papers (2024-06-22T15:32:53Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- On the Inherent Privacy Properties of Discrete Denoising Diffusion Models [17.773335593043004]
We present the pioneering theoretical exploration of the privacy preservation inherent in discrete diffusion models.
Our framework elucidates the potential privacy leakage for each data point in a given training dataset.
Our bounds also show that training with $s$-sized data points leads to a surge in privacy leakage.
arXiv Detail & Related papers (2023-10-24T05:07:31Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges.
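The building block behind this mechanism is randomized response. The sketch below shows the classic uniform version (the paper's contribution is a *heterogeneous* variant with feature- and edge-specific probabilities, which this does not reproduce): each bit is kept with probability e^eps / (e^eps + 1) and flipped otherwise, giving eps-local differential privacy per bit, and aggregate statistics are debiased afterwards.

```python
import numpy as np

def randomized_response(bits, eps, seed=None):
    """Uniform randomized response on binary features: keep each bit
    with probability e^eps / (e^eps + 1), flip it otherwise."""
    bits = np.asarray(bits, dtype=int)
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    rng = np.random.default_rng(seed)
    flip = rng.random(bits.shape) >= p_keep
    return np.where(flip, 1 - bits, bits)

def debiased_mean(noisy_bits, eps):
    """Unbiased estimate of the true mean from randomized responses:
    E[noisy mean] = (2p - 1) * m + (1 - p), solved for m."""
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    return (np.mean(noisy_bits) - (1.0 - p_keep)) / (2.0 * p_keep - 1.0)
```

The paper's heterogeneous probabilities tighten this scheme: features and edges that leak more are randomized more aggressively, which is where the better error bounds come from.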
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
- Network Generation with Differential Privacy [4.297070083645049]
We consider the problem of generating private synthetic versions of real-world graphs containing private information.
We propose a generative model that can reproduce the properties of real-world networks while maintaining edge-differential privacy.
arXiv Detail & Related papers (2021-11-17T13:07:09Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied on sensitive or private training examples, such as medical or financial records, it is still probable to divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
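The loss-perturbation idea can be sketched as below. This is a simplified, hedged illustration (parameter names are assumptions, and it uses a single Laplace release rather than the paper's Rényi-DP accounting): per-example losses are clipped so that changing one record moves the averaged loss by at most 2C/n, and noise calibrated to that sensitivity is added before the value is used.

```python
import numpy as np

def privatize_loss(per_example_losses, clip_c=1.0, eps=1.0, rng_seed=0):
    """Clip each per-example loss to [-C, C], bounding the sensitivity
    of the averaged loss to 2C/n under a one-record change, then
    release the average with Laplace noise scaled to sensitivity/eps."""
    losses = np.clip(np.asarray(per_example_losses, dtype=float),
                     -clip_c, clip_c)
    n = len(losses)
    sensitivity = 2.0 * clip_c / n
    rng = np.random.default_rng(rng_seed)
    return losses.mean() + rng.laplace(scale=sensitivity / eps)
```

Clipping is what makes the noise scale finite: without it, a single outlier record (e.g. a rare medical case with an extreme loss) could shift the released value arbitrarily.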
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- PIVEN: A Deep Neural Network for Prediction Intervals with Specific Value Prediction [14.635820704895034]
We present PIVEN, a deep neural network for producing both a PI and a value prediction.
Our approach makes no assumptions regarding data distribution within the PI, making its value prediction more effective for various real-world problems.
arXiv Detail & Related papers (2020-06-09T09:29:58Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.