Differentially Private Sliced Wasserstein Distance
- URL: http://arxiv.org/abs/2107.01848v1
- Date: Mon, 5 Jul 2021 08:06:02 GMT
- Title: Differentially Private Sliced Wasserstein Distance
- Authors: Alain Rakotomamonjy (DocApp - LITIS), Liva Ralaivola
- Abstract summary: We take the perspective of computing the divergences between distributions under the Differential Privacy (DP) framework.
Instead of resorting to the popular gradient-based sanitization method for DP, we tackle the problem at its roots by focusing on the Sliced Wasserstein Distance.
- Score: 5.330240017302619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing machine learning methods that are privacy preserving is today a
central topic of research, with huge practical impacts. Among the numerous ways
to address privacy-preserving learning, we here take the perspective of
computing the divergences between distributions under the Differential Privacy
(DP) framework -- being able to compute divergences between distributions is
pivotal for many machine learning problems, such as learning generative models
or domain adaptation problems. Instead of resorting to the popular
gradient-based sanitization method for DP, we tackle the problem at its roots
by focusing on the Sliced Wasserstein Distance and seamlessly making it
differentially private. Our main contribution is as follows: we analyze the
property of adding a Gaussian perturbation to the intrinsic randomized
mechanism of the Sliced Wasserstein Distance, and we establish the
sensitivity of the resulting differentially private mechanism. One of our
important findings is that this DP mechanism transforms the Sliced Wasserstein
distance into another distance, that we call the Smoothed Sliced Wasserstein
Distance. This new differentially private distribution distance can be plugged
into generative models and domain adaptation algorithms in a transparent way,
and we empirically show that it yields highly competitive performance compared
with gradient-based DP approaches from the literature, with almost no loss in
accuracy for the domain adaptation problems that we consider.
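As a rough illustration of the mechanism described in the abstract, the sketch below perturbs the random one-dimensional projections used by the Sliced Wasserstein Distance with Gaussian noise. It is a minimal Monte Carlo sketch with illustrative names and parameters; the calibration of `sigma` to the mechanism's sensitivity, which is the paper's central result, is omitted here.

```python
import numpy as np

def smoothed_sliced_wasserstein(X, Y, n_projections=50, sigma=1.0, seed=None):
    """Monte Carlo sketch of a Gaussian-smoothed sliced Wasserstein distance.

    X, Y: (n, d) sample matrices (equal sample sizes assumed so the 1-D
    optimal coupling reduces to matching order statistics). `sigma` is the
    standard deviation of the Gaussian perturbation added to the projections;
    in the paper the noise scale is calibrated to the mechanism's sensitivity
    to obtain the DP guarantee, a step omitted in this sketch.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    assert Y.shape == (n, d), "equal sample sizes assumed in this sketch"
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)               # random direction on the sphere
        x_proj = X @ theta + sigma * rng.standard_normal(n)  # perturbed projections
        y_proj = Y @ theta + sigma * rng.standard_normal(n)
        # closed-form 1-D squared Wasserstein-2: match sorted samples
        total += np.mean((np.sort(x_proj) - np.sort(y_proj)) ** 2)
    return total / n_projections
```

With `sigma = 0` the function reduces to a plain Monte Carlo estimate of the squared Sliced Wasserstein Distance; the Gaussian term is what yields the smoothed, differentially private variant discussed above.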
Related papers
- Differentially Private Gradient Flow based on the Sliced Wasserstein Distance [59.1056830438845]
We introduce a novel differentially private generative modeling approach based on a gradient flow in the space of probability measures.
Experiments show that our proposed model can generate higher-fidelity data at a low privacy budget.
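A minimal sketch of what one noisy update of such a particle gradient flow might look like, assuming equal particle counts and leaving the privacy calibration of the noise scale to the cited paper (all names illustrative):

```python
import numpy as np

def swd_particle_step(X, Y, step=0.1, noise_std=0.01, n_projections=50, seed=None):
    """One noisy Euler step of a sliced-Wasserstein gradient flow (sketch).

    Moves generated particles X toward data particles Y along a Monte Carlo
    (sub)gradient of SW_2^2; the Gaussian noise stands in for the mechanism
    that gives the privacy guarantee, whose calibration is not reproduced.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    grad = np.zeros_like(X, dtype=float)
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        xp, yp = X @ theta, Y @ theta
        ix, iy = np.argsort(xp), np.argsort(yp)
        diff = np.empty(n)
        diff[ix] = xp[ix] - yp[iy]                   # optimal 1-D matching
        grad += (2.0 / n) * np.outer(diff, theta)    # chain rule back to X
    grad /= n_projections
    return X - step * grad + noise_std * rng.standard_normal(X.shape)
```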
arXiv Detail & Related papers (2023-12-13T15:47:30Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
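The quantity driving that bound is easy to track empirically; a toy sketch (hypothetical helper, plain SGD on least squares) that records it during training, purely to illustrate the statement above:

```python
import numpy as np

def track_expected_sq_grad_norm(X, y, w, steps=100, lr=0.1, seed=None):
    """Toy SGD on least squares that records ||grad||^2 at every step.

    Averaging these values estimates the expected squared gradient norm,
    the quantity the entry above identifies as driving the KL privacy
    bound. Illustrative only; the paper's bound is not computed here.
    """
    rng = np.random.default_rng(seed)
    sq_norms = []
    for _ in range(steps):
        i = rng.integers(len(y))                     # sample one record
        g = (X[i] @ w - y[i]) * X[i]                 # per-example gradient
        sq_norms.append(float(g @ g))
        w = w - lr * g
    return w, float(np.mean(sq_norms))               # weights, estimate of E[||g||^2]
```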
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- On Mitigating the Utility-Loss in Differentially Private Learning: A new Perspective by a Geometrically Inspired Kernel Approach [2.4253452809863116]
This paper introduces a geometrically inspired kernel-based approach to mitigate the accuracy-loss issue in classification.
A representation of the affine hull of the given data points is learned in a Reproducing Kernel Hilbert Space (RKHS).
The effectiveness of the approach is demonstrated through experiments on the MNIST dataset, the Freiburg groceries dataset, and a real biomedical dataset.
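One standard way to realize an affine-hull representation in an RKHS is an equality-constrained least-squares projection in kernel space; the sketch below shows only this geometric building block, not the paper's full method, and the helper name is made up:

```python
import numpy as np

def affine_hull_coefficients(K, k_x):
    """Project a query point onto the affine hull of training points in an RKHS.

    K:   (n, n) kernel matrix of the training points.
    k_x: (n,) kernel evaluations between the query point and training points.
    Returns coefficients a with sum(a) == 1 minimizing the RKHS distance
    ||phi(x) - sum_i a_i phi(x_i)||, via the KKT system of the constrained
    least-squares problem.
    """
    n = K.shape[0]
    Kr = K + 1e-10 * np.eye(n)                       # tiny ridge for stability
    kkt = np.block([[Kr, np.ones((n, 1))],
                    [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([k_x, [1.0]])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]                                   # drop the Lagrange multiplier
```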
arXiv Detail & Related papers (2023-04-03T18:52:01Z)
- Local Graph-homomorphic Processing for Privatized Distributed Systems [57.14673504239551]
We show that the added noise does not affect the performance of the learned model.
This is a significant improvement over previous work on differential privacy for distributed algorithms.
arXiv Detail & Related papers (2022-10-26T10:00:14Z)
- Gromov-Wasserstein Discrepancy with Local Differential Privacy for Distributed Structural Graphs [7.4398547397969494]
We propose a privacy-preserving framework to analyze the GW discrepancy of node embedding learned locally from graph neural networks.
Our experiments show that, with strong privacy protections guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also presents a noised structural metric under GW distance.
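The cited mechanism is not reproduced here; for concreteness, the sketch below shows a generic $\varepsilon$-LDP baseline that perturbs each node embedding locally with the Laplace mechanism before it is shared:

```python
import numpy as np

def ldp_perturb_embedding(z, epsilon, bound=1.0, seed=None):
    """Release a node embedding under epsilon-LDP via the Laplace mechanism.

    Each client clips its embedding to an L1 ball of radius `bound`, so any
    two possible inputs differ by at most 2 * bound in L1 norm (the local
    sensitivity), then adds Laplace noise before sharing. A generic LDP
    baseline, not necessarily the cited paper's exact mechanism.
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    norm1 = np.abs(z).sum()
    if norm1 > bound:
        z = z * (bound / norm1)                      # clip to the L1 ball
    scale = 2.0 * bound / epsilon                    # sensitivity / epsilon
    return z + rng.laplace(0.0, scale, size=z.shape)
```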
arXiv Detail & Related papers (2022-02-01T23:32:33Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary compositions of differentiable functions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in the data-processing setting.
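Sensitivity here is the worst-case change in a function's output across neighboring datasets; the cited system derives it by propagating bounds through the computation graph, whereas the toy check below (helper name hypothetical) merely brute-forces the definition on one concrete dataset:

```python
import numpy as np

def empirical_sensitivity(f, data):
    """Empirical (lower-bound) sensitivity of f under remove-one neighbors.

    True DP sensitivity is a supremum over all neighboring datasets; this
    brute-force scan over one dataset's remove-one neighbors only
    illustrates the definition, it does not certify a bound.
    """
    base = f(data)
    return max(
        float(np.abs(f(np.delete(data, i, axis=0)) - base))
        for i in range(len(data))
    )

# e.g. sensitivity of the mean of values clipped to [0, 1]:
# empirical_sensitivity(lambda d: d.mean(), np.clip(data, 0.0, 1.0))
```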
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- Distributional Sliced Embedding Discrepancy for Incomparable Distributions [22.615156512223766]
Gromov-Wasserstein (GW) distance is a key tool for manifold learning and cross-domain learning.
We propose a novel approach for comparing two incomparable distributions that hinges on the ideas of distributional slicing, embeddings, and computing the closed-form Wasserstein distance between the sliced distributions.
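A stripped-down version of the slicing idea, with a separate random projection per space and a quantile-based 1-D Wasserstein distance in place of the paper's learned distributional slices (all names illustrative):

```python
import numpy as np

def sliced_embedding_discrepancy(X, Y, n_slices=50, n_quantiles=101, seed=None):
    """Compare samples living in different dimensions via 1-D slices (sketch).

    Each distribution gets its own random direction to the line, and the
    closed-form 1-D Wasserstein-2 distance is approximated on a quantile
    grid, so the sample sizes need not match. The cited paper additionally
    optimizes over distributions of slices and embeddings; this sketch
    replaces that with plain averaging over random slices.
    """
    rng = np.random.default_rng(seed)
    q = np.linspace(0.0, 1.0, n_quantiles)
    total = 0.0
    for _ in range(n_slices):
        u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
        v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
        # quantile functions of the two sliced (1-D) empirical distributions
        total += np.mean((np.quantile(X @ u, q) - np.quantile(Y @ v, q)) ** 2)
    return total / n_slices
```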
arXiv Detail & Related papers (2021-06-04T15:11:30Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little computational overhead.
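A common surrogate for the inner maximization in such distributionally robust training is gradient ascent on the input; a toy sketch for a linear logistic model, not the paper's exact algorithm:

```python
import numpy as np

def adversarial_input(x, y, w, steps=5, eta=0.1):
    """Inner maximization of a distributionally robust objective (sketch).

    Ascends the logistic loss log(1 + exp(-y * w.x)) in input space for a
    linear model w with label y in {-1, +1}, the usual single-sample
    surrogate for the worst case within a ball around the data point.
    """
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-y * (w @ x_adv)))   # P(correct label)
        grad_x = -(1.0 - p) * y * w                  # d(loss)/dx
        x_adv = x_adv + eta * grad_x                 # ascend the loss
    return x_adv
```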
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation [12.880889651679094]
We study how to learn variational autoencoders with a variety of divergences under differential privacy constraints.
We propose term-wise DP-SGD that crafts randomized gradients in two different ways tailored to the compositions of the loss terms.
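The term-wise idea can be pictured as clipping and noising each loss term's gradient separately before aggregation; a hedged sketch with illustrative parameter names (see the cited paper for the exact crafting and calibration):

```python
import numpy as np

def termwise_dp_gradient(term_grads, clip=1.0, sigma=1.0, seed=None):
    """Term-wise DP-SGD sketch: sanitize each loss term's gradient separately.

    term_grads: list of gradient arrays, one per loss term (e.g. the
    reconstruction and KL terms of a VAE). Each term is clipped to norm
    `clip` and perturbed with Gaussian noise before summing, so a term with
    small gradients is not drowned out by one global clipping bound.
    """
    rng = np.random.default_rng(seed)
    total = np.zeros_like(term_grads[0], dtype=float)
    for g in term_grads:
        norm = np.linalg.norm(g)
        g_clipped = g * min(1.0, clip / (norm + 1e-12))   # per-term clipping
        total += g_clipped + sigma * clip * rng.standard_normal(g.shape)
    return total
```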
arXiv Detail & Related papers (2020-06-19T16:12:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.