Private Synthetic Data Generation in Small Memory
- URL: http://arxiv.org/abs/2412.09756v1
- Date: Thu, 12 Dec 2024 23:24:05 GMT
- Title: Private Synthetic Data Generation in Small Memory
- Authors: Rayne Holland, Seyit Camtepe, Chandra Thapa, Jason Xue
- Abstract summary: We propose a lightweight synthetic data generator that ensures differential privacy while being resource-efficient.
$\textsf{PrivHP}$ generates private synthetic data that preserves the input stream's distribution.
It can process a dataset of size $n$ in $\mathcal{O}((w+k)\log (\varepsilon n))$ space, $\mathcal{O}(\log (\varepsilon n))$ update time, and outputs a private synthetic data generator in $\mathcal{O}(k\log k\log (\varepsilon n))$ time.
- Score: 8.913413757749066
- Abstract: Protecting sensitive information on data streams is a critical challenge for modern systems. Current approaches to privacy in data streams follow two strategies. The first transforms the stream into a private sequence, enabling the use of non-private analyses but incurring high memory costs. The second uses compact data structures to create private summaries but restricts flexibility to predefined queries. To address these limitations, we propose $\textsf{PrivHP}$, a lightweight synthetic data generator that ensures differential privacy while being resource-efficient. $\textsf{PrivHP}$ generates private synthetic data that preserves the input stream's distribution, allowing flexible downstream analyses without additional privacy costs. It leverages a hierarchical decomposition of the domain, pruning low-frequency subdomains while preserving high-frequency ones in a privacy-preserving manner. To achieve memory efficiency in streaming contexts, $\textsf{PrivHP}$ uses private sketches to estimate subdomain frequencies without accessing the full dataset. $\textsf{PrivHP}$ is parameterized by a privacy budget $\varepsilon$, a pruning parameter $k$ and the sketch width $w$. It can process a dataset of size $n$ in $\mathcal{O}((w+k)\log (\varepsilon n))$ space, $\mathcal{O}(\log (\varepsilon n))$ update time, and outputs a private synthetic data generator in $\mathcal{O}(k\log k\log (\varepsilon n))$ time. Prior methods require $\Omega(n)$ space and construction time. Our evaluation uses the expected 1-Wasserstein distance between the sampler and the empirical distribution. Compared to state-of-the-art methods, we demonstrate that the additional cost in utility is inversely proportional to $k$ and $w$. This represents the first meaningful trade-off between performance and utility for private synthetic data generation.
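The abstract names two ingredients: a hierarchical (dyadic) decomposition of the domain, and a private sketch that estimates subdomain frequencies without storing the stream. The following is a minimal, hypothetical sketch of that idea on a one-dimensional integer domain; it is not the authors' implementation (the top-$k$ pruning step and the formal privacy accounting are omitted, and all names are invented for illustration).

```python
import random

# Hypothetical sketch of a PrivHP-style generator on the domain [0, 2**depth).
# One count-min-style row per level of the dyadic tree holds counts; at sample
# time we walk the tree, querying noisy (Laplace-perturbed) counts, so the
# full dataset is never materialized.

class PrivateHierarchySketch:
    def __init__(self, depth, width, eps):
        self.depth, self.width, self.eps = depth, width, eps
        # one hashed counter row per level of the dyadic tree
        self.rows = [[0] * width for _ in range(depth + 1)]

    def update(self, x):
        # register x in every dyadic ancestor: O(depth) update time
        for level in range(self.depth + 1):
            node = x >> (self.depth - level)
            self.rows[level][hash((level, node)) % self.width] += 1

    def _noisy(self, level, node):
        c = self.rows[level][hash((level, node)) % self.width]
        # Laplace noise (difference of two exponentials), budget split by level
        b = (self.depth + 1) / self.eps
        return c + random.expovariate(1 / b) - random.expovariate(1 / b)

    def sample(self):
        # walk down the tree, choosing a child proportionally to its
        # privatized weight (negative noisy counts are clipped to ~0)
        node = 0
        for level in range(1, self.depth + 1):
            kids = [2 * node, 2 * node + 1]
            w = [max(self._noisy(level, c), 0.0) + 1e-9 for c in kids]
            node = random.choices(kids, weights=w)[0]
        return node
```

The point of the hierarchy is that a sample is produced in a number of noisy-count queries proportional to the tree depth, which matches the logarithmic dependence in the stated bounds.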
Related papers
- Optimized Tradeoffs for Private Prediction with Majority Ensembling [59.99331405291337]
We introduce the Data-dependent Randomized Response Majority (DaRRM) algorithm.
DaRRM is parameterized by a data-dependent noise function $\gamma$, and enables efficient utility optimization over the class of all private algorithms.
We show that DaRRM provably enjoys a privacy gain of a factor of 2 over common baselines, with fixed utility.
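As a loose illustration of the idea (not the DaRRM algorithm itself, whose $\gamma$ is data-dependent and whose privacy analysis is not reproduced here), a randomized-response majority can be sketched with a constant noise parameter:

```python
import random

def rr_majority(votes, gamma):
    """Toy randomized-response majority: with probability gamma output the
    ensemble's true majority bit, otherwise output a uniformly random bit.
    A larger gamma means less noise (better utility, weaker privacy);
    DaRRM's gamma is a data-dependent function rather than a constant."""
    majority = int(sum(votes) >= len(votes) / 2)
    if random.random() < gamma:
        return majority
    return random.randint(0, 1)
```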
arXiv Detail & Related papers (2024-11-27T00:48:48Z) - Fast John Ellipsoid Computation with Differential Privacy Optimization [34.437362489150246]
We present the first differentially private algorithm for fast John ellipsoid computation.
Our method integrates noise perturbation with sketching and leverage score sampling to achieve both efficiency and privacy.
arXiv Detail & Related papers (2024-08-12T03:47:55Z) - Online Differentially Private Synthetic Data Generation [10.177542186664503]
We develop an online algorithm that generates a differentially private synthetic dataset at each time $t$.
This algorithm achieves a near-optimal accuracy bound of $O(\log(t)\,t^{-1/d})$ for $d\geq 2$ and $O(\log^{4.5}(t)\,t^{-1})$ for $d=1$ in the 1-Wasserstein distance.
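The 1-Wasserstein distance is also the evaluation metric in the main paper above. For two equal-size empirical samples in one dimension it reduces to the average coordinate-wise gap between the sorted samples, which a small helper makes concrete:

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    sort both and average the coordinate-wise absolute differences.
    (The optimal transport plan in 1-D matches sorted order.)"""
    assert len(xs) == len(ys), "equal-size samples assumed in this sketch"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)
```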
arXiv Detail & Related papers (2024-02-12T19:21:14Z) - Scaling Up Differentially Private LASSO Regularized Logistic Regression via Faster Frank-Wolfe Iterations [51.14495595270775]
We adapt the Frank-Wolfe algorithm for $L_1$ penalized linear regression to be aware of sparse inputs and to use them effectively.
Our results demonstrate that this procedure can reduce runtime by a factor of up to $2{,}200\times$, depending on the value of the privacy parameter $\epsilon$ and the sparsity of the dataset.
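The reason Frank-Wolfe suits sparse $L_1$-constrained problems is that its linear minimization oracle over the $L_1$ ball returns a signed coordinate vector, so each iteration touches a single coordinate. The sketch below shows that structure for plain least squares over the $L_1$ ball; the differential-privacy noise and the paper's sparse-input optimizations are omitted.

```python
def frank_wolfe_l1(A, b, radius, steps):
    """Frank-Wolfe for min ||Ax - b||^2 over the L1 ball of given radius.
    The LMO over the L1 ball is +/- radius on the coordinate of largest
    gradient magnitude, so each step is a rank-one (single-coordinate)
    update. DP noise is omitted in this sketch."""
    n, d = len(A), len(A[0])
    x = [0.0] * d
    for t in range(steps):
        resid = [sum(A[i][j] * x[j] for j in range(d)) - b[i]
                 for i in range(n)]
        grad = [2 * sum(A[i][j] * resid[i] for i in range(n))
                for j in range(d)]
        j = max(range(d), key=lambda j: abs(grad[j]))
        if abs(grad[j]) < 1e-12:          # already (numerically) optimal
            break
        s_j = -radius if grad[j] > 0 else radius   # LMO vertex
        lr = 2 / (t + 2)                  # standard FW step size
        x = [(1 - lr) * xk for xk in x]
        x[j] += lr * s_j
    return x
```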
arXiv Detail & Related papers (2023-10-30T19:52:43Z) - Differentially Private Clustering in Data Streams [65.78882209673885]
We present a differentially private streaming clustering framework which only requires an offline DP coreset or clustering algorithm as a blackbox.
Our framework is also differentially private under the continual release setting, i.e., the union of outputs of our algorithms at every timestamp is always differentially private.
arXiv Detail & Related papers (2023-07-14T16:11:22Z) - Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, challenges arise when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z) - Frequency Estimation Under Multiparty Differential Privacy: One-shot and Streaming [10.952006057356714]
We study the fundamental problem of frequency estimation under both privacy and communication constraints, where the data is distributed among $k$ parties.
We adopt the model of multiparty differential privacy (MDP), which is more general than local differential privacy (LDP) and (centralized) differential privacy.
Our protocols achieve optimality (up to logarithmic factors) permissible by the more stringent of the two constraints.
arXiv Detail & Related papers (2021-04-05T08:15:20Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
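The shuffle model the summary refers to can be sketched in a few lines: each user applies local randomized response, and a trusted shuffler permutes the reports before release. This is a generic illustration of the model, not the paper's analysis; the amplification results quantify how much smaller the effective central $\epsilon$ is than the local one.

```python
import math
import random

def local_rr(bit, eps):
    """Local randomized response: keep the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it."""
    p_keep = math.exp(eps) / (1 + math.exp(eps))
    return bit if random.random() < p_keep else 1 - bit

def shuffle_and_release(bits, eps):
    """Each user randomizes locally, then a trusted shuffler permutes the
    reports, destroying the link between user and message. Amplification-
    by-shuffling results show the shuffled output satisfies a much smaller
    central epsilon than the local eps used here."""
    reports = [local_rr(b, eps) for b in bits]
    random.shuffle(reports)
    return reports
```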
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - BUDS: Balancing Utility and Differential Privacy by Shuffling [3.618133010429131]
Balancing utility and differential privacy by shuffling, or $\textit{BUDS}$, is an approach towards crowd-sourced, statistical databases.
A new algorithm is proposed using one-hot encoding and iterative shuffling with loss estimation and risk minimization techniques.
In empirical tests of balanced utility and privacy, BUDS produces $\epsilon = 0.02$, which is a very promising result.
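The two mechanical pieces the summary names, one-hot encoding and iterative shuffling, can be illustrated as follows. This is a loose, hypothetical sketch assuming column-wise shuffling of encoded attributes; it does not reproduce BUDS's loss estimation or its privacy accounting.

```python
import random

def one_hot(value, categories):
    """Encode a categorical value as a one-hot vector over `categories`."""
    return [1 if c == value else 0 for c in categories]

def iterative_shuffle(rows, rounds=3):
    """Shuffle each attribute column independently across rows, for several
    rounds, so joint records can no longer be linked to individuals while
    per-column (marginal) counts are preserved. Illustrative only."""
    cols = [list(c) for c in zip(*rows)]
    for _ in range(rounds):
        for col in cols:
            random.shuffle(col)
    return [list(r) for r in zip(*cols)]
```

Note the trade-off this makes explicit: marginals survive shuffling exactly, but cross-attribute correlations are destroyed, which is where the utility cost comes from.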
arXiv Detail & Related papers (2020-06-07T11:39:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.