Differentially Private Synthetic Control
- URL: http://arxiv.org/abs/2303.14084v1
- Date: Fri, 24 Mar 2023 15:49:29 GMT
- Title: Differentially Private Synthetic Control
- Authors: Saeyoung Rho, Rachel Cummings, Vishal Misra
- Abstract summary: We provide the first algorithms for differentially private synthetic control with explicit error bounds.
We show that our algorithms produce accurate predictions for the target unit, and that the cost of privacy is small.
- Score: 13.320917259299652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic control is a causal inference tool used to estimate the treatment
effects of an intervention by creating synthetic counterfactual data. This
approach combines measurements from other similar observations (i.e., donor
pool) to predict a counterfactual time series of interest (i.e., target unit)
by analyzing the relationship between the target and the donor pool before the
intervention. As synthetic control tools are increasingly applied to sensitive
or proprietary data, formal privacy protections are often required. In this
work, we provide the first algorithms for differentially private synthetic
control with explicit error bounds. Our approach builds upon tools from
non-private synthetic control and differentially private empirical risk
minimization. We provide upper and lower bounds on the sensitivity of the
synthetic control query and provide explicit error bounds on the accuracy of
our private synthetic control algorithms. We show that our algorithms produce
accurate predictions for the target unit, and that the cost of privacy is
small. Finally, we empirically evaluate the performance of our algorithm, and
show favorable performance in a variety of parameter regimes, as well as
providing guidance to practitioners for hyperparameter tuning.
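The high-level recipe in the abstract — fit donor weights on pre-intervention data, then release the counterfactual prediction under differential privacy — can be sketched in a toy form. This is an illustrative sketch only: it uses a simple ridge-regularized least-squares fit and Laplace output perturbation with an *assumed* sensitivity bound, not the paper's actual algorithms, sensitivity analysis, or error bounds.

```python
import numpy as np

def synthetic_control_weights(donors_pre, target_pre, ridge=1e-6):
    """Least-squares donor weights from pre-intervention data.

    donors_pre: (T0, n) matrix of pre-intervention series for n donor units.
    target_pre: (T0,) pre-intervention series of the target unit.
    A small ridge term keeps the solve numerically stable.
    """
    n = donors_pre.shape[1]
    gram = donors_pre.T @ donors_pre + ridge * np.eye(n)
    return np.linalg.solve(gram, donors_pre.T @ target_pre)

def private_counterfactual(donors_post, weights, sensitivity, epsilon, rng):
    """Output perturbation: release the counterfactual prediction with
    Laplace noise scaled to an assumed bound on the query's sensitivity.
    (The paper derives such bounds; here `sensitivity` is a given input.)
    """
    prediction = donors_post @ weights
    noise = rng.laplace(scale=sensitivity / epsilon, size=prediction.shape)
    return prediction + noise
```

In this sketch, larger `epsilon` (weaker privacy) means less noise, so the "cost of privacy" shows up directly as the Laplace scale `sensitivity / epsilon` added to each predicted time point.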
Related papers
- Incentive-Aware Synthetic Control: Accurate Counterfactual Estimation
via Incentivized Exploration [43.59040957749326]
We shed light on a frequently overlooked but ubiquitous assumption made in synthetic control methods (SCMs): "overlap".
We propose a framework which incentivizes units with different preferences to take interventions they would not normally consider.
We extend our results to the setting of synthetic interventions, where the goal is to produce counterfactual outcomes under all interventions, not just control.
arXiv Detail & Related papers (2023-12-26T19:25:11Z)
- Differentially Private Linear Regression with Linked Data [3.9325957466009203]
Differential privacy, a mathematical notion from computer science, is an increasingly popular tool offering robust privacy guarantees.
Recent work focuses on developing differentially private versions of individual statistical and machine learning tasks.
We present two differentially private algorithms for linear regression with linked data.
arXiv Detail & Related papers (2023-08-01T21:00:19Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation [8.9598796481325]
We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems.
We establish an information-computation gap for private sparse mean estimation.
We also give evidence for privacy-induced information-computation gaps for several other statistics and learning problems.
arXiv Detail & Related papers (2022-11-01T20:03:41Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance, corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
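For context, the classical AboveThreshold algorithm (the sparse vector technique) that ReducedAboveThreshold generalizes can be sketched as follows. This is the textbook variant under the standard sensitivity-1 assumption on each query, not the generalization proposed in that paper.

```python
import numpy as np

def above_threshold(queries, data, threshold, epsilon, rng):
    """Classical AboveThreshold (sparse vector technique) sketch.

    Returns the index of the first query whose noisy answer exceeds a
    noisy threshold, spending epsilon privacy budget in total, assuming
    each query has sensitivity 1.
    """
    # Perturb the threshold once, then each query answer independently.
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, query in enumerate(queries):
        nu = rng.laplace(scale=4.0 / epsilon)
        if query(data) + nu >= noisy_threshold:
            return i
    return None  # no query exceeded the noisy threshold
```

The key design point is that the threshold noise is drawn once and reused across all comparisons, which is what lets the total privacy cost stay at `epsilon` regardless of how many below-threshold queries are examined.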
arXiv Detail & Related papers (2022-06-15T01:43:37Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Because the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.