PrivATE: Differentially Private Confidence Intervals for Average Treatment Effects
- URL: http://arxiv.org/abs/2505.21641v1
- Date: Tue, 27 May 2025 18:13:11 GMT
- Title: PrivATE: Differentially Private Confidence Intervals for Average Treatment Effects
- Authors: Maresa Schröder, Justin Hartenstein, Stefan Feuerriegel
- Abstract summary: We present PrivATE, a machine learning framework for computing confidence intervals for the average treatment effect (ATE). Specifically, we focus on deriving valid privacy-preserving CIs for the ATE from observational data. Our framework is model agnostic, doubly robust, and ensures valid CIs.
- Score: 20.57872238271025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The average treatment effect (ATE) is widely used to evaluate the effectiveness of drugs and other medical interventions. In safety-critical applications like medicine, reliable inferences about the ATE typically require valid uncertainty quantification, such as through confidence intervals (CIs). However, estimating treatment effects in these settings often involves sensitive data that must be kept private. In this work, we present PrivATE, a novel machine learning framework for computing CIs for the ATE under differential privacy. Specifically, we focus on deriving valid privacy-preserving CIs for the ATE from observational data. Our PrivATE framework consists of three steps: (i) estimating a differentially private ATE through output perturbation; (ii) estimating the differentially private variance through a truncated output perturbation mechanism; and (iii) constructing the CIs while accounting for the uncertainty from both the estimation and privatization steps. Our PrivATE framework is model agnostic, doubly robust, and ensures valid CIs. We demonstrate the effectiveness of our framework using synthetic and real-world medical datasets. To the best of our knowledge, we are the first to derive a general, doubly robust framework for valid CIs of the ATE under ($\varepsilon$, $\delta$)-differential privacy.
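To make the three steps concrete, here is a minimal, illustrative Python sketch under strong simplifying assumptions: a plug-in AIPW (doubly robust) estimator with clipped pseudo-outcomes, classic Gaussian-mechanism calibration, an even privacy-budget split, and a normal-approximation CI. The function names, clipping bound, and sensitivity arguments are our assumptions, not the paper's reference implementation.

```python
import numpy as np
from scipy import stats

def gaussian_noise_scale(sensitivity, epsilon, delta):
    """Classic Gaussian-mechanism calibration for (epsilon, delta)-DP
    (valid for epsilon < 1; tighter analytic calibrations exist)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def private_ate_ci(y, t, mu0, mu1, e, epsilon, delta, clip=1.0, alpha=0.05):
    """Sketch of a PrivATE-style CI from observational data.

    y, t : outcomes and binary treatment indicators
    mu0, mu1 : outcome-model predictions under control / treatment
    e : propensity scores; clip : assumed bound on per-record influence
    """
    n = len(y)
    # Doubly robust (AIPW) pseudo-outcomes, clipped to bound sensitivity.
    phi = (mu1 - mu0
           + t * (y - mu1) / e
           - (1 - t) * (y - mu0) / (1 - e))
    phi = np.clip(phi, -clip, clip)

    # Split the privacy budget between the ATE and the variance release.
    eps_a, eps_v, del_a, del_v = epsilon / 2, epsilon / 2, delta / 2, delta / 2

    # (i) Output perturbation of the ATE: changing one record moves the
    #     clipped mean by at most 2 * clip / n.
    s_ate = gaussian_noise_scale(2 * clip / n, eps_a, del_a)
    ate_dp = phi.mean() + np.random.normal(0.0, s_ate)

    # (ii) Truncated output perturbation of the variance: add noise,
    #      then truncate at zero so the CI width stays real.
    s_var = gaussian_noise_scale((2 * clip) ** 2 / n, eps_v, del_v)
    var_dp = max(phi.var(ddof=1) + np.random.normal(0.0, s_var), 0.0)

    # (iii) CI that accounts for sampling *and* privatization noise.
    z = stats.norm.ppf(1.0 - alpha / 2)
    half = z * np.sqrt(var_dp / n + s_ate ** 2)
    return ate_dp - half, ate_dp + half
```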
Related papers
- Model Agnostic Differentially Private Causal Inference [16.50501378936487]
Estimating causal effects from observational data is essential in medicine, economics, and the social sciences. We propose a general, model-agnostic framework for differentially private estimation of average treatment effects.
arXiv Detail & Related papers (2025-05-26T07:00:37Z)
- Differentially Private Learners for Heterogeneous Treatment Effects [23.05024957067819]
We present DP-CATE, a novel framework for CATE estimation that is Neyman-orthogonal and differentially private. We demonstrate DP-CATE across various experiments using synthetic and real-world datasets.
arXiv Detail & Related papers (2025-03-05T13:24:58Z)
- Federated Experiment Design under Distributed Differential Privacy [31.06808163362162]
We focus on rigorously protecting users' privacy while minimizing the trust placed in service providers.
Although a vital component of modern A/B testing, private distributed experimentation has not previously been studied.
We show how these mechanisms can be scaled up to handle the very large number of participants commonly found in practice.
arXiv Detail & Related papers (2023-11-07T22:38:56Z)
- Differentially Private Multi-Site Treatment Effect Estimation [28.13660104055298]
Most patient data remains siloed in separate hospitals, preventing the design of data-driven healthcare AI systems.
We look at estimating the average treatment effect (ATE), an important task in causal inference for healthcare applications.
We address this through a class of per-site estimation algorithms that report the ATE estimate and its variance as a quality measure.
arXiv Detail & Related papers (2023-10-10T01:21:01Z)
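As a rough illustration of how per-site (ATE, variance) releases can be combined, the inverse-variance weighting below is the standard meta-analysis rule; it is our stand-in, not necessarily the aggregation used in the paper, and the function name and inputs are hypothetical.

```python
import numpy as np

def aggregate_site_ates(ates, variances):
    """Pool per-site private ATE releases by inverse-variance weighting.
    Each site is assumed to report its DP ATE estimate together with a
    variance that already includes the privatization noise."""
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled_ate = np.sum(w * np.asarray(ates, dtype=float)) / np.sum(w)
    pooled_var = 1.0 / np.sum(w)  # variance of the pooled estimate
    return pooled_ate, pooled_var

# e.g. private releases from three hospitals
print(aggregate_site_ates([0.12, 0.08, 0.15], [0.004, 0.006, 0.010]))
```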
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR). The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
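The three-stage flow is easy to state in code. In the sketch below, `estimate_privacy` and `verify_privacy` are hypothetical callables standing in for the paper's privacy estimator and (randomized) verifier, which are considerably more involved.

```python
def estimate_verify_release(mechanism, data, estimate_privacy,
                            verify_privacy, target_epsilon):
    """Sketch of the estimate-verify-release (EVR) paradigm."""
    # 1. Estimate: obtain a candidate privacy parameter for the mechanism.
    eps_hat = estimate_privacy(mechanism)

    # 2. Verify: check that the mechanism actually satisfies the
    #    estimated guarantee before trusting it.
    if not verify_privacy(mechanism, eps_hat):
        raise RuntimeError("estimated privacy guarantee failed verification")

    # 3. Release the output only if the verified loss fits the budget.
    if eps_hat > target_epsilon:
        raise RuntimeError("verified privacy loss exceeds the target budget")
    return mechanism(data)
```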
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
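For reference, $f$-DP (Dong, Roth, and Su) phrases privacy as a bound on the hypothesis-testing tradeoff between neighboring datasets: writing $\alpha_\phi$ and $\beta_\phi$ for the type-I and type-II errors of a rejection rule $\phi$ distinguishing $M(D)$ from $M(D')$,

$$T(P, Q)(\alpha) = \inf_{\phi}\{\, \beta_\phi : \alpha_\phi \le \alpha \,\}, \qquad M \text{ is } f\text{-DP} \iff T\big(M(D), M(D')\big) \ge f \ \text{ for all neighboring } D, D'.$$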
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
Existing auditing mechanisms for private machine learning give tight privacy estimates only under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
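A minimal version of the usual auditing conversion, standard in this literature rather than specific to the paper's improved scheme, turns a distinguishing attack's error rates into an empirical lower bound on $\varepsilon$:

```python
import numpy as np

def empirical_epsilon(tpr, fpr, delta=0.0):
    """Lower bound on epsilon implied by an attack: any (eps, delta)-DP
    mechanism forces TPR <= exp(eps) * FPR + delta."""
    if tpr <= delta or fpr <= 0.0:
        return 0.0
    return max(float(np.log((tpr - delta) / fpr)), 0.0)

# An attack with 99% TPR at 1% FPR certifies epsilon >= log(99) ~ 4.6.
print(empirical_epsilon(tpr=0.99, fpr=0.01))
```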
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- Differentially Private Estimation of Heterogeneous Causal Effects [9.355532300027727]
We introduce a general meta-algorithm for estimating conditional average treatment effects (CATE) with differential privacy guarantees.
Our meta-algorithm can work with simple, single-stage CATE estimators, such as the S-learner, and more complex multi-stage estimators, such as the DR-learner and R-learner.
arXiv Detail & Related papers (2022-02-22T17:21:18Z)
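To make the meta-algorithm idea concrete, here is a hedged sketch of the simplest instantiation: an S-learner whose released CATE predictions are clipped and noised via output perturbation. The privatization point, the per-query sensitivity argument, and the base learner are illustrative choices, not the paper's mechanism.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def dp_s_learner_cate(X, t, y, X_query, epsilon, delta, clip=1.0):
    """S-learner: one model on (X, T); CATE(x) = mu(x, 1) - mu(x, 0).
    Privacy here comes only from clipping and noising each released
    value; composing over many queries would need further accounting."""
    model = GradientBoostingRegressor()
    model.fit(np.column_stack([X, t]), y)

    ones = np.ones(len(X_query))
    zeros = np.zeros(len(X_query))
    cate = (model.predict(np.column_stack([X_query, ones]))
            - model.predict(np.column_stack([X_query, zeros])))
    cate = np.clip(cate, -clip, clip)  # bounds each release to a 2*clip range

    sigma = 2 * clip * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return cate + np.random.normal(0.0, sigma, size=cate.shape)
```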
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
In decentralized learning, the local exchange of estimates allows inference of private data. Existing privatization schemes add perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the aggregate.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
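A toy version of the nullspace condition: if the agents' perturbations are projected onto the zero-sum subspace (the nullspace of the averaging map), each exchanged estimate is masked while the network-wide average is untouched. This is our simplification of the idea, not the paper's graph-homomorphic construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Raw i.i.d. noise, projected so the perturbations sum to zero across
# agents, i.e. they lie in the nullspace of the row-averaging operator.
raw = rng.normal(size=(n_agents, dim))
perturbations = raw - raw.mean(axis=0, keepdims=True)

estimates = rng.normal(size=(n_agents, dim))  # agents' local estimates
masked = estimates + perturbations            # what actually gets exchanged

# The aggregate is unchanged: the noise is invisible to the average.
assert np.allclose(masked.mean(axis=0), estimates.mean(axis=0))
```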
- On the role of surrogates in the efficient estimation of treatment effects with limited outcome data [43.17788100119767]
We study how incorporating data on units for which only surrogate outcomes (not of primary interest) are observed can increase the precision of ATE estimation.
We develop robust ATE estimation and inference methods that realize these efficiency gains.
arXiv Detail & Related papers (2020-03-27T13:31:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.