Differentially Private Multi-Site Treatment Effect Estimation
- URL: http://arxiv.org/abs/2310.06237v1
- Date: Tue, 10 Oct 2023 01:21:01 GMT
- Title: Differentially Private Multi-Site Treatment Effect Estimation
- Authors: Tatsuki Koga, Kamalika Chaudhuri, David Page
- Abstract summary: Most patient data remains siloed in separate hospitals, preventing the design of data-driven healthcare AI systems.
We look at estimating the average treatment effect (ATE), an important task in causal inference for healthcare applications.
We address this through a class of per-site estimation algorithms that report the ATE estimate and its variance as a quality measure.
- Score: 28.13660104055298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patient privacy is a major barrier to healthcare AI. For confidentiality
reasons, most patient data remains siloed in separate hospitals, preventing
the design of data-driven healthcare AI systems that need large volumes of
patient data to make effective decisions. A solution to this is collective
learning across multiple sites through federated learning with differential
privacy. However, literature in this space typically focuses on differentially
private statistical estimation and machine learning, which is different from
the causal inference-related problems that arise in healthcare. In this work,
we take a fresh look at federated learning with a focus on causal inference;
specifically, we look at estimating the average treatment effect (ATE), an
important task in causal inference for healthcare applications, and provide a
federated analytics approach to enable ATE estimation across multiple sites
along with differential privacy (DP) guarantees at each site. The main
challenge comes from site heterogeneity -- different sites have different
sample sizes and privacy budgets. We address this through a class of per-site
estimation algorithms that report the ATE estimate and its variance as a
quality measure, and an aggregation algorithm on the server side that minimizes
the overall variance of the final ATE estimate. Our experiments on real and
synthetic data show that our method reliably aggregates private statistics
across sites and provides a better privacy-utility tradeoff under site
heterogeneity than baselines.
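To make the two-stage design concrete, here is a minimal sketch, not the authors' algorithm: it assumes each site computes a difference-in-means ATE on bounded outcomes, privatizes it with the Gaussian mechanism under its own (epsilon, delta) budget, and reports the estimate together with a variance that includes the DP noise; the server then combines the reports by inverse-variance weighting, which minimizes the variance of the combined estimate for independent, unbiased site reports. Function names and the sensitivity bound are illustrative assumptions.

```python
import numpy as np


def private_site_ate(y, t, epsilon, delta, y_range=1.0, rng=None):
    """Return a DP ATE estimate and its approximate variance for one site.

    y: outcomes, assumed clipped to [0, y_range]; t: binary treatment
    indicators (1 = treated). The sensitivity bound below is a simplifying
    assumption: outcomes are bounded and group sizes are treated as fixed.
    """
    rng = np.random.default_rng() if rng is None else rng
    y, t = np.asarray(y, dtype=float), np.asarray(t, dtype=int)
    n1, n0 = int(t.sum()), int((1 - t).sum())
    ate = y[t == 1].mean() - y[t == 0].mean()

    # Changing one record moves one group mean by at most y_range / (group size),
    # so the sensitivity of the difference-in-means is bounded accordingly.
    sensitivity = y_range / min(n1, n0)
    # Gaussian mechanism calibrated for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

    noisy_ate = ate + rng.normal(0.0, sigma)
    # Reported quality measure: sampling variance plus DP noise variance.
    variance = (y[t == 1].var(ddof=1) / n1
                + y[t == 0].var(ddof=1) / n0
                + sigma ** 2)
    return noisy_ate, variance


def aggregate_ate(estimates, variances):
    """Server-side inverse-variance weighting of per-site (estimate, variance)
    reports; minimizes the variance of the aggregate under independence."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
```

In this sketch, sites with small samples or tight privacy budgets report larger variances and therefore receive proportionally less weight, which is one way the site heterogeneity described in the abstract can be absorbed by the aggregation step.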
Related papers
- Empirical Mean and Frequency Estimation Under Heterogeneous Privacy: A Worst-Case Analysis [5.755004576310333]
Differential Privacy (DP) is the current gold-standard for measuring privacy.
We consider the problems of empirical mean estimation for univariate data and frequency estimation for categorical data, subject to heterogeneous privacy constraints.
We prove some optimality results, under both PAC error and mean-squared error, for our proposed algorithms and demonstrate superior performance over other baseline techniques experimentally.
arXiv Detail & Related papers (2024-07-15T22:46:02Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Exploratory Analysis of Federated Learning Methods with Differential Privacy on MIMIC-III [0.7349727826230862]
Federated learning methods offer the possibility of training machine learning models on privacy-sensitive data sets.
We present an evaluation of the impact of different federation and differential privacy techniques when training models on the open-source MIMIC-III dataset.
arXiv Detail & Related papers (2023-02-08T17:27:44Z)
- Decentralized Distributed Learning with Privacy-Preserving Data Synthesis [9.276097219140073]
In the medical field, multi-center collaborations are often sought to yield more generalizable findings by leveraging the heterogeneity of patient and clinical data.
Recent privacy regulations hinder the possibility of sharing data and, consequently, of developing machine learning-based solutions that support diagnosis and prognosis.
We present a decentralized distributed method that integrates features from local nodes, providing models able to generalize across multiple datasets while maintaining privacy.
arXiv Detail & Related papers (2022-06-20T23:49:38Z)
- Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm, showing that the suboptimality of the learned policies is comparable to the rate achieved as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z)
- Differentially Private Estimation of Heterogeneous Causal Effects [9.355532300027727]
We introduce a general meta-algorithm for estimating conditional average treatment effects (CATE) with differential privacy guarantees.
Our meta-algorithm can work with simple, single-stage CATE estimators such as the S-learner and more complex multi-stage estimators such as the DR-learner and R-learner.
arXiv Detail & Related papers (2022-02-22T17:21:18Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post, which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- FLOP: Federated Learning on Medical Datasets using Partial Networks [84.54663831520853]
The COVID-19 disease caused by the novel coronavirus has led to a shortage of medical resources.
Different data-driven deep learning models have been developed to assist in the diagnosis of COVID-19.
The data itself is still scarce due to patient privacy concerns.
We propose a simple yet effective algorithm, named Federated Learning on Medical datasets using Partial Networks (FLOP).
arXiv Detail & Related papers (2021-02-10T01:56:58Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Predictive Modeling of ICU Healthcare-Associated Infections from Imbalanced Data. Using Ensembles and a Clustering-Based Undersampling Approach [55.41644538483948]
This work is focused on both the identification of risk factors and the prediction of healthcare-associated infections in intensive-care units.
The aim is to support decision making addressed at reducing the incidence rate of infections.
arXiv Detail & Related papers (2020-05-07T16:13:12Z)
- Anonymizing Data for Privacy-Preserving Federated Learning [3.3673553810697827]
We propose the first syntactic approach for offering privacy in the context of federated learning.
Our approach aims to maximize utility or model performance, while supporting a defensible level of privacy.
We perform a comprehensive empirical evaluation on two important problems in the healthcare domain, using real-world electronic health data of 1 million patients.
arXiv Detail & Related papers (2020-02-21T02:30:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.