Differentially Private Federated Learning: Servers Trustworthiness, Estimation, and Statistical Inference
- URL: http://arxiv.org/abs/2404.16287v1
- Date: Thu, 25 Apr 2024 02:14:07 GMT
- Title: Differentially Private Federated Learning: Servers Trustworthiness, Estimation, and Statistical Inference
- Authors: Zhe Zhang, Ryumei Nakada, Linjun Zhang
- Abstract summary: This paper investigates the challenges of high-dimensional estimation and inference under the constraints of differential privacy.
We introduce a novel federated estimation algorithm tailored for linear regression models.
We also propose methods for statistical inference, including coordinate-wise confidence intervals for individual parameters.
- Score: 18.97060758177909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially private federated learning is crucial for maintaining privacy in distributed environments. This paper investigates the challenges of high-dimensional estimation and inference under the constraints of differential privacy. First, we study scenarios involving an untrusted central server, demonstrating the inherent difficulties of accurate estimation in high-dimensional problems. Our findings indicate that the tight minimax rates depend on the dimensionality of the data even under sparsity assumptions. Second, we consider a scenario with a trusted central server and introduce a novel federated estimation algorithm tailored for linear regression models. This algorithm effectively handles the slight variations among models distributed across different machines. We also propose methods for statistical inference, including coordinate-wise confidence intervals for individual parameters and strategies for simultaneous inference. Extensive simulation experiments support our theoretical advances, underscoring the efficacy and reliability of our approaches.
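The trusted-server setting lends itself to a brief illustration. The sketch below is not the paper's estimation algorithm; it is a minimal, hypothetical example of a central-DP aggregation step for federated linear regression, in which each machine sends a local least-squares estimate, the server clips each contribution to bound sensitivity and adds Gaussian-mechanism noise, and a naive coordinate-wise interval is formed around the private average. The function names, clipping threshold, and noise-only interval are assumptions for this example.

```python
import numpy as np

def local_ols(X, y):
    """Least-squares estimate computed on one machine's local data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def dp_federated_average(local_estimates, clip_norm, epsilon, delta, rng=None):
    """Illustrative trusted-server aggregation (not the paper's algorithm):
    clip each local estimate in L2 norm, average, and add Gaussian noise
    calibrated to the sensitivity of the average."""
    rng = np.random.default_rng() if rng is None else rng
    m = len(local_estimates)
    clipped = [b * min(1.0, clip_norm / max(np.linalg.norm(b), 1e-12))
               for b in local_estimates]
    avg = np.mean(clipped, axis=0)
    # Replacing one machine's contribution moves the average by at most 2*clip_norm/m.
    sensitivity = 2.0 * clip_norm / m
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    private_estimate = avg + rng.normal(scale=sigma, size=avg.shape)
    # Naive coordinate-wise interval reflecting only the injected privacy noise;
    # the paper's intervals must also account for sampling error and heterogeneity.
    half_width = 1.96 * sigma
    return private_estimate, private_estimate - half_width, private_estimate + half_width
```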
Related papers
- Distributed, communication-efficient, and differentially private estimation of KL divergence [15.294136011320433]
A key task in managing distributed, sensitive data is to measure the extent to which a distribution changes.
We describe novel algorithmic approaches for estimating the KL divergence of data across federated models of computation, under differential privacy.
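The summary above does not detail the algorithms; as a rough, hypothetical illustration of the general recipe (assuming discrete data over a known finite alphabet and local Laplace noise on histogram counts), a plug-in KL estimate from privatized counts could look like the sketch below. The function names and noise calibration are assumptions for this example, not the cited paper's method.

```python
import numpy as np

def noisy_histogram(samples, alphabet_size, epsilon, rng=None):
    """One client's Laplace-noised counts. Under replace-one-record adjacency the
    histogram's L1 sensitivity is 2, so scale 2/epsilon yields epsilon-DP."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(samples, minlength=alphabet_size).astype(float)
    return counts + rng.laplace(scale=2.0 / epsilon, size=alphabet_size)

def plug_in_kl(agg_counts_p, agg_counts_q, floor=1e-6):
    """Plug-in KL(P || Q) from aggregated noisy counts, clamped away from zero."""
    p = np.maximum(agg_counts_p, floor)
    q = np.maximum(agg_counts_q, floor)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Federated use (illustrative): the server sums the noisy histograms received from
# clients holding samples of P and of Q, then evaluates plug_in_kl on the two sums.
```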
arXiv Detail & Related papers (2024-11-25T15:20:40Z)
- Optimal Federated Learning for Nonparametric Regression with Heterogeneous Distributed Differential Privacy Constraints [5.3595271893779906]
We study federated learning for nonparametric regression in the context of samples distributed across different servers.
The findings shed light on the tradeoff between statistical accuracy and privacy preservation.
arXiv Detail & Related papers (2024-06-10T19:34:07Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Communication-Efficient Distributed Estimation and Inference for Cox's Model [4.731404257629232]
We develop communication-efficient iterative distributed algorithms for estimation and inference in the high-dimensional sparse Cox proportional hazards model.
To construct confidence intervals for linear combinations of high-dimensional hazard regression coefficients, we introduce a novel debiased method.
We provide valid and powerful distributed hypothesis tests for any coordinate element based on a decorrelated score test.
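The debiased construction specific to the Cox model is not spelled out in this summary; as background, a generic one-step debiasing template for coordinate-wise inference reads as follows, where the initial estimator is a regularized fit, the gradient is taken of the negative (partial) log-likelihood, and the matrix estimate serves to decorrelate the score. This is a standard template, not necessarily the exact estimator of the cited paper; when the matrix consistently estimates the inverse information, the plug-in standard error gives asymptotically valid intervals.

```latex
\widehat{\beta}^{\mathrm{db}}
  = \widehat{\beta} - \widehat{\Theta}\,\nabla \ell_n(\widehat{\beta}),
\qquad
\widehat{\beta}^{\mathrm{db}}_j \;\pm\; z_{1-\alpha/2}\,\sqrt{\widehat{\Theta}_{jj}/n}
```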
arXiv Detail & Related papers (2023-02-23T15:50:17Z)
- Differentially private partitioned variational inference [28.96767727430277]
Learning a privacy-preserving model from sensitive data distributed across multiple devices is an increasingly important problem.
We present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated setting.
arXiv Detail & Related papers (2022-09-23T13:58:40Z)
- Non-parametric Differentially Private Confidence Intervals for the Median [3.205141100055992]
This paper proposes and evaluates several strategies to compute valid differentially private confidence intervals for the median.
We also illustrate that jointly addressing both sources of uncertainty, the error from sampling and the error from protecting the output, should be preferred over simpler approaches that incorporate the uncertainty sequentially.
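The interval constructions evaluated in the cited paper are not reproduced here; as background on the kind of mechanism involved, the sketch below shows one standard building block, a differentially private median chosen with the exponential mechanism, under the assumption that the data lie in a known bounded range. The candidate grid and function name are illustrative choices.

```python
import numpy as np

def dp_median(x, epsilon, lo=0.0, hi=1.0, grid_size=1000, rng=None):
    """Exponential-mechanism median for data assumed to lie in [lo, hi].
    Utility of candidate m is minus the rank error |#{x_i <= m} - n/2|; changing
    one record shifts that count by at most 1, so the utility sensitivity is 1."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    candidates = np.linspace(lo, hi, grid_size)
    rank_error = np.abs((x[None, :] <= candidates[:, None]).sum(axis=1) - len(x) / 2)
    # Gumbel-max trick: argmax of score + Gumbel noise samples a candidate with
    # probability proportional to exp(epsilon * utility / 2).
    scores = -epsilon * rank_error / 2.0 + rng.gumbel(size=grid_size)
    return candidates[int(np.argmax(scores))]
```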
arXiv Detail & Related papers (2021-06-18T19:45:37Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze uncertainty quantification (UQ) methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical-computational trade-offs for different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
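As a reference point for the ingredients named above, the sketch below shows a single generic DP-SGD-style update: per-example gradients are clipped in L2 norm and Gaussian noise is added to their sum before the averaged step is applied. This is a schematic NumPy illustration, not the training setup evaluated in the cited paper; `per_example_grads` is assumed to be supplied by surrounding training code.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, rng=None):
    """One differentially private SGD step: clip each per-example gradient to
    L2 norm clip_norm, sum, add Gaussian noise with std noise_multiplier*clip_norm,
    and average before applying the update."""
    rng = np.random.default_rng() if rng is None else rng
    batch_size = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / batch_size
```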
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
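At a high level, distributionally robust training replaces the empirical risk with a worst case over distributions near the empirical one. The generic template below illustrates that idea, not necessarily the exact formulation of the cited paper: the inner supremum ranges over distributions within radius rho of the empirical distribution under a discrepancy such as a Wasserstein distance, and the outer minimization is over the model parameters.

```latex
\min_{\theta}\;
\sup_{Q \,:\, D(Q,\,\widehat{P}_n)\,\le\,\rho}\;
\mathbb{E}_{z \sim Q}\big[\ell(\theta; z)\big]
```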
arXiv Detail & Related papers (2020-07-07T18:25:25Z)