Parametric Bootstrap for Differentially Private Confidence Intervals
- URL: http://arxiv.org/abs/2006.07749v2
- Date: Tue, 12 Oct 2021 15:02:53 GMT
- Title: Parametric Bootstrap for Differentially Private Confidence Intervals
- Authors: Cecilia Ferrando, Shufan Wang, Daniel Sheldon
- Abstract summary: We develop a practical and general-purpose approach to construct confidence intervals for differentially private parametric estimation.
We find that the parametric bootstrap is a simple and effective solution.
- Score: 8.781431682774484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of this paper is to develop a practical and general-purpose approach
to construct confidence intervals for differentially private parametric
estimation. We find that the parametric bootstrap is a simple and effective
solution. It cleanly reasons about variability of both the data sample and the
randomized privacy mechanism and applies "out of the box" to a wide class of
private estimation routines. It can also help correct bias caused by clipping
data to limit sensitivity. We prove that the parametric bootstrap gives
consistent confidence intervals in two broadly relevant settings, including a
novel adaptation to linear regression that avoids accessing the covariate data
multiple times. We demonstrate its effectiveness for a variety of estimators,
and find that it provides confidence intervals with good coverage even at
modest sample sizes and performs better than alternative approaches.
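As a concrete illustration of the recipe above, here is a minimal Python sketch for a single privately released clipped mean under a Gaussian data model. It is an assumption-laden sketch, not the paper's code: it takes the data scale sigma as known, uses a plain Laplace mechanism, and returns a simple percentile interval, whereas in the paper nuisance parameters are also estimated privately and clipping bias is handled more carefully. Names such as private_clipped_mean and parametric_bootstrap_ci are illustrative.

```python
import numpy as np

def private_clipped_mean(x, lo, hi, epsilon, rng):
    """Release an epsilon-DP mean: clip each value to [lo, hi], average,
    and add Laplace noise scaled to the clipped mean's sensitivity."""
    n = len(x)
    sensitivity = (hi - lo) / n
    return np.clip(x, lo, hi).mean() + rng.laplace(0.0, sensitivity / epsilon)

def parametric_bootstrap_ci(theta_hat, sigma, n, lo, hi, epsilon,
                            B=2000, alpha=0.05, seed=0):
    """Percentile interval from B replicates: each replicate simulates a fresh
    dataset from the fitted model AND re-runs the private estimator, so the
    spread reflects both sampling variability and mechanism noise."""
    rng = np.random.default_rng(seed)
    reps = np.empty(B)
    for b in range(B):
        x_star = rng.normal(theta_hat, sigma, size=n)  # synthetic data from fitted model
        reps[b] = private_clipped_mean(x_star, lo, hi, epsilon, rng)
    return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

# Usage: release a private point estimate, then bootstrap an interval around it.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=500)
theta_hat = private_clipped_mean(data, -5.0, 5.0, epsilon=1.0, rng=rng)
print(theta_hat, parametric_bootstrap_ci(theta_hat, sigma=1.0, n=500,
                                         lo=-5.0, hi=5.0, epsilon=1.0))
```

Because every replicate passes through the same randomized mechanism, the extra width contributed by privacy noise is captured automatically, which is what lets the approach apply "out of the box" across estimators.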
Related papers
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Resampling methods for private statistical inference [1.8110941972682346]
We consider the task of constructing confidence intervals with differential privacy.
We propose two private variants of the non-parametric bootstrap, which privately compute the median of the results of multiple "little" bootstraps run on partitions of the data.
For a fixed differential privacy parameter $\epsilon$, our methods enjoy the same error rates as the non-private bootstrap, up to logarithmic factors in the sample size $n$.
arXiv Detail & Related papers (2024-02-11T08:59:02Z)
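A minimal sketch of the "median of little bootstraps" idea described in the entry above, under strong simplifying assumptions: a mean estimator, disjoint partitions, and an exponential-mechanism median over a bounded grid. The paper's algorithms and error analysis differ in the details; names like private_median and partitioned_bootstrap_estimates are illustrative.

```python
import numpy as np

def private_median(values, lo, hi, epsilon, grid_size=512, rng=None):
    """Epsilon-DP median via the exponential mechanism over a bounded grid.
    A candidate's utility is minus its rank distance from the median, which
    changes by at most 1 when a single underlying record changes."""
    rng = rng or np.random.default_rng()
    grid = np.linspace(lo, hi, grid_size)
    ranks = np.array([(values < c).sum() for c in grid])
    logits = epsilon * -np.abs(ranks - len(values) / 2) / 2.0
    probs = np.exp(logits - logits.max())
    return rng.choice(grid, p=probs / probs.sum())

def partitioned_bootstrap_estimates(x, k, n_boot=200, rng=None):
    """Run a small nonparametric bootstrap of the mean on each of k disjoint
    partitions, returning one estimate per partition."""
    rng = rng or np.random.default_rng()
    return np.array([
        np.mean([rng.choice(part, size=len(part), replace=True).mean()
                 for _ in range(n_boot)])
        for part in np.array_split(x, k)
    ])

# Private point estimate: DP median of the per-partition bootstrap results.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=2000)
per_part = partitioned_bootstrap_estimates(x, k=50, rng=rng)
print(private_median(per_part, lo=-2.0, hi=2.0, epsilon=1.0, rng=rng))
```

Applying the same private-median step to per-partition bootstrap quantiles, rather than means, would give interval endpoints in the same spirit.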
- Finite Sample Confidence Regions for Linear Regression Parameters Using Arbitrary Predictors [1.6860963320038902]
We explore a novel methodology for constructing confidence regions for parameters of linear models, using predictions from any arbitrary predictor.
The derived confidence regions can be cast as constraints within a Mixed Linear Programming framework, enabling optimisation of linear objectives.
Unlike previous methods, the confidence region can be empty, which can be used for hypothesis testing.
arXiv Detail & Related papers (2024-01-27T00:15:48Z)
- Show Your Work with Confidence: Confidence Bands for Tuning Curves [51.12106543561089]
Tuning curves plot validation performance as a function of tuning effort.
We present the first method to construct valid confidence bands for tuning curves.
We validate our design with ablations, analyze the effect of sample size, and provide guidance on comparing models with our method.
arXiv Detail & Related papers (2023-11-16T00:50:37Z)
- Simulation-based, Finite-sample Inference for Privatized Data [14.218697973204065]
We propose a simulation-based "repro sample" approach to produce statistically valid confidence intervals and hypothesis tests.
We show that this methodology is applicable to a wide variety of private inference problems.
arXiv Detail & Related papers (2023-03-09T15:19:31Z)
- Analyzing the Differentially Private Theil-Sen Estimator for Simple Linear Regression [0.9208007322096533]
We provide a rigorous, finite-sample analysis of DPTheilSen's privacy and accuracy properties.
We show how to produce differentially private confidence intervals to accompany its point estimates.
arXiv Detail & Related papers (2022-07-27T04:38:37Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
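A minimal sketch of the thresholding idea in the ATC entry above, assuming max-softmax confidence scores; the paper also considers other scores (such as negative entropy) and further calibration details, and the function names here are illustrative.

```python
import numpy as np

def learn_threshold(source_confidence, source_correct):
    """ATC-style calibration on labeled source data: choose a threshold t so the
    fraction of source examples with confidence above t equals source accuracy."""
    accuracy = source_correct.mean()
    return np.quantile(source_confidence, 1.0 - accuracy)

def predict_target_accuracy(target_confidence, threshold):
    """Predicted target accuracy: fraction of unlabeled target examples whose
    confidence exceeds the learned threshold."""
    return float((target_confidence > threshold).mean())

# Usage with per-example max-softmax confidences and 0/1 correctness labels.
src_conf = np.array([0.9, 0.8, 0.95, 0.6, 0.7])
src_correct = np.array([1, 1, 1, 0, 0])
t = learn_threshold(src_conf, src_correct)
print(predict_target_accuracy(np.array([0.85, 0.55, 0.92, 0.4]), t))
```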
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Differentially private inference via noisy optimization [3.015622397986615]
We show that robust statistics can be used in conjunction with noisy gradient descent or noisy Newton methods to obtain optimal private estimators.
We demonstrate the effectiveness of a bias correction that leads to enhanced small-sample empirical performance in simulations.
arXiv Detail & Related papers (2021-03-19T19:55:55Z)
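A minimal sketch of the noisy-gradient-descent idea in the entry above, for a one-dimensional location estimate with a bounded (Huber) score. It uses Laplace noise and naive composition across iterations for simplicity; the paper's estimators, noise calibration, and privacy accounting are more refined, and the names are illustrative.

```python
import numpy as np

def huber_psi(r, k=1.345):
    """Bounded Huber score: clipping the residual bounds each record's influence,
    which bounds the sensitivity of the averaged gradient."""
    return np.clip(r, -k, k)

def noisy_gradient_descent(x, epsilon, T=50, lr=0.5, k=1.345, seed=0):
    """Location M-estimation by noisy gradient descent: each step averages the
    bounded score over the data and adds Laplace noise calibrated to that
    average's sensitivity, splitting the privacy budget evenly across T steps."""
    rng = np.random.default_rng(seed)
    n = len(x)
    eps_step = epsilon / T           # naive composition over T noisy releases
    sensitivity = 2.0 * k / n        # one record changes the average by <= 2k/n
    theta = 0.0
    for _ in range(T):
        grad = -huber_psi(x - theta, k).mean()
        theta -= lr * (grad + rng.laplace(0.0, sensitivity / eps_step))
    return theta

print(noisy_gradient_descent(np.random.default_rng(1).normal(3.0, 1.0, size=400),
                             epsilon=1.0))
```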
- CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.