Private Gradient Descent for Linear Regression: Tighter Error Bounds and
Instance-Specific Uncertainty Estimation
- URL: http://arxiv.org/abs/2402.13531v1
- Date: Wed, 21 Feb 2024 04:58:41 GMT
- Title: Private Gradient Descent for Linear Regression: Tighter Error Bounds and
Instance-Specific Uncertainty Estimation
- Authors: Gavin Brown, Krishnamurthy Dvijotham, Georgina Evans, Daogao Liu, Adam
Smith, Abhradeep Thakurta
- Abstract summary: We provide an improved analysis of standard differentially private gradient descent for linear regression under the squared error loss.
Our analysis leads to new results on the algorithm's accuracy.
- Score: 17.84129947587373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide an improved analysis of standard differentially private gradient
descent for linear regression under the squared error loss. Under modest
assumptions on the input, we characterize the distribution of the iterate at
each time step.
Our analysis leads to new results on the algorithm's accuracy: for a proper
fixed choice of hyperparameters, the sample complexity depends only linearly on
the dimension of the data. This matches the dimension-dependence of the
(non-private) ordinary least squares estimator as well as that of recent
private algorithms that rely on sophisticated adaptive gradient-clipping
schemes (Varshney et al., 2022; Liu et al., 2023).
Our analysis of the iterates' distribution also allows us to construct
confidence intervals for the empirical optimizer which adapt automatically to
the variance of the algorithm on a particular data set. We validate our
theorems through experiments on synthetic data.
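To make the algorithm under analysis concrete, here is a minimal sketch of standard differentially private gradient descent for linear regression: per-example gradients of the squared error loss are clipped and perturbed with Gaussian noise at each step. The function name and all hyperparameter values (clipping norm, noise multiplier, step size, iteration count) are illustrative assumptions rather than the settings analyzed in the paper, and calibrating the noise to a target (epsilon, delta) via the Gaussian mechanism and composition is omitted.

```python
# Minimal sketch (assumed hyperparameters, not the paper's analysis settings):
# noisy clipped gradient descent on the squared error loss.
import numpy as np

def dp_gd_linear_regression(X, y, steps=100, lr=0.1, clip=1.0,
                            noise_multiplier=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        # Per-example gradients of (1/2) * (x_i^T theta - y_i)^2.
        residuals = X @ theta - y               # shape (n,)
        grads = residuals[:, None] * X          # shape (n, d)
        # Clip each per-example gradient to L2 norm at most `clip`.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        # Average, then add Gaussian noise scaled to the clipping norm.
        noisy_grad = grads.mean(axis=0) + rng.normal(
            scale=noise_multiplier * clip / n, size=d)
        theta = theta - lr * noisy_grad
    return theta
```

The explicit Gaussian noise added at each step is the only randomness in the update, which is the kind of structure that makes the per-step iterate distribution tractable to characterize, as the abstract describes.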
Related papers
- The High Line: Exact Risk and Learning Rate Curves of Stochastic Adaptive Learning Rate Algorithms [8.681909776958184]
We develop a framework for analyzing the training and learning rate dynamics on a large class of high-dimensional optimization problems.
We give exact expressions for the risk and learning rate curves in terms of a deterministic solution to a system of ODEs.
We investigate in detail two adaptive learning rates: an idealized exact line search and AdaGrad-Norm on the least squares problem (a minimal AdaGrad-Norm sketch appears after this list).
arXiv Detail & Related papers (2024-05-30T00:27:52Z)
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Improved Differentially Private Regression via Gradient Boosting [38.09948758699131]
We give a new algorithm for private linear regression based on gradient boosting.
In addition to a comprehensive set of experiments, we give theoretical insights that explain the algorithm's behavior.
arXiv Detail & Related papers (2023-03-06T19:20:52Z)
- Optimistic Rates: A Unifying Theory for Interpolation Learning and Regularization in Linear Regression [35.78863301525758]
We study a localized notion of uniform convergence known as an "optimistic rate".
Our refined analysis avoids the hidden constant and logarithmic factor in existing results.
arXiv Detail & Related papers (2021-12-08T18:55:00Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Optimal Rates for Random Order Online Optimization [60.011653053877126]
We study the random order online optimization setting of Garber et al. (2020), where the loss functions may be chosen by an adversary, but are then presented online in a uniformly random order.
We give algorithms that achieve the optimal bounds and significantly improve over those of Garber et al. (2020).
arXiv Detail & Related papers (2021-06-29T09:48:46Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central to preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression (a minimal sketch of this iteration appears after this list).
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that of ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
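As referenced in the first related paper above, AdaGrad-Norm maintains a single scalar step size that shrinks with the accumulated squared gradient norm. Below is a minimal sketch on a least squares objective; the function name, the constants eta and b0, and the use of full (rather than stochastic) gradients are illustrative assumptions, not the exact setup of that paper.

```python
# Minimal sketch of AdaGrad-Norm on least squares (assumed constants and
# full-batch gradients; not the exact setup analyzed in the related paper).
import numpy as np

def adagrad_norm_least_squares(X, y, steps=500, eta=1.0, b0=1.0):
    n, d = X.shape
    theta = np.zeros(d)
    b_sq = b0 ** 2                           # accumulated squared gradient norm
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / n     # gradient of (1/2n) * ||X theta - y||^2
        b_sq += grad @ grad                  # AdaGrad-Norm accumulator update
        theta = theta - (eta / np.sqrt(b_sq)) * grad
    return theta
```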
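Similarly, the benign-overfitting entry above concerns constant-stepsize SGD for linear regression; in its simplest single-sample form, the iteration is as sketched below. The step size, uniform sampling with replacement, and returning the final iterate are illustrative choices and not necessarily the variant that paper analyzes.

```python
# Minimal sketch of constant-stepsize SGD for linear regression (assumed
# step size and sampling scheme; illustrative only).
import numpy as np

def constant_stepsize_sgd(X, y, steps=10000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)                  # pick one example uniformly at random
        grad = (X[i] @ theta - y[i]) * X[i]  # stochastic gradient of the squared loss
        theta = theta - lr * grad            # step size stays constant throughout
    return theta
```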