Differentially Private Generalized Linear Models Revisited
- URL: http://arxiv.org/abs/2205.03014v2
- Date: Wed, 6 Mar 2024 17:22:11 GMT
- Title: Differentially Private Generalized Linear Models Revisited
- Authors: Raman Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart,
Enayat Ullah
- Abstract summary: We study the problem of $(\epsilon,\delta)$-differentially private learning of linear predictors with convex losses.
In particular, we show a lower bound on the excess population risk of $\tilde{\Omega}\left(\frac{1}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert^{4/3}}{(n\epsilon)^{2/3}}, \frac{\sqrt{d}\Vert w^*\Vert}{n\epsilon}\right\}\right)$.
- Score: 30.298342283075176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of $(\epsilon,\delta)$-differentially private learning
of linear predictors with convex losses. We provide results for two subclasses
of loss functions. The first case is when the loss is smooth and non-negative
but not necessarily Lipschitz (such as the squared loss). For this case, we
establish an upper bound on the excess population risk of
$\tilde{O}\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*
\Vert^2}{(n\epsilon)^{2/3}},\frac{\sqrt{d}\Vert
w^*\Vert^2}{n\epsilon}\right\}\right)$, where $n$ is the number of samples, $d$
is the dimension of the problem, and $w^*$ is the minimizer of the population
risk. Apart from the dependence on $\Vert w^\ast\Vert$, our bound is
essentially tight in all parameters. In particular, we show a lower bound of
$\tilde{\Omega}\left(\frac{1}{\sqrt{n}} + {\min\left\{\frac{\Vert
w^*\Vert^{4/3}}{(n\epsilon)^{2/3}}, \frac{\sqrt{d}\Vert
w^*\Vert}{n\epsilon}\right\}}\right)$. We also revisit the previously studied
case of Lipschitz losses [SSTT20]. For this case, we close the gap in the
existing work and show that the optimal rate is (up to log factors)
$\Theta\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert
w^*\Vert}{\sqrt{n\epsilon}},\frac{\sqrt{\text{rank}}\Vert
w^*\Vert}{n\epsilon}\right\}\right)$, where $\text{rank}$ is the rank of the
design matrix. This improves over existing work in the high privacy regime.
Finally, our algorithms involve a private model selection approach that we
develop to enable attaining the stated rates without a-priori knowledge of
$\Vert w^*\Vert$.
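The abstract's last point, attaining the stated rates without prior knowledge of $\Vert w^*\Vert$ via private model selection, can be made concrete with a rough sketch. The Python code below is not the paper's algorithm: it pairs a generic differentially private noisy projected gradient method for the squared loss with an exponential-mechanism-style selection over a doubling grid of candidate norm bounds; every function name, constant, and the noise calibration are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the paper's algorithm. All names, constants,
# and the privacy calibration below are assumptions made for illustration.
import numpy as np

def dp_glm_gd(X, y, radius, epsilon, delta, steps=50, lr=0.1, clip=1.0):
    """Noisy projected gradient descent for the squared loss, constrained to a
    ball of the given radius. Per-example gradients are clipped so the Gaussian
    noise can be calibrated to the resulting sensitivity."""
    n, d = X.shape
    w = np.zeros(d)
    # Rough Gaussian-mechanism calibration under basic composition (assumption).
    sigma = clip * np.sqrt(2 * np.log(1.25 / delta)) * steps / (n * epsilon)
    for _ in range(steps):
        grads = (X @ w - y)[:, None] * X                  # per-example gradients
        scale = np.maximum(1.0, np.linalg.norm(grads, axis=1) / clip)
        grad = (grads / scale[:, None]).mean(axis=0)      # clipped mean gradient
        grad += np.random.normal(0.0, sigma, size=d)      # Gaussian noise for DP
        w -= lr * grad
        w *= min(1.0, radius / max(np.linalg.norm(w), 1e-12))  # project to ball
    return w

def private_select(X, y, candidates, epsilon):
    """Exponential-mechanism-style choice among candidate models, scored by
    empirical squared loss (sensitivity handling simplified for illustration)."""
    losses = np.array([np.mean((X @ w - y) ** 2) for w in candidates])
    scores = -0.5 * epsilon * len(y) * losses
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return candidates[np.random.choice(len(candidates), p=probs)]

# Usage: train over a doubling grid of radii (no a-priori bound on ||w*||),
# then pick one candidate privately, splitting the budget across the stages.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
w_star = rng.normal(size=10)
y = X @ w_star + 0.1 * rng.normal(size=500)
radii = [2.0 ** k for k in range(-2, 5)]
models = [dp_glm_gd(X, y, r, epsilon=0.5, delta=1e-5) for r in radii]
w_hat = private_select(X, y, models, epsilon=0.5)
```

In this sketch the privacy budget is simply split between training and selection; the paper's actual procedure and its accounting differ and should be consulted for the guarantees quoted above.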
Related papers
- On the $O(\frac{\sqrt{d}}{K^{1/4}})$ Convergence Rate of AdamW Measured by $\ell_1$ Norm [54.28350823319057]
This paper establishes the convergence rate $\frac{1}{K}\sum_{k=1}^{K}E\left[\Vert\nabla f(x^k)\Vert_1\right]\leq O\left(\frac{\sqrt{d}C}{K^{1/4}}\right)$ for AdamW measured by the $\ell_1$ norm, where $K$ represents the iteration number, $d$ denotes the model dimension, and $C$ matches the constant in the optimal convergence rate of SGD.
arXiv Detail & Related papers (2025-05-17T05:02:52Z) - Low-rank Matrix Bandits with Heavy-tailed Rewards [55.03293214439741]
We study the problem of low-rank matrix bandit with heavy-tailed rewards (LowHTR).
By utilizing truncation of the observed payoffs and dynamic exploration, we propose a novel algorithm called LOTUS.
arXiv Detail & Related papers (2024-04-26T21:54:31Z) - $\ell_p$-Regression in the Arbitrary Partition Model of Communication [59.89387020011663]
We consider the randomized communication complexity of the distributed $\ell_p$-regression problem in the coordinator model.
For $p = 2$, i.e., least squares regression, we give the first optimal bound of $\tilde{\Theta}(sd^2 + sd/\epsilon)$ bits.
For $p \in (1,2)$, we obtain an $\tilde{O}(sd^2/\epsilon + sd/\mathrm{poly}(\epsilon))$ upper bound.
arXiv Detail & Related papers (2023-07-11T08:51:53Z) - Optimal Sketching Bounds for Sparse Linear Regression [116.30196615349226]
We study oblivious sketching for $k$-sparse linear regression under various loss functions, such as an $\ell_p$ norm or a broad class of hinge-like loss functions.
We show that for sparse $\ell_2$ regression, there is an oblivious distribution over sketches with $\Theta(k\log(d/k)/\varepsilon^2)$ rows, which is tight up to a constant factor.
We also show that sketching with $O(\mu^2 k\log(\mu n d/\varepsilon)/\varepsilon^2)$ rows ...
arXiv Detail & Related papers (2023-04-05T07:24:19Z) - Improved Regret Bounds for Online Kernel Selection under Bandit Feedback [13.510889339096117]
We prove two types of regret bounds improving the previous bound.
We apply the two algorithms to online kernel selection with time constraints and prove new regret bounds matching or improving the previous expected bound of $O\left(\sqrt{T\ln K} + \Vert f\Vert^2_{\mathcal{H}_i}\max\left\{\sqrt{T}, \frac{T}{\sqrt{\mathcal{R}}}\right\}\right)$, where $\mathcal{R}$ is the time budget.
arXiv Detail & Related papers (2023-03-09T03:40:34Z) - Faster Rates of Convergence to Stationary Points in Differentially
Private Optimization [31.46358820048179]
We study the problem of approximating stationary points of Lipschitz and smooth functions under $(\varepsilon,\delta)$-differential privacy (DP).
A point $\widehat{w}$ is called an $\alpha$-stationary point of a function $F:\mathbb{R}^d\rightarrow\mathbb{R}$ if $\Vert\nabla F(\widehat{w})\Vert\leq \alpha$.
We provide a new efficient algorithm that finds an $\tilde{O}(\cdots)$-stationary point ...
arXiv Detail & Related papers (2022-06-02T02:43:44Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Differentially Private $\ell_1$-norm Linear Regression with Heavy-tailed
Data [22.233705161499273]
We focus on $\ell_1$-norm linear regression in the $\epsilon$-DP model.
We show that it is possible to achieve an upper bound of $\tilde{O}(\sqrt{\frac{d}{n\epsilon}})$ with high probability.
Our algorithms can also be extended to more relaxed cases where only each coordinate of the data has bounded moments.
arXiv Detail & Related papers (2022-01-10T08:17:05Z) - Faster Rates of Differentially Private Stochastic Convex Optimization [7.93728520583825]
We study the case where the population risk function satisfies the Tsybakov Noise Condition (TNC) with some parameter $\theta>1$.
In the second part, we focus on a special case where the population risk function is strongly convex.
arXiv Detail & Related papers (2021-07-31T22:23:39Z) - Nearly Horizon-Free Offline Reinforcement Learning [97.36751930393245]
We revisit offline reinforcement learning on episodic time-homogeneous Markov Decision Processes with $S$ states, $A$ actions and planning horizon $H$.
We obtain the first set of nearly $H$-free sample complexity bounds for evaluation and planning using the empirical MDPs.
arXiv Detail & Related papers (2021-03-25T18:52:17Z) - Infinite-Horizon Offline Reinforcement Learning with Linear Function
Approximation: Curse of Dimensionality and Algorithm [46.36534144138337]
In this paper, we investigate the sample complexity of policy evaluation in offline reinforcement learning.
Under the low distribution shift assumption, we show that there is an algorithm that needs at most $O\left(\max\left\{\frac{\left\Vert \theta^{\pi}\right\Vert_2^4}{\varepsilon^4}\log\frac{d}{\delta}, \frac{1}{\varepsilon^2}\left(d+\log\frac{1}{\delta}\right)\right\}\right)$ samples to approximate the ...
arXiv Detail & Related papers (2021-03-17T18:18:57Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where the cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Revisiting EXTRA for Smooth Distributed Optimization [70.65867695317633]
We give a sharp complexity analysis for EXTRA with the improved complexity $O\left(\left(\frac{L}{\mu}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$.
Our communication complexities of the accelerated EXTRA are only worse by factors of $\left(\log\frac{L}{\mu(1-\sigma_2(W))}\right)$ and $\left(\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$.
arXiv Detail & Related papers (2020-02-24T08:07:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.