Differentially Private Nonparametric Regression Under a Growth Condition
- URL: http://arxiv.org/abs/2111.12786v1
- Date: Wed, 24 Nov 2021 20:36:01 GMT
- Title: Differentially Private Nonparametric Regression Under a Growth Condition
- Authors: Noah Golowich
- Abstract summary: Given a real-valued hypothesis class $\mathcal{H}$, we investigate under what conditions there is a differentially private algorithm which learns an optimal hypothesis.
We show that under the relaxed condition $\liminf_{\eta \downarrow 0} \eta \cdot {\rm sfat}_\eta(\mathcal{H}) = 0$, $\mathcal{H}$ is privately learnable.
- Score: 9.416757363901295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a real-valued hypothesis class $\mathcal{H}$, we investigate under what
conditions there is a differentially private algorithm which learns an optimal
hypothesis from $\mathcal{H}$ given i.i.d. data. Inspired by recent results for
the related setting of binary classification (Alon et al., 2019; Bun et al.,
2020), where it was shown that online learnability of a binary class is
necessary and sufficient for its private learnability, Jung et al. (2020)
showed that in the setting of regression, online learnability of $\mathcal{H}$
is necessary for private learnability. Here online learnability of
$\mathcal{H}$ is characterized by the finiteness of its $\eta$-sequential fat
shattering dimension, ${\rm sfat}_\eta(\mathcal{H})$, for all $\eta > 0$. In
terms of sufficient conditions for private learnability, Jung et al. (2020)
showed that $\mathcal{H}$ is privately learnable if $\lim_{\eta \downarrow 0}
{\rm sfat}_\eta(\mathcal{H})$ is finite, which is a fairly restrictive
condition. We show that under the relaxed condition $\liminf_{\eta \downarrow
0} \eta \cdot {\rm sfat}_\eta(\mathcal{H}) = 0$, $\mathcal{H}$ is privately
learnable, establishing the first nonparametric private learnability guarantee
for classes $\mathcal{H}$ with ${\rm sfat}_\eta(\mathcal{H})$ diverging as
$\eta \downarrow 0$. Our techniques involve a novel filtering procedure to
output stable hypotheses for nonparametric function classes.
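To make the growth condition concrete, here is a hypothetical illustration (not an example from the paper): a class whose sequential fat-shattering dimension grows logarithmically has ${\rm sfat}_\eta(\mathcal{H})$ diverging as $\eta \downarrow 0$, so the earlier sufficient condition of Jung et al. (2020) fails, yet the relaxed condition still holds:

```latex
% Hypothetical class (illustration only, assumed for this sketch):
% sfat_eta(H) diverges as eta -> 0, yet eta * sfat_eta(H) -> 0.
\[
  {\rm sfat}_\eta(\mathcal{H}) = \log(1/\eta)
  \quad\Longrightarrow\quad
  \liminf_{\eta \downarrow 0} \eta \cdot {\rm sfat}_\eta(\mathcal{H})
  = \lim_{\eta \downarrow 0} \eta \log(1/\eta) = 0.
\]
```

Any such class would be privately learnable under the relaxed condition, even though $\lim_{\eta \downarrow 0} {\rm sfat}_\eta(\mathcal{H})$ is infinite.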
Related papers
- Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation [67.8414514524356]
We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space.
Despite its benefits, introducing non-linear function approximation raises significant challenges in both computational and statistical efficiency.
arXiv Detail & Related papers (2024-05-27T11:31:54Z)
- Online Distribution Learning with Local Private Constraints [35.20121852444787]
We study the problem of online conditional distribution estimation with unbounded label sets under local differential privacy.
We show that under $(\epsilon,0)$-local differential privacy of the privatized labels, the KL-risk grows as $\tilde{\Theta}(\frac{1}{\epsilon}\sqrt{KT})$ up to poly-logarithmic factors.
arXiv Detail & Related papers (2024-02-01T03:56:48Z)
- Estimation and Inference in Distributional Reinforcement Learning [28.253677740976197]
We show that a dataset of size $\widetilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|}{\epsilon^2 (1-\gamma)^4}\right)$ suffices to ensure the Kolmogorov metric and total variation metric between $\hat{\eta}^\pi$ and $\eta^\pi$ are below $\epsilon$ with high probability.
Our findings give rise to a unified approach to statistical inference of a wide class of statistical functionals of $\eta^\pi$.
arXiv Detail & Related papers (2023-09-29T14:14:53Z)
- On the Optimality of Misspecified Kernel Ridge Regression [13.995944403996566]
We show that KRR is minimax optimal for any $s \in (0,1)$ when $\mathcal{H}$ is a Sobolev RKHS.
arXiv Detail & Related papers (2023-05-12T04:12:12Z)
- On Optimal Learning Under Targeted Data Poisoning [48.907813854832206]
In this work we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary.
Remarkably, we show that the upper bound can be attained by a deterministic learner.
arXiv Detail & Related papers (2022-10-06T06:49:48Z)
- Littlestone Classes are Privately Online Learnable [28.04975353867202]
We consider the problem of online classification under a privacy constraint.
In this setting a learner observes sequentially a stream of labelled examples $(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration a hypothesis $h_t$ which is used to predict the label of each new example $x_t$.
The learner's performance is measured by her regret against a known hypothesis class $\mathcal{H}$.
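The protocol in this summary can be sketched in a few lines. The following is an illustrative, non-private follow-the-leader baseline over a finite hypothesis class; all function and variable names are invented for this sketch, and it is not the paper's private learner:

```python
# Sketch of the online classification protocol: at each round t the learner
# predicts with its current hypothesis h_t, then observes the true label y_t.
# Regret is measured against the best single hypothesis in hindsight.

def online_regret(stream, hypotheses, learner):
    """stream: list of (x, y); hypotheses: list of callables x -> label;
    learner: callable (history) -> index of the hypothesis to play."""
    history = []
    learner_mistakes = 0
    for x, y in stream:
        h = hypotheses[learner(history)]
        if h(x) != y:
            learner_mistakes += 1
        history.append((x, y))
    # Mistakes of the best fixed hypothesis in hindsight.
    best = min(sum(h(x) != y for x, y in stream) for h in hypotheses)
    return learner_mistakes - best

# Toy usage: threshold hypotheses on the real line.
hypotheses = [lambda x, t=t: int(x >= t) for t in (0.25, 0.5, 0.75)]

def follow_the_leader(history):
    """Play the hypothesis with the fewest mistakes on past examples."""
    if not history:
        return 0
    errs = [sum(h(x) != y for x, y in history) for h in hypotheses]
    return errs.index(min(errs))

# Labels below are consistent with the threshold t = 0.5.
stream = [(0.1, 0), (0.6, 1), (0.3, 0), (0.9, 1), (0.55, 1)]
regret = online_regret(stream, hypotheses, follow_the_leader)
```

Follow-the-leader is only a stand-in here: the point of the paper is to achieve low regret while also satisfying differential privacy, which this sketch does not attempt.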
arXiv Detail & Related papers (2021-06-25T09:08:33Z)
- Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z)
- Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)
- $Q$-learning with Logarithmic Regret [60.24952657636464]
We prove that an optimistic $Q$-learning algorithm enjoys a $\mathcal{O}\left(\frac{SA\cdot \mathrm{poly}(H)}{\Delta_{\min}}\log(SAT)\right)$ cumulative regret bound, where $S$ is the number of states, $A$ is the number of actions, $H$ is the planning horizon, $T$ is the total number of steps, and $\Delta_{\min}$ is the minimum sub-optimality gap.
arXiv Detail & Related papers (2020-06-16T13:01:33Z)
- Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2})+\epsilon$.
arXiv Detail & Related papers (2020-05-29T07:20:35Z)
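As a toy companion to this last summary, the following sketch runs full-batch gradient descent on the square loss of a single ReLU neuron. It is an assumption-laden illustration, not the paper's agnostic analysis: the data are realizable (labels come from a ground-truth neuron with positive weights on nonnegative inputs), which keeps the ReLU active and makes the problem effectively least squares.

```python
# Illustrative sketch (not the paper's method): full-batch gradient descent
# on the empirical square loss for a single ReLU neuron, on realizable data.
import random

def relu(z):
    return max(0.0, z)

def neuron(w, x):
    return relu(sum(wi * xi for wi, xi in zip(w, x)))

def gd_single_neuron(data, dim, lr=0.5, steps=1000):
    """Minimize (1/n) * sum (relu(w.x) - y)^2 by full-batch GD."""
    w = [0.1] * dim  # small positive init so the neuron starts active
    n = len(data)
    for _ in range(steps):
        grad = [0.0] * dim
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            if z > 0:  # (sub)gradient of relu is 0 where z <= 0
                err = z - y
                for i in range(dim):
                    grad[i] += err * x[i] / n
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

# Toy usage: realizable labels from a ground-truth neuron w* = [1.0, 0.5],
# inputs drawn from [0, 1]^2 so the neuron never deactivates.
random.seed(0)
w_star = [1.0, 0.5]
data = []
for _ in range(200):
    x = [random.random(), random.random()]
    data.append((x, neuron(w_star, x)))
w = gd_single_neuron(data, dim=2)
loss = sum((neuron(w, x) - y) ** 2 for x, y in data) / len(data)
```

In the agnostic regime studied in the paper, labels need not come from any neuron, and plain GD only reaches risk $O(\mathsf{OPT}^{1/2})+\epsilon$ rather than the near-zero loss this realizable toy achieves.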
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.