Closure Properties for Private Classification and Online Prediction
- URL: http://arxiv.org/abs/2003.04509v3
- Date: Wed, 13 May 2020 03:50:06 GMT
- Title: Closure Properties for Private Classification and Online Prediction
- Authors: Noga Alon, Amos Beimel, Shay Moran, and Uri Stemmer
- Abstract summary: We derive closure properties for online learning and private PAC learning.
We show that any private algorithm that learns a class of functions $\mathcal{H}$ in the realizable case can be transformed to a private algorithm that learns the class $\mathcal{H}$ in the agnostic case.
- Score: 31.115241685486392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Let $\mathcal{H}$ be a class of boolean functions and consider a \emph{composed class} $\mathcal{H}'$ that is derived from $\mathcal{H}$ using some arbitrary aggregation rule (for example, $\mathcal{H}'$ may be the class of all 3-wise majority-votes of functions in $\mathcal{H}$). We upper bound the Littlestone dimension of $\mathcal{H}'$ in terms of that of $\mathcal{H}$. As a corollary, we derive closure properties for online learning and private PAC learning.
The derived bounds on the Littlestone dimension exhibit an undesirable exponential dependence. For private learning, we prove close to optimal bounds that circumvent this suboptimal dependency. The improved bounds on the sample complexity of private learning are derived algorithmically, by transforming a private learner for the original class $\mathcal{H}$ into a private learner for the composed class $\mathcal{H}'$. Using the same ideas, we show that any (\emph{proper or improper}) private algorithm that learns a class of functions $\mathcal{H}$ in the realizable case (i.e., when the examples are labeled by some function in the class) can be transformed to a private algorithm that learns the class $\mathcal{H}$ in the agnostic case.
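To make the aggregation idea concrete, here is a minimal Python sketch (illustrative code, not from the paper; the helper names are hypothetical) of the 3-wise majority-vote composition mentioned in the abstract:

```python
from itertools import combinations

def majority3(h1, h2, h3):
    """Pointwise 3-wise majority vote of three boolean hypotheses."""
    return lambda x: int(h1(x) + h2(x) + h3(x) >= 2)

def composed_class(base_class):
    """All 3-wise majority votes of hypotheses from `base_class`.

    One example of the arbitrary aggregation rules the paper
    considers; for richer base classes, the composed class H'
    can be strictly larger than H.
    """
    return [majority3(h1, h2, h3)
            for h1, h2, h3 in combinations(base_class, 3)]

# Toy base class: thresholds over the domain {0, ..., 9}.
H = [lambda y, x=x: int(y <= x) for x in range(10)]
H_prime = composed_class(H)
```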
Related papers
- Optimized Tradeoffs for Private Prediction with Majority Ensembling [59.99331405291337]
We introduce the Data-dependent Randomized Response Majority (DaRRM) algorithm.
DaRRM is parameterized by a data-dependent noise function $\gamma$, and enables efficient utility optimization over the class of all private algorithms.
We show that DaRRM provably enjoys a privacy gain of a factor of 2 over common baselines, with fixed utility.
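A heavily simplified sketch of the randomized-response-over-majority template that DaRRM builds on (in DaRRM itself, $\gamma$ is a data-dependent function chosen to optimize utility; here it is a constant for illustration):

```python
import random

def rr_majority(private_votes, gamma=0.5):
    """Randomized response over a majority vote.

    With probability gamma, release the true majority of the votes;
    otherwise release a uniformly random bit. DaRRM generalizes this
    by letting gamma depend on the data.
    """
    if random.random() < gamma:
        return int(sum(private_votes) > len(private_votes) / 2)
    return random.randint(0, 1)
```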
arXiv Detail & Related papers (2024-11-27T00:48:48Z) - Simple online learning with consistent oracle [55.43220407902113]
We consider online learning in the model where a learning algorithm can access the class only via a \emph{consistent oracle}: an oracle that, at any moment, can give a function from the class that agrees with all examples seen so far.
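A minimal interface sketch of this access model (hypothetical code, not from the paper):

```python
def consistent_oracle(hypothesis_class, history):
    """Return any hypothesis agreeing with all labelled examples
    seen so far, or None if none exists."""
    for h in hypothesis_class:
        if all(h(x) == y for x, y in history):
            return h
    return None

def online_learn(hypothesis_class, stream):
    """Predict each label using only consistent-oracle access."""
    history, mistakes = [], 0
    for x, y in stream:
        h = consistent_oracle(hypothesis_class, history)
        if h is None or h(x) != y:
            mistakes += 1
        history.append((x, y))
    return mistakes
```

The naive "follow any consistent hypothesis" strategy sketched here can make many mistakes; the question the paper addresses is what mistake bounds are achievable when the class can only be accessed through such an oracle.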
arXiv Detail & Related papers (2023-08-15T21:50:40Z) - Differentially Private Nonparametric Regression Under a Growth Condition [9.416757363901295]
Given a real-valued hypothesis class $\mathcal{H}$, we investigate under what conditions there is a differentially private algorithm which learns an optimal hypothesis.
We show that under the relaxed condition $\liminf_{\eta \downarrow 0} \eta \cdot \mathrm{sfat}_\eta(\mathcal{H}) = 0$, $\mathcal{H}$ is privately learnable.
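Typeset in full, the growth condition reads:

```latex
\liminf_{\eta \downarrow 0} \; \eta \cdot \mathrm{sfat}_{\eta}(\mathcal{H}) = 0
```

where $\mathrm{sfat}_\eta(\mathcal{H})$ denotes the sequential fat-shattering dimension of $\mathcal{H}$ at scale $\eta$ (our reading of the abbreviated notation in the summary).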
arXiv Detail & Related papers (2021-11-24T20:36:01Z) - FriendlyCore: Practical Differentially Private Aggregation [67.04951703461657]
We propose a simple and practical tool $\mathsf{FriendlyCore}$ that takes a set of points $\mathcal{D}$ from an unrestricted (pseudo) metric space as input.
When $\mathcal{D}$ has effective diameter $r$, $\mathsf{FriendlyCore}$ returns a "stable" subset $\mathcal{D}_G \subseteq \mathcal{D}$ that includes all points.
$\mathsf{FriendlyCore}$ can be used to preprocess the input before privately aggregating it, potentially simplifying the aggregation or boosting its accuracy.
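A toy sketch of the underlying intuition (a hypothetical simplification, not the paper's actual randomized algorithm): keep points that are close to most of the dataset, so the surviving subset has small effective diameter and is stable under changing one input point.

```python
def friendly_core_sketch(points, dist, r):
    """Toy 'core' filter: keep points within distance r of a
    majority of the dataset.

    The real FriendlyCore uses a randomized, differentially private
    filtering step; this deterministic filter only illustrates the
    kind of subset it aims to return.
    """
    n = len(points)
    return [p for p in points
            if sum(dist(p, q) <= r for q in points) > n / 2]
```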
arXiv Detail & Related papers (2021-10-19T17:43:50Z) - Littlestone Classes are Privately Online Learnable [28.04975353867202]
We consider the problem of online classification under a privacy constraint.
In this setting, a learner sequentially observes a stream of labelled examples $(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration a hypothesis $h_t$, which is used to predict the label of the new example $x_t$.
The learner's performance is measured by her regret against a known hypothesis class $\mathcal{H}$.
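For concreteness, the regret referred to here is standardly defined (in the summary's notation) as the learner's excess number of mistakes over the best fixed hypothesis in hindsight:

```latex
\mathrm{Regret}_T
  = \sum_{t=1}^{T} \mathbb{1}\left[ h_t(x_t) \neq y_t \right]
  - \min_{h \in \mathcal{H}} \sum_{t=1}^{T} \mathbb{1}\left[ h(x_t) \neq y_t \right]
```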
arXiv Detail & Related papers (2021-06-25T09:08:33Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where the cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
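As background on the zeroth-order access model, here is the classical one-point gradient estimator often used in bandit convex optimization (a generic illustration in the style of Flaxman et al., 2005; this is not the paper's algorithm):

```python
import math
import random

def one_point_gradient(f, x, delta):
    """Estimate grad f(x) from a single function evaluation.

    Samples u uniformly from the unit sphere and returns
    (d / delta) * f(x + delta * u) * u, an unbiased estimate of the
    gradient of a smoothed version of f.
    """
    d = len(x)
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(ui * ui for ui in u))
    u = [ui / norm for ui in u]
    y = [xi + delta * ui for xi, ui in zip(x, u)]
    return [(d / delta) * f(y) * ui for ui in u]
```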
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
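As background, an $\epsilon$-cover of a set $S$ is a set $C$ such that every point of $S$ is within distance $\epsilon$ of some point of $C$. A generic greedy construction looks as follows (an illustration only, unrelated to the paper's finer polynomial-specific bound):

```python
def greedy_epsilon_cover(points, dist, eps):
    """Greedily pick centers so every input point is within eps of one.

    By construction, every point either joins the cover or is within
    eps of an earlier center.
    """
    cover = []
    for p in points:
        if all(dist(p, c) > eps for c in cover):
            cover.append(p)
    return cover
```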
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Private Learning of Halfspaces: Simplifying the Construction and
Reducing the Sample Complexity [63.29100726064574]
We present a differentially private learner for halfspaces over a finite grid $G$ in $mathbbRd$ with sample complexity $approx d2.5cdot 2log*|G|$.
The building block for our learner is a new differentially private algorithm for approximately solving the linear feasibility problem.
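For orientation, the (non-private) linear feasibility problem asks for a point satisfying a system of linear inequalities; a classical perceptron-style iteration for the homogeneous case is sketched below (generic background only, not the paper's private algorithm):

```python
def perceptron_feasibility(constraints, max_iters=1000):
    """Find w with <w, a> > 0 for every constraint vector a, if one exists.

    Classical perceptron updates; the paper's building block is a
    differentially private algorithm for an approximate version of
    this kind of problem.
    """
    dim = len(constraints[0])
    w = [0.0] * dim
    for _ in range(max_iters):
        violated = next((a for a in constraints
                         if sum(wi * ai for wi, ai in zip(w, a)) <= 0), None)
        if violated is None:
            return w
        w = [wi + ai for wi, ai in zip(w, violated)]
    return None
```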
arXiv Detail & Related papers (2020-04-16T16:12:10Z) - Differentially Private Release and Learning of Threshold Functions [27.612916837481485]
We prove new upper and lower bounds on the sample complexity of $(\epsilon, \delta)$ differentially private algorithms.
A threshold function $c_x$ over a totally ordered domain $X$ evaluates to $c_x(y) = 1$ if $y \le x$, and evaluates to $0$ otherwise.
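The threshold class is simple to write down, which makes it a clean benchmark for private learnability. A minimal sketch over a finite integer domain (hypothetical illustrative code):

```python
def threshold(x):
    """The threshold function c_x: y -> 1 if y <= x, else 0."""
    return lambda y: int(y <= x)

def threshold_class(n):
    """The full class over the totally ordered domain X = {0, ..., n-1}.

    The paper's bounds show the sample complexity of privately
    learning this class must grow with log*|X|, so it is finite for
    every finite X but there is no private proper learner over
    infinite domains.
    """
    return [threshold(x) for x in range(n)]
```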
arXiv Detail & Related papers (2015-04-28T16:15:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.