Empirical Risk Minimization in the Non-interactive Local Model of
Differential Privacy
- URL: http://arxiv.org/abs/2011.05934v1
- Date: Wed, 11 Nov 2020 17:48:00 GMT
- Title: Empirical Risk Minimization in the Non-interactive Local Model of
Differential Privacy
- Authors: Di Wang and Marco Gaboardi and Adam Smith and Jinhui Xu
- Abstract summary: We study the Empirical Risk Minimization (ERM) problem in the non-interactive Local Differential Privacy (LDP) model.
Previous research indicates that the sample complexity, to achieve error $\alpha$, needs to depend exponentially on the dimensionality $p$ for general loss functions.
- Score: 26.69391745812235
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we study the Empirical Risk Minimization (ERM) problem in the
non-interactive Local Differential Privacy (LDP) model. Previous research on
this problem \citep{smith2017interaction} indicates that the sample complexity,
to achieve error $\alpha$, needs to depend exponentially on the
dimensionality $p$ for general loss functions. In this paper, we make two
attempts to resolve this issue by investigating conditions on the loss
functions that allow us to remove such a limit. In our first attempt, we show
that if the loss function is $(\infty, T)$-smooth, by using the Bernstein
polynomial approximation we can avoid the exponential dependency in the term of
$\alpha$. We then propose player-efficient algorithms with $1$-bit
communication complexity and $O(1)$ computation cost for each player. The error
bound of these algorithms is asymptotically the same as the original one. With
some additional assumptions, we also give an algorithm which is more efficient
for the server. In our second attempt, we show that for any $1$-Lipschitz
generalized linear convex loss function, there is an $(\epsilon, \delta)$-LDP
algorithm whose sample complexity for achieving error $\alpha$ is only linear
in the dimensionality $p$. Our results use a polynomial-of-inner-product
approximation technique. Finally, motivated by the idea of using polynomial
approximation and based on different types of polynomial approximations, we
propose (efficient) non-interactive locally differentially private algorithms
for learning the set of $k$-way marginal queries and the set of smooth queries. A minimal illustrative sketch of the $1$-bit non-interactive LDP primitive appears below.
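
As a concrete illustration of the non-interactive setting with $1$-bit messages and $O(1)$ per-player work, the sketch below shows a standard one-bit randomized-response mechanism for privately estimating the mean of a bounded quantity (for instance, an evaluation of a Bernstein basis term $f(\nu/k)\binom{k}{\nu}x^{\nu}(1-x)^{k-\nu}$). This is a generic building block only, not the paper's exact estimator; function and parameter names are illustrative.

```python
import numpy as np

def one_bit_ldp_report(value, eps, lo=-1.0, hi=1.0, rng=None):
    """Release one eps-LDP bit about a scalar `value` assumed to lie in [lo, hi].

    Step 1: encode the value as a Bernoulli bit with mean (value - lo) / (hi - lo).
    Step 2: apply randomized response, reporting the true bit with probability
            e^eps / (e^eps + 1), which makes the released bit eps-LDP.
    """
    rng = rng or np.random.default_rng()
    p = (np.clip(value, lo, hi) - lo) / (hi - lo)
    bit = int(rng.random() < p)
    keep = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < keep else 1 - bit

def estimate_mean(bits, eps, lo=-1.0, hi=1.0):
    """Server side: debias the average of the received bits into a mean estimate."""
    keep = np.exp(eps) / (np.exp(eps) + 1.0)
    p_hat = (np.mean(bits) - (1.0 - keep)) / (2.0 * keep - 1.0)
    return lo + (hi - lo) * p_hat

# Toy run: each of n players sends exactly one private bit, O(1) work per player.
rng = np.random.default_rng(0)
values = rng.uniform(-1.0, 1.0, size=50_000)      # players' bounded local quantities
bits = [one_bit_ldp_report(v, eps=1.0, rng=rng) for v in values]
print(estimate_mean(bits, eps=1.0), values.mean())  # the two should be close
```

In the constructions described in the abstract, privately estimated averages of polynomial features (Bernstein or inner-product approximations) are what the server combines to reconstruct an approximate loss and minimize it; the code above only sketches the per-player $1$-bit primitive under these assumptions.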
Related papers
- Private Geometric Median [10.359525525715421]
We study differentially private (DP) algorithms for computing the geometric median (GM) of a dataset.
Our main contribution is a pair of DP algorithms for the task of private GM with an excess error guarantee that scales with the effective diameter of the datapoints.
arXiv Detail & Related papers (2024-06-11T16:13:09Z) - Robust Approximation Algorithms for Non-monotone $k$-Submodular
Maximization under a Knapsack Constraint [0.0]
Two deterministic approximation algorithms are presented for the problem of non-monotone $k$-submodular maximization under a knapsack constraint.
Our algorithms provide constant approximation ratios within only $O(nk)$ query complexity for the non-monotone objective.
arXiv Detail & Related papers (2023-09-21T12:42:52Z) - Certified Multi-Fidelity Zeroth-Order Optimization [4.450536872346658]
We consider the problem of multi-fidelity zeroth-order optimization, where one can evaluate a function $f$ at various approximation levels.
We propose a certified variant of the MFDOO algorithm and derive a bound on its cost complexity for any Lipschitz function $f$.
We also prove an $f$-dependent lower bound showing that this algorithm has a near-optimal cost complexity.
arXiv Detail & Related papers (2023-08-02T07:20:37Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Linear Query Approximation Algorithms for Non-monotone Submodular
Maximization under Knapsack Constraint [16.02833173359407]
This work introduces two constant factor approximation algorithms with linear query complexity for non-monotone submodular maximization over a ground set of size $n$ subject to a knapsack constraint.
$\mathsf{DLA}$ is a deterministic algorithm that provides an approximation factor of $6+\epsilon$, while $\mathsf{RLA}$ is a randomized algorithm with an approximation factor of $4+\epsilon$.
arXiv Detail & Related papers (2023-05-17T15:27:33Z) - Provably Efficient Reinforcement Learning via Surprise Bound [66.15308700413814]
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: that any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$-additive error for an input dataset.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed-confidence setting under a generative model.
The lower bound, as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)