Differentially Private Quasi-Concave Optimization: Bypassing the Lower Bound and Application to Geometric Problems
- URL: http://arxiv.org/abs/2504.19001v1
- Date: Sat, 26 Apr 2025 19:04:00 GMT
- Title: Differentially Private Quasi-Concave Optimization: Bypassing the Lower Bound and Application to Geometric Problems
- Authors: Kobbi Nissim, Eliad Tsfadia, Chao Yan
- Abstract summary: We study the sample complexity of differentially private optimization of quasi-concave functions. We show that the lower bound can be bypassed for a series of "natural" problems.
- Score: 10.228439000828722
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We study the sample complexity of differentially private optimization of quasi-concave functions. For a fixed input domain $\mathcal{X}$, Cohen et al. (STOC 2023) proved that any generic private optimizer for low-sensitivity quasi-concave functions must have sample complexity $\Omega(2^{\log^*|\mathcal{X}|})$. We show that the lower bound can be bypassed for a series of ``natural'' problems. We define a new class of \emph{approximated} quasi-concave functions, and present a generic differentially private optimizer for approximated quasi-concave functions with sample complexity $\tilde{O}(\log^*|\mathcal{X}|)$. As applications, we use our optimizer to privately select a center point of points in $d$ dimensions and \emph{probably approximately correct} (PAC) learn $d$-dimensional halfspaces. In previous works, Bun et al. (FOCS 2015) proved a lower bound of $\Omega(\log^*|\mathcal{X}|)$ for both problems. Beimel et al. (COLT 2019) and Kaplan et al. (NeurIPS 2020) gave upper bounds of $\tilde{O}(d^{2.5}\cdot 2^{\log^*|\mathcal{X}|})$ for the two problems, respectively. We improve the dependency of the upper bounds on the cardinality of the domain by presenting a new upper bound of $\tilde{O}(d^{5.5}\cdot\log^*|\mathcal{X}|)$ for both problems. To the best of our understanding, this is the first work to reduce the sample complexity dependency on $|\mathcal{X}|$ for these two problems from exponential in $\log^* |\mathcal{X}|$ to $\log^* |\mathcal{X}|$.
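To get a feel for the quantities in the abstract, the sketch below (illustrative code, not from the paper; the function name `log_star` is my own) computes the iterated logarithm $\log^*$ and contrasts the $2^{\log^*|\mathcal{X}|}$ factor of the prior bounds with the $\log^*|\mathcal{X}|$ factor obtained here.

```python
import math

def log_star(x, base=2.0):
    """Iterated logarithm log*(x): the number of times the logarithm
    must be applied before the value drops to at most 1."""
    count = 0
    while x > 1:
        x = math.log(x, base)
        count += 1
    return count

# For a domain of 32-bit values, |X| = 2**32:
domain_size = 2.0 ** 32
iterations = log_star(domain_size)   # log*(2^32) = 5
old_factor = 2 ** iterations         # 2^{log* |X|} = 32 (prior bounds)
new_factor = iterations              # log* |X|     = 5  (this paper)
```

Even for astronomically large domains, $\log^*$ stays in the single digits, which is why shaving the exponential off the $2^{\log^*|\mathcal{X}|}$ factor matters mostly as a conceptual separation rather than a numerical one.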
Related papers
- The Space Complexity of Approximating Logistic Loss [11.338399194998933]
We prove a general $\tilde{\Omega}(d\cdot \mu_{\mathbf{y}}(\mathbf{X}))$ space lower bound when $\epsilon$ is constant. We also refute a prior conjecture that $\mu_{\mathbf{y}}(\mathbf{X})$ is hard to compute.
arXiv Detail & Related papers (2024-12-03T18:11:37Z) - The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Complexity of Minimizing Projected-Gradient-Dominated Functions with Stochastic First-order Oracles [38.45952947660789]
This work investigates the performance limits of projected first-order methods for minimizing functions under the $(\alpha,\tau,\mathcal{X})$-projected-dominance property.
arXiv Detail & Related papers (2024-08-03T18:34:23Z) - Partially Unitary Learning [0.0]
An optimal mapping between Hilbert spaces $IN$ of $\left|\psi\right\rangle$ and $OUT$ of $\left|\phi\right\rangle$ is presented.
An iterative algorithm for finding the global maximum of this optimization problem is developed.
arXiv Detail & Related papers (2024-05-16T17:13:55Z) - Optimal and Efficient Algorithms for Decentralized Online Convex Optimization [51.00357162913229]
Decentralized online convex optimization (D-OCO) is designed to minimize a sequence of global loss functions using only local computations and communications. We develop a novel D-OCO algorithm that can reduce the regret bounds for convex and strongly convex functions to $\tilde{O}(n\rho^{-1/4}\sqrt{T})$ and $\tilde{O}(n\rho^{-1/2}\log T)$. Our analysis reveals that the projection-free variant can achieve $O(nT^{3/4})$ and $O(n\ldots)$
arXiv Detail & Related papers (2024-02-14T13:44:16Z) - Dueling Optimization with a Monotone Adversary [35.850072415395395]
We study the problem of dueling optimization with a monotone adversary, which is a generalization of (noiseless) dueling convex optimization.
The goal is to design an online algorithm to find a minimizer $\mathbf{x}^*$ for a function $f\colon X \to \mathbb{R}^d$.
arXiv Detail & Related papers (2023-11-18T23:55:59Z) - Õptimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization [23.547605262139957]
The problem of learning threshold functions is a fundamental one in machine learning.
We provide a nearly tight upper bound of $\tilde{O}((\log^* |X|)^{1.5})$, improving on the previous bound by Kaplan et al.
We also provide matching upper and lower bounds of $\tilde{\Theta}(2^{\log^*|X|})$ for the additive error of private quasi-concave optimization (a related and more general problem).
arXiv Detail & Related papers (2022-11-11T18:16:46Z) - Private Isotonic Regression [54.32252900997422]
We consider the problem of isotonic regression over a partially ordered set (poset) $\mathcal{X}$ and for any Lipschitz loss function.
We obtain a pure-DP algorithm that has an expected excess empirical risk of roughly $\mathrm{width}(\mathcal{X}) \cdot \log|\mathcal{X}| / n$, where $\mathrm{width}(\mathcal{X})$ is the width of the poset.
We show that the bounds above are essentially the best that can be achieved.
arXiv Detail & Related papers (2022-10-27T05:08:07Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\cdot\mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization [88.91190483500932]
In this paper, we revisit the smooth and strongly-convex-strongly-concave minimax optimization problem.
Existing state-of-the-art methods do not match the lower bound $\Omega\left(\sqrt{\kappa_x\kappa_y}\log^3(\kappa_x\kappa_y)\log\frac{1}{\epsilon}\right)$.
We fix this fundamental issue by providing the first algorithm with complexity $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y}\log\ldots\right)$
arXiv Detail & Related papers (2022-05-11T17:33:07Z) - DIPPA: An improved Method for Bilinear Saddle Point Problems [18.65143269806133]
This paper deals with bilinear saddle point problems $\min_{\mathbf{x}} \max_{\mathbf{y}}\, g(\mathbf{x}) + \mathbf{x}^\top \mathbf{A} \mathbf{y} - h(\mathbf{y})$, with complexity bounds involving the condition numbers $\kappa_x$, $\kappa_y$, and terms of the form $\kappa_x\kappa_y(\kappa_x + \kappa_y)$.
arXiv Detail & Related papers (2021-03-15T10:55:30Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both \emph{compatible} with oracles, and achieves complexity bounds of $\tilde{O}((n^2+n\sqrt{L/\mu})\log(1/\epsilon))$ and $O(n\sqrt{L/\epsilon})$, for $\mu>0$ and $\mu=0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.