On User-Level Private Convex Optimization
- URL: http://arxiv.org/abs/2305.04912v1
- Date: Mon, 8 May 2023 17:47:28 GMT
- Title: On User-Level Private Convex Optimization
- Authors: Badih Ghazi and Pritish Kamath and Ravi Kumar and Raghu Meka and Pasin
Manurangsi and Chiyuan Zhang
- Abstract summary: We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.
Our mechanism does not require any smoothness assumptions on the loss.
Our bounds are the first where the minimum number of users needed for user-level privacy has no dependence on the dimension.
- Score: 59.75368670035683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new mechanism for stochastic convex optimization (SCO) with
user-level differential privacy guarantees. The convergence rates of this
mechanism are similar to those in the prior work of Levy et al. (2021);
Narayanan et al. (2022), but with two important improvements. Our mechanism
does not require any smoothness assumptions on the loss. Furthermore, our
bounds are also the first where the minimum number of users needed for
user-level privacy has no dependence on the dimension and only a logarithmic
dependence on the desired excess error. The main idea underlying the new
mechanism is to show that the optimizers of strongly convex losses have low
local deletion sensitivity, along with an output perturbation method for
functions with low local deletion sensitivity, which could be of independent
interest.
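The abstract only names the key ingredient (output perturbation for functions with low local deletion sensitivity); the paper's actual noise calibration is more involved. As orientation, a minimal generic output-perturbation sketch, with Laplace noise scaled to a known sensitivity bound, might look as follows (all function names and parameters here are illustrative, not the paper's):

```python
import numpy as np

def output_perturbation(solve, data, sensitivity, epsilon, rng):
    # Solve the (strongly convex) optimization problem non-privately,
    # then add Laplace noise scaled to the solution's sensitivity.
    theta = solve(data)
    noise = rng.laplace(scale=sensitivity / epsilon, size=theta.shape)
    return theta + noise

# Toy example: privately release the mean of scalars bounded in [0, 1].
rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=100)
solve_mean = lambda d: np.array([d.mean()])
# Changing one record moves the mean by at most 1/n.
private_mean = output_perturbation(solve_mean, data, sensitivity=1 / len(data),
                                   epsilon=1.0, rng=rng)
```

Strong convexity matters here because it bounds how far the optimizer can move when a user's data is deleted, which is what keeps the sensitivity (and hence the added noise) small.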
Related papers
- Robust support vector machines via conic optimization [0.8594140167290099]
We consider the problem of learning support vector machines robust to uncertainty.
We show that the proposed estimator is competitive with standard SVMs in outlier-free settings and performs better in the presence of outliers.
arXiv Detail & Related papers (2024-02-02T05:42:50Z) - Bounded and Unbiased Composite Differential Privacy [25.427802467876248]
The objective of differential privacy (DP) is to protect privacy by producing an output distribution that is indistinguishable between two neighboring databases.
Existing solutions attempt to address this issue by employing post-processing or truncation techniques.
We propose a novel differentially private mechanism which uses a composite probability density function to generate bounded and unbiased outputs.
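The composite density construction is specific to that paper, but the problem it addresses is easy to see numerically: truncating a standard noise distribution into a valid output range biases the released value near the boundary. A small illustration with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.9   # true query answer, known to lie in [0, 1]
scale = 0.5        # Laplace scale = sensitivity / epsilon

samples = true_value + rng.laplace(scale=scale, size=100_000)
clipped = np.clip(samples, 0.0, 1.0)   # post-processing into the valid range
bias = clipped.mean() - true_value     # clipping near a boundary skews the mean
```

Because the true value sits close to the upper boundary, far more noise mass is clipped from above than from below, so the truncated mechanism systematically underestimates.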
arXiv Detail & Related papers (2023-11-04T04:43:47Z) - Using Property Elicitation to Understand the Impacts of Fairness
Regularizers [0.32634122554914]
It is not well understood which regularizers change the minimizer of the loss, and, when the minimizer does change, how it changes.
We empirically demonstrate how algorithmic decision-making changes as a function of both data distribution changes and hardness of the constraints.
arXiv Detail & Related papers (2023-09-20T14:20:56Z) - Differential Privacy via Distributionally Robust Optimization [8.409434654561789]
We develop a class of mechanisms that enjoy non-asymptotic and unconditional optimality guarantees.
Our upper (primal) bounds correspond to implementable perturbations whose suboptimality can be bounded by our lower (dual) bounds.
Our numerical experiments demonstrate that our perturbations can outperform the previously best results from the literature on artificial as well as standard benchmark problems.
arXiv Detail & Related papers (2023-04-25T09:31:47Z) - Trust your neighbours: Penalty-based constraints for model calibration [19.437451462590108]
We present a constrained optimization perspective of SVLS and demonstrate that it enforces an implicit constraint on soft class proportions of surrounding pixels.
We propose a principled and simple solution based on equality constraints on the logit values, which makes it possible to explicitly control both the enforced constraint and the weight of the penalty.
arXiv Detail & Related papers (2023-03-11T01:10:26Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
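The standard building block behind differentially private SGD is per-example gradient clipping followed by Gaussian noise. A sketch of one such step (a generic textbook-style version, not necessarily the variant analyzed in this paper):

```python
import numpy as np

def dp_sgd_step(theta, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    # Clip each per-example gradient to norm clip_norm, average,
    # then add Gaussian noise calibrated to the clipping bound.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm / len(per_example_grads),
                       size=theta.shape)
    return theta - lr * (avg + noise)

rng = np.random.default_rng(0)
theta = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(8)]
theta_next = dp_sgd_step(theta, grads, clip_norm=1.0,
                         noise_multiplier=1.1, lr=0.1, rng=rng)
```

The "low-noise" regime in the title refers to settings where the noise needed for privacy is small relative to the signal, so accuracy degrades little.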
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy
Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance, corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
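ReducedAboveThreshold is the paper's generalization; for orientation, here is a sketch of the classical AboveThreshold algorithm it builds on, which privately reports the first query in a stream whose value exceeds a threshold (constants follow the common 2/ε, 4/ε calibration for sensitivity-1 queries; this is illustrative, not the paper's algorithm):

```python
import numpy as np

def above_threshold(queries, data, threshold, epsilon, rng):
    # Sparse vector technique: compare each noisy query answer against
    # a noisy threshold and halt at the first one that exceeds it.
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, q in enumerate(queries):
        if q(data) + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i
    return None

rng = np.random.default_rng(0)
queries = [lambda d: 0.0, lambda d: 10.0, lambda d: 100.0]
hit = above_threshold(queries, data=None, threshold=50.0, epsilon=100.0, rng=rng)
```

Crucially, only the index of the first above-threshold query is released; the privacy cost does not grow with the number of below-threshold queries inspected.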
arXiv Detail & Related papers (2022-06-15T01:43:37Z) - The Probabilistic Normal Epipolar Constraint for Frame-To-Frame Rotation
Optimization under Uncertain Feature Positions [53.478856119297284]
We introduce the probabilistic normal epipolar constraint (PNEC) that overcomes the limitation by accounting for anisotropic and inhomogeneous uncertainties in the feature positions.
In experiments on synthetic data, we demonstrate that the novel PNEC yields more accurate rotation estimates than the original NEC.
We integrate the proposed method into a state-of-the-art monocular rotation-only odometry system and achieve consistently improved results for the real-world KITTI dataset.
arXiv Detail & Related papers (2022-04-05T14:47:11Z) - An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z) - Bilevel Optimization for Differentially Private Optimization in Energy
Systems [53.806512366696275]
This paper studies how to apply differential privacy to constrained optimization problems whose inputs are sensitive.
The paper shows that, under a natural assumption, a bilevel model can be solved efficiently for large-scale nonlinear optimization problems.
arXiv Detail & Related papers (2020-01-26T20:15:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.