Introducing the Huber mechanism for differentially private low-rank
matrix completion
- URL: http://arxiv.org/abs/2206.07910v1
- Date: Thu, 16 Jun 2022 04:33:06 GMT
- Title: Introducing the Huber mechanism for differentially private low-rank
matrix completion
- Authors: R Adithya Gowtham, Gokularam M, Thulasi Tholeti, Sheetal Kalyani
- Abstract summary: We propose a novel noise addition mechanism for preserving differential privacy.
The proposed Huber mechanism is evaluated against existing differential privacy mechanisms.
We prove that the proposed mechanism achieves $\epsilon$-differential privacy similar to the Laplace mechanism.
- Score: 9.944551494217075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Performing low-rank matrix completion with sensitive user data calls for
privacy-preserving approaches. In this work, we propose a novel noise addition
mechanism for preserving differential privacy where the noise distribution is
inspired by Huber loss, a well-known loss function in robust statistics. The
proposed Huber mechanism is evaluated against existing differential privacy
mechanisms while solving the matrix completion problem using the Alternating
Least Squares approach. We also propose using the Iteratively Re-Weighted Least
Squares algorithm to complete low-rank matrices and study the performance of
different noise mechanisms in both synthetic and real datasets. We prove that
the proposed mechanism achieves $\epsilon$-differential privacy similar to the
Laplace mechanism. Furthermore, empirical results indicate that the Huber
mechanism outperforms the Laplace and Gaussian mechanisms in some cases and is
comparable otherwise.
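As a rough illustration of the core idea: the paper specifies the mechanism's exact density and its $\epsilon$ calibration, but a minimal sketch can assume a noise density proportional to $\exp(-\rho_\delta(x))$, where $\rho_\delta$ is the Huber loss, and rejection-sample it with a Laplace envelope that matches the target exactly in the tails. Function names and the `delta`/`scale` parametrization below are illustrative, not the paper's notation.

```python
import numpy as np

def huber_loss(x, delta):
    """Huber loss: quadratic for |x| <= delta, linear beyond."""
    a = np.abs(x)
    return np.where(a <= delta, 0.5 * x**2, delta * a - 0.5 * delta**2)

def sample_huber_noise(n, delta, scale=1.0, rng=None):
    """Rejection-sample noise with density proportional to
    exp(-huber_loss(x, delta)), then multiply by `scale`.

    The Laplace proposal has the same tail exponent, so
    exp(-huber(x)) <= exp(delta**2 / 2) * exp(-delta * |x|)
    everywhere, with equality in the tails: rejection sampling is exact.
    """
    rng = np.random.default_rng(rng)
    out = np.empty(n)
    filled = 0
    while filled < n:
        x = rng.laplace(scale=1.0 / delta, size=n - filled)
        # Accept with prob exp(-(delta - |x|)^2 / 2) in the quadratic
        # region and with prob 1 in the (Laplace-shaped) tails.
        accept = rng.random(x.size) < np.exp(
            delta * np.abs(x) - 0.5 * delta**2 - huber_loss(x, delta))
        kept = x[accept]
        out[filled:filled + kept.size] = kept
        filled += kept.size
    return scale * out

# Usage sketch: perturb one ALS factor update before releasing it.
# noisy = factor + sample_huber_noise(factor.size, delta=1.0,
#                                     scale=b).reshape(factor.shape)
```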
Related papers
- Less is More: Revisiting the Gaussian Mechanism for Differential Privacy [8.89234867625102]
Differential privacy via output perturbation has been a de facto standard for releasing query or computation results on sensitive data.
We identify that all existing Gaussian mechanisms suffer from the curse of full-rank covariance matrices.
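For contrast, the full-rank baseline this paper critiques is the textbook Gaussian output-perturbation mechanism with isotropic covariance; a sketch with the classical calibration (valid for $\epsilon < 1$; names are illustrative):

```python
import numpy as np

def gaussian_mechanism(query_result, l2_sensitivity, epsilon, delta, rng=None):
    """Classic Gaussian mechanism: isotropic (full-rank covariance) noise.

    sigma = Delta_2 * sqrt(2 * ln(1.25 / delta)) / epsilon is the standard
    calibration for (epsilon, delta)-DP when epsilon < 1.
    """
    rng = np.random.default_rng(rng)
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return query_result + rng.normal(0.0, sigma, size=np.shape(query_result))
```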
arXiv Detail & Related papers (2023-06-04T04:14:38Z)
- On User-Level Private Convex Optimization [59.75368670035683]
We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.
Our mechanism does not require any smoothness assumptions on the loss.
Our bounds are the first where the minimum number of users needed for user-level privacy has no dependence on the dimension.
arXiv Detail & Related papers (2023-05-08T17:47:28Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions [9.20186865054847]
We study the independent, non-identically distributed (i.n.i.d.) Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
We show how the i.n.i.d. noise can improve the performance in private (a) coordinate descent, (b) principal component analysis, and (c) deep learning with group clipping.
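A minimal sketch of the i.n.i.d. idea under the standard per-coordinate Laplace bound, where coordinate $i$ receives $\mathrm{Lap}(b_i)$ noise and pure DP holds with $\epsilon = \sum_i \Delta_i / b_i$ for per-coordinate sensitivities $\Delta_i$; the paper's optimal allocation of the $b_i$ is not reproduced here:

```python
import numpy as np

def inid_laplace(query_result, scales, rng=None):
    """Independent, non-identically distributed Laplace noise: coordinate i
    gets Lap(scales[i]). With per-coordinate sensitivities Delta_i, this is
    epsilon-DP for epsilon = sum_i Delta_i / scales[i] (standard bound)."""
    rng = np.random.default_rng(rng)
    scales = np.asarray(scales, dtype=float)
    return np.asarray(query_result, dtype=float) + rng.laplace(0.0, scales)
```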
arXiv Detail & Related papers (2023-02-07T14:54:20Z)
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as the best such Gaussian noise mechanism.
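The classical approach in question, sketched under the assumption that the $n$ input points lie in an $\ell_2$ ball of radius $r$ (so the mean has sensitivity $2r/n$) and calibrated with the standard $\rho$-zCDP rule $\sigma = \Delta/\sqrt{2\rho}$; names are illustrative:

```python
import numpy as np

def private_mean_zcdp(points, radius, rho, rng=None):
    """Unbiased private mean: true mean plus isotropic Gaussian noise.

    For n points in an L2 ball of radius `radius`, the mean has L2
    sensitivity 2 * radius / n; sigma = sensitivity / sqrt(2 * rho)
    satisfies rho-zCDP (a form of concentrated DP)."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    n, d = points.shape
    sigma = (2.0 * radius / n) / np.sqrt(2.0 * rho)
    return points.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```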
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
- Regression with Label Differential Privacy [64.21020761920322]
We derive a label DP randomization mechanism that is optimal under a given regression loss function.
We prove that the optimal mechanism takes the form of a "randomized response on bins".
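The paper derives the loss-optimal bins and response probabilities; as a generic stand-in, the sketch below quantizes the label into bins and applies ordinary $k$-ary randomized response over them (the bin edges and midpoint decoding are assumptions for illustration):

```python
import numpy as np

def rr_on_bins(label, bin_edges, epsilon, rng=None):
    """Quantize `label` into one of k bins, then apply k-ary randomized
    response: keep the true bin w.p. e^eps / (e^eps + k - 1), otherwise
    report a uniformly random other bin. This is epsilon-DP in the label."""
    rng = np.random.default_rng(rng)
    k = len(bin_edges) - 1
    true_bin = int(np.clip(np.digitize(label, bin_edges) - 1, 0, k - 1))
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        out = true_bin
    else:
        out = int(rng.choice([b for b in range(k) if b != true_bin]))
    # Decode to the bin midpoint for downstream regression.
    return 0.5 * (bin_edges[out] + bin_edges[out + 1])
```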
arXiv Detail & Related papers (2022-12-12T17:41:32Z)
- Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
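A loose sketch of the Brownian idea only (the paper's stopping rules, ReducedAboveThreshold, and the privacy accounting are omitted): simulate a Brownian path, release the value at the final time first, then walk backwards toward time zero to reveal less noisy values on demand.

```python
import numpy as np

def brownian_releases(value, t_max, n_steps, rng=None):
    """Simulate B(t) on a grid and return value + B(t) at decreasing times.

    The first release carries variance ~t_max; each later release is less
    noisy, mirroring noise reduction along one consistent Brownian path.
    """
    rng = np.random.default_rng(rng)
    dt = t_max / n_steps
    path = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n_steps))
    return [value + path[i] for i in range(n_steps - 1, -1, -1)]
```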
arXiv Detail & Related papers (2022-06-15T01:43:37Z)
- Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning [26.783944764936994]
We study the privacy risks that are associated with training a neural network's weights with self-supervised learning algorithms.
We design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights.
We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50%, equivalent to random guessing.
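A minimal sketch of the additive step itself, assuming the weights are a dict of NumPy arrays; the paper's calibration of the logistic scale to a privacy budget is not shown:

```python
import numpy as np

def perturb_weights(weights, scale, rng=None):
    """Post-training protection sketch: add i.i.d. logistic noise to every
    fine-tuned weight tensor (`scale` must be calibrated to the budget)."""
    rng = np.random.default_rng(rng)
    return {name: w + rng.logistic(0.0, scale, size=w.shape)
            for name, w in weights.items()}
```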
arXiv Detail & Related papers (2022-05-25T01:33:52Z)
- Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms [5.079561894598125]
We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider symmetric noise distributions that fall monotonically away from the mean.
arXiv Detail & Related papers (2021-07-27T17:22:57Z)
- Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper, we propose an algorithm inspired by the Median-of-Means (MOM) estimator.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
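The MOM estimator that inspires the algorithm is easy to state: split the samples into blocks, average within each block, and take the median across blocks. A sketch (the paper embeds a MOM-style objective in its recovery procedure, which this does not reproduce):

```python
import numpy as np

def median_of_means(samples, n_blocks, rng=None):
    """Median-of-Means: block-wise averages, then the median across blocks.
    Robust to heavy tails and a minority of outlier blocks."""
    rng = np.random.default_rng(rng)
    samples = np.asarray(samples, dtype=float)
    idx = rng.permutation(samples.shape[0])
    blocks = np.array_split(samples[idx], n_blocks)
    return np.median([b.mean() for b in blocks])
```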
arXiv Detail & Related papers (2020-06-16T19:07:41Z)
- Tight Differential Privacy for Discrete-Valued Mechanisms and for the Subsampled Gaussian Mechanism Using FFT [6.929834518749884]
We propose a numerical accountant for evaluating the tight $(\varepsilon,\delta)$-privacy loss for algorithms with discrete one-dimensional output.
We show that our approach allows decreasing the noise variance by up to 75 percent at equal privacy compared to existing bounds in the literature.
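The FFT trick at the core of such accountants: $k$-fold composition corresponds to $k$-fold self-convolution of the discretized privacy-loss distribution, i.e. a $k$-th power in the Fourier domain. A sketch that omits the paper's grid placement and tail-error analysis:

```python
import numpy as np

def self_convolve_pld(pld, k):
    """k-fold self-convolution of a discretized privacy-loss distribution
    via FFT; padding to a power of two >= k*(n-1)+1 avoids circular
    wrap-around, so the slice is the exact linear convolution."""
    n = len(pld)
    out_len = k * (n - 1) + 1
    size = 1
    while size < out_len:
        size *= 2
    return np.fft.irfft(np.fft.rfft(pld, size) ** k, size)[:out_len]
```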
arXiv Detail & Related papers (2020-06-12T12:46:42Z)