Differential Privacy with Random Projections and Sign Random Projections
- URL: http://arxiv.org/abs/2306.01751v2
- Date: Tue, 13 Jun 2023 10:08:10 GMT
- Title: Differential Privacy with Random Projections and Sign Random Projections
- Authors: Ping Li and Xiaoyun Li
- Abstract summary: iDP-SignRP is remarkably effective under the setting of ``individual differential privacy'' (iDP).
DP-SignOPORP considerably improves existing algorithms under the standard DP setting.
- Score: 37.6593006747285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we develop a series of differential privacy (DP) algorithms
from a family of random projections (RP) for general applications in machine
learning, data mining, and information retrieval. Among the presented
algorithms, iDP-SignRP is remarkably effective under the setting of
``individual differential privacy'' (iDP), based on sign random projections
(SignRP). Also, DP-SignOPORP considerably improves existing algorithms in the
literature under the standard DP setting, using ``one permutation + one random
projection'' (OPORP), where OPORP is a variant of the celebrated count-sketch
method with fixed-length binning and normalization. Without taking signs,
DP-OPORP achieves the best performance among the DP-RP family.
Our key idea for improving DP-RP is to take only the signs, i.e., $\mathrm{sign}(x_j)
= \mathrm{sign}\left(\sum_{i=1}^p u_i w_{ij}\right)$, of the projected data. The
intuition is that the signs often remain unchanged when the original data ($u$)
exhibit small changes (according to the ``neighbor'' definition in DP). In
other words, the aggregation and quantization operations themselves provide
good privacy protections. We develop a technique called ``smooth flipping
probability'' that incorporates this intuitive privacy benefit of SignRPs and
improves the standard DP bit flipping strategy. Based on this technique, we
propose DP-SignOPORP which satisfies strict DP and outperforms other DP
variants based on SignRP (and RP), especially when $\epsilon$ is not very large
(e.g., $\epsilon = 5\sim10$). Moreover, if an application scenario accepts
individual DP, then we immediately obtain an algorithm named iDP-SignRP which
achieves excellent utilities even at small~$\epsilon$ (e.g., $\epsilon<0.5$).
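Two building blocks from the abstract are easy to sketch. First, OPORP: the following is a minimal Python sketch under simplifying assumptions (the sketch size $k$ divides $p$; the permutation and Rademacher sign vector are drawn once and shared across all inputs; normalization is to unit $l_2$ norm), not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 1024, 128                         # data dimension, sketch size (k divides p)
perm = rng.permutation(p)                # one permutation, shared across all inputs
signs = rng.choice([-1.0, 1.0], size=p)  # one Rademacher projection, also shared

def oporp(u):
    """OPORP sketch of u: normalize, permute once, split into k
    fixed-length bins, and take a signed sum within each bin
    (count-sketch style with fixed-length binning)."""
    u = u / np.linalg.norm(u)            # normalization step
    return (u[perm] * signs).reshape(k, p // k).sum(axis=1)

x = oporp(rng.standard_normal(p))        # k-dimensional sketch of a p-dim vector
```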
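Second, SignRP with DP bit flipping. The sketch below keeps only the signs $\mathrm{sign}(x_j) = \mathrm{sign}(\sum_i u_i w_{ij})$ of a Gaussian random projection and applies the standard randomized-response flip with probability $1/(1+e^{\epsilon})$ per bit; the paper's ``smooth flipping probability'' refines these per-coordinate flip probabilities, and the allocation of the privacy budget across the $k$ bits is glossed over here.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 1024, 64
W = rng.standard_normal((p, k))          # projection matrix with entries w_{ij}

def sign_rp(u):
    """SignRP: keep only the signs of the projected data,
    sign(x_j) = sign(sum_i u_i * w_ij)."""
    return np.sign(u @ W)

def flip_bits(bits, eps):
    """Standard DP bit flipping (randomized response): flip each sign
    with probability 1 / (1 + e^eps); the paper's smooth flipping
    probability refines this per coordinate."""
    q = 1.0 / (1.0 + np.exp(eps))
    return np.where(rng.random(bits.shape) < q, -bits, bits)

s = flip_bits(sign_rp(rng.standard_normal(p)), eps=5.0)
```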
Related papers
- Closed-Form Bounds for DP-SGD against Record-level Inference [18.85865832127335]
We focus on the popular DP-SGD algorithm, and derive simple closed-form bounds.
We obtain bounds for membership inference that match state-of-the-art techniques.
We present a novel data-dependent bound against attribute inference.
arXiv Detail & Related papers (2024-02-22T09:26:16Z)
- Private Fine-tuning of Large Language Models with Zeroth-order Optimization [51.19403058739522]
Differentially private gradient descent (DP-SGD) allows models to be trained in a privacy-preserving manner.
We introduce DP-ZO, a private fine-tuning framework for large language models by privatizing zeroth order optimization methods.
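A minimal sketch of the zeroth-order idea (a simplified single-evaluation illustration, not necessarily the paper's exact DP-ZO procedure; `loss_fn`, `mu`, `C`, and `sigma` are illustrative names): estimate the gradient from two loss evaluations along a random direction, then privatize the resulting scalar difference by clipping and adding Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_zo_step(theta, loss_fn, lr, mu, C, sigma):
    """One privatized zeroth-order (SPSA-style) step: the gradient is
    estimated from two loss values along a random direction z, and the
    scalar finite difference is clipped to C and perturbed with
    Gaussian noise before updating theta."""
    z = rng.standard_normal(theta.shape)
    d = (loss_fn(theta + mu * z) - loss_fn(theta - mu * z)) / (2.0 * mu)
    d_priv = float(np.clip(d, -C, C)) + rng.normal(0.0, sigma * C)
    return theta - lr * d_priv * z

# toy usage: one step on a quadratic, with no gradients computed anywhere
theta = dp_zo_step(np.ones(4), lambda t: float(t @ t),
                   lr=0.1, mu=1e-3, C=1.0, sigma=0.5)
```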
arXiv Detail & Related papers (2024-01-09T03:53:59Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
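For concreteness, a minimal sketch of the two updates (parameter names are illustrative, not from the paper): DP-SGD rescales each per-example gradient by $\min(1, C/\lVert g\rVert)$, DP-NSGD divides by $\lVert g\rVert + r$ for a small regularizer $r$, and both add Gaussian noise to the averaged result.

```python
import numpy as np

def private_step(params, grads, lr, C, sigma, rng, normalize=False, r=0.01):
    """One DP-SGD (clip) or DP-NSGD (normalize) step over a batch of
    per-example gradients; Gaussian noise is calibrated to the
    per-example sensitivity bound C (take C = 1 when normalizing,
    since normalized gradients have norm below 1)."""
    n = len(grads)
    if normalize:
        processed = [g / (np.linalg.norm(g) + r) for g in grads]        # DP-NSGD
    else:
        processed = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
                     for g in grads]                                    # DP-SGD
    noisy = sum(processed) / n + rng.normal(0.0, sigma * C / n, params.shape)
    return params - lr * noisy
```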
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- DP-FP: Differentially Private Forward Propagation for Large Models [2.062295244789704]
We show how to mitigate the performance drop by replacing the Differential Private Gradient Descent with a novel DP Forward-Propagation (DP-FP)
Our DP-FP achieves an average accuracy of 91.34% with privacy budgets less than 3, representing a 3.81% performance improvement over the state-of-the-art DP-SGD.
arXiv Detail & Related papers (2021-12-29T07:32:29Z)
- Optimal Accounting of Differential Privacy via Characteristic Function [25.78065563380023]
We propose a unification of recent advances (Rényi DP, privacy profiles, $f$-DP and the PLD formalism) via the characteristic function ($\phi$-function) of a certain ``worst-case'' privacy loss random variable.
We show that our approach allows natural adaptive composition like Rényi DP, provides exactly tight privacy accounting like PLD, and can be (often losslessly) converted to privacy profile and $f$-DP.
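As a quick reminder (standard probability, not the paper's full construction): the characteristic function of a random variable $X$ is $\phi_X(t) = \mathbb{E}[e^{itX}]$, $t \in \mathbb{R}$; it determines the distribution of $X$, and independent composition becomes pointwise multiplication, $\phi_{X+Y}(t) = \phi_X(t)\,\phi_Y(t)$, which is what makes it a natural lossless representation for privacy accounting.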
arXiv Detail & Related papers (2021-06-16T06:13:23Z)
- Lossless Compression of Efficient Private Local Randomizers [55.657133416044104]
Locally Differentially Private (LDP) Reports are commonly used for collection of statistics and machine learning in the federated setting.
In many cases the best known LDP algorithms require sending prohibitively large messages from the client device to the server.
This has led to significant efforts on reducing the communication cost of LDP algorithms.
arXiv Detail & Related papers (2021-02-24T07:04:30Z)
- Proactive DP: A Multiple Target Optimization Framework for DP-SGD [22.663029794467768]
We introduce a multiple target optimization framework for DP-SGD referred to as proactive DP.
The proactive DP scheme allows one to select the parameters of DP-SGD a priori, based on a fixed privacy budget.
We present a closed-form $(\epsilon,\delta)$-DP guarantee that connects all parameters in the DP-SGD setup.
arXiv Detail & Related papers (2021-02-17T21:19:39Z)
- RL for Latent MDPs: Regret Guarantees and a Lower Bound [74.41782017817808]
We consider the regret problem for reinforcement learning in latent Markov Decision Processes (LMDP)
In an LMDP, an MDP is randomly drawn from a set of $M$ possible MDPs at the beginning of the interaction, but the identity of the chosen MDP is not revealed to the agent.
We show that the key link is a notion of separation between the MDP system dynamics.
arXiv Detail & Related papers (2021-02-09T16:49:58Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate, on two popular deep learning tasks, the empirical advantages over standard gradient methods.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.