RDP: Ranked Differential Privacy for Facial Feature Protection in Multiscale Sparsified Subspace
- URL: http://arxiv.org/abs/2408.00294v1
- Date: Thu, 1 Aug 2024 05:41:59 GMT
- Title: RDP: Ranked Differential Privacy for Facial Feature Protection in Multiscale Sparsified Subspace
- Authors: Lu Ou, Shaolin Liao, Shihui Gao, Guandong Huang, Zheng Qi
- Abstract summary: Face recognition systems face a real threat of being breached by potential adversaries who are able to access users' face images and use them to intrude into the face recognition systems.
We propose a novel privacy protection method, named Ranked Differential Privacy (RDP), that protects sensitive facial features in multiscale sparsified feature subspaces.
Two methods are proposed to solve the nonlinear Lagrange Multiplier (LM) problem and obtain the optimal noise scale parameters.
- Score: 1.6163129903911515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the widespread sharing of personal face images in applications' public databases, face recognition systems face a real threat of being breached by potential adversaries who are able to access users' face images and use them to intrude into the face recognition systems. In this paper, we propose a novel privacy protection method, named "Ranked Differential Privacy (RDP)", that protects sensitive facial features in multiscale sparsified feature subspaces by accounting for the influence (weight) of the ranked feature coefficients on the privacy budget. After the multiscale feature decomposition, lightweight Laplacian noise is added to the dimension-reduced sparsified feature coefficients according to the geometric superposition method. We then rigorously prove that RDP satisfies Differential Privacy. Next, the nonlinear Lagrange Multiplier (LM) method is formulated for the constrained optimization problem of maximizing the utility, i.e., the visual quality of the protected face images under sanitizing noise, given a facial-feature privacy budget. Two methods are then proposed to solve the nonlinear LM problem and obtain the optimal noise scale parameters: 1) the analytical Normalization Approximation (NA) method, which uses an identical average noise scale parameter and suits real-time online applications; and 2) the LM optimization Gradient Descent (LMGD) numerical method, which obtains the nonlinear solution through iterative updating and suits more accurate offline applications. Experimental results on two real-world datasets show that our proposed RDP outperforms other state-of-the-art methods: at a privacy budget of 0.2, the PSNR (Peak Signal-to-Noise Ratio) of RDP is about 10 dB higher than the best PSNR of all compared methods (a tenfold improvement in the underlying signal-to-noise ratio).
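To make the pipeline concrete, here is a minimal sketch of an RDP-style sanitizer, not the authors' implementation: it assumes PyWavelets (pywt) for the multiscale decomposition, ranks coefficients by magnitude for the sparsification, and splits the privacy budget uniformly across the retained coefficients as a stand-in for the NA method; the geometric superposition and LMGD steps are omitted, and the sensitivity proxy is purely illustrative.

```python
import numpy as np
import pywt  # PyWavelets, assumed here for the multiscale decomposition

def rdp_sanitize(image, epsilon=0.2, keep_ratio=0.05, wavelet="haar", level=3):
    """Hypothetical RDP-style sanitizer: decompose, rank, sparsify, add noise.

    The uniform per-coefficient budget split below is a stand-in for the
    paper's Normalization Approximation (NA); the rank-dependent scales
    from LMGD and the geometric superposition step are not reproduced.
    """
    # 1) Multiscale feature decomposition (2-D wavelet transform).
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)

    # 2) Rank coefficients by magnitude; keep the top fraction (sparsification).
    k = max(1, int(keep_ratio * flat.size))
    ranked = np.argsort(np.abs(flat), axis=None)[::-1]
    keep = ranked[:k]
    sparse = np.zeros_like(flat)
    sparse.flat[keep] = flat.flat[keep]

    # 3) Laplace noise under total budget epsilon; the sensitivity proxy and
    #    uniform split b = k * delta / epsilon are illustrative only.
    delta = np.abs(flat.flat[keep]).max()
    sparse.flat[keep] += np.random.laplace(0.0, delta * k / epsilon, size=k)

    # 4) Reconstruct the sanitized image from the noisy sparse coefficients.
    noisy = pywt.array_to_coeffs(sparse, slices, output_format="wavedec2")
    return pywt.waverec2(noisy, wavelet)
```

In this sketch, a smaller `keep_ratio` concentrates the budget on fewer high-ranked coefficients; balancing that against reconstruction quality is exactly the constrained optimization the LM formulation addresses.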
Related papers
- Dual-Priv Pruning : Efficient Differential Private Fine-Tuning in Multimodal Large Language Models [21.598534853947676]
We propose a framework that employs two complementary pruning mechanisms for Differential Privacy (DP) fine-tuning in MLLMs.
Our approach consistently utilizes less memory than standard DP-SGD.
To the best of our knowledge, we are the first to explore DP fine-tuning in MLLMs.
arXiv Detail & Related papers (2025-06-08T10:33:01Z)
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications.
Current methods, such as those based on differentially private gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility.
We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
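As a concrete illustration of the robust statistics the summary names, a trimmed mean can be sketched in a few lines; this is only the aggregation step under an assumed trim fraction, not the paper's full linear-time, user-level DP algorithm:

```python
import numpy as np

def trimmed_mean(user_stats, trim_fraction=0.1):
    """Average per-user statistics after discarding both extreme tails.

    Illustrative aggregation step only; the paper wraps robust statistics
    like this (and the median) inside a user-level DP mechanism.
    """
    x = np.sort(np.asarray(user_stats, dtype=float))
    t = int(trim_fraction * x.size)
    # Fall back to the median when trimming would remove everything.
    return x[t:x.size - t].mean() if x.size > 2 * t else float(np.median(x))
```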
arXiv Detail & Related papers (2025-02-13T02:05:45Z)
- Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training [31.559864332056648]
We propose a generic differential privacy framework with heterogeneous noise (DP-Hero).
Atop DP-Hero, we instantiate a heterogeneous version of DP-SGD, where the noise injected into gradient updates is heterogeneous and guided by prior-established model parameters.
We conduct comprehensive experiments to verify and explain the effectiveness of the proposed DP-Hero, showing improved training accuracy compared with state-of-the-art works.
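A hedged sketch of what a heterogeneous-noise DP-SGD step might look like follows; the per-coordinate shaping by `prior_weights` and its normalization are hypothetical stand-ins for how DP-Hero actually allocates noise from pre-established model parameters:

```python
import numpy as np

def hetero_dp_sgd_step(params, grads, prior_weights, lr=0.1, clip_norm=1.0, sigma=1.0):
    """One heterogeneous-noise DP-SGD update (sketch).

    The gradient is clipped as in standard DP-SGD, but the Gaussian noise
    is shaped per coordinate by `prior_weights` (e.g., derived from
    pre-established model parameters). The shaping and normalization rules
    here are hypothetical, not DP-Hero's actual allocation.
    """
    g = np.asarray(grads, dtype=float)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # norm clipping

    # Normalize so total noise power matches homogeneous DP-SGD with scale sigma.
    w = np.asarray(prior_weights, dtype=float)
    w = w * np.sqrt(w.size / np.sum(w ** 2))

    noise = np.random.normal(0.0, sigma * clip_norm, size=g.shape) * w
    return np.asarray(params, dtype=float) - lr * (g + noise)
```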
arXiv Detail & Related papers (2024-09-05T08:40:54Z)
- Differentially Private Fine-Tuning of Diffusion Models [22.454127503937883]
The integration of Differential Privacy with diffusion models (DMs) presents a promising yet challenging frontier.
Recent developments in this field have highlighted the potential for generating high-quality synthetic data by pre-training on public data.
We propose a strategy optimized for private diffusion models, which minimizes the number of trainable parameters to enhance the privacy-utility trade-off.
arXiv Detail & Related papers (2024-06-03T14:18:04Z)
- Improved Communication-Privacy Trade-offs in $L_2$ Mean Estimation under Streaming Differential Privacy [47.997934291881414]
Existing mean estimation schemes are usually optimized for $L_\infty$ geometry and rely on random rotation or Kashin's representation to adapt to $L_2$ geometry.
We introduce a novel privacy accounting method for the sparsified Gaussian mechanism that incorporates the randomness inherent in sparsification into the DP.
Unlike previous approaches, our accounting algorithm directly operates in $L_2$ geometry, yielding MSEs that converge quickly to those of the Gaussian mechanism.
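For illustration, a sparsified Gaussian mechanism of the kind this accounting targets can be sketched as follows; the inverse-probability rescaling is one common unbiasing choice and is an assumption here, as is the fixed noise scale:

```python
import numpy as np

def sparsified_gaussian(v, keep_prob=0.1, sigma=1.0):
    """Sparsified Gaussian mechanism (sketch): sample coordinates, rescale
    for unbiasedness, then add Gaussian noise to the survivors. The paper's
    contribution is accounting for the sparsification randomness itself
    when computing the DP guarantee.
    """
    v = np.asarray(v, dtype=float).ravel()
    mask = np.random.rand(v.size) < keep_prob  # independent coordinate sampling
    out = np.zeros_like(v)
    out[mask] = v[mask] / keep_prob + np.random.normal(0.0, sigma, mask.sum())
    return out
```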
arXiv Detail & Related papers (2024-05-02T03:48:47Z)
- Geometry of Sensitivity: Twice Sampling and Hybrid Clipping in Differential Privacy with Optimal Gaussian Noise and Application to Deep Learning [18.92302645198466]
We study the problem of the construction of optimal randomization in Differential Privacy.
Finding the minimal perturbation for properly-selected sensitivity sets is a central problem in DP research.
arXiv Detail & Related papers (2023-09-06T02:45:08Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy [67.33715954653098]
We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM integrates Sharpness-Aware Minimization (SAM) to generate locally flat models with improved stability and weight-perturbation robustness.
To further reduce the magnitude of random noise while achieving better performance, we propose DP-FedSAM-$\mathrm{top}_k$ by adopting a local update sparsification technique.
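The local update sparsification step can be sketched as a plain top-k magnitude filter; this shows only that one step under stated assumptions, with the SAM perturbation, clipping, and Gaussian noise of DP-FedSAM omitted:

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude coordinates of a flattened local
    update; all DP machinery (SAM perturbation, clipping, Gaussian noise)
    is omitted from this sketch.
    """
    update = np.asarray(update, dtype=float).ravel()
    idx = np.argpartition(np.abs(update), -k)[-k:]  # indices of top-k entries
    out = np.zeros_like(update)
    out[idx] = update[idx]
    return out
```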
arXiv Detail & Related papers (2023-05-01T15:19:09Z)
- On Private Online Convex Optimization: Optimal Algorithms in $\ell_p$-Geometry and High Dimensional Contextual Bandits [9.798304879986604]
We study the Differentially Private (DP) convex optimization problem with streaming data that is sampled from a distribution and arrives sequentially.
We also consider the continual release model, where parameters related to private information are updated and released upon each new data point, a setting often known as online algorithms.
Our algorithm achieves, in linear time, the optimal excess risk when $1 < p \leq 2$, and state-of-the-art excess risk matching the non-private lower bounds when $2 < p \leq \infty$.
arXiv Detail & Related papers (2022-06-16T12:09:47Z)
- Direct Dense Pose Estimation [138.56533828316833]
Dense human pose estimation is the problem of learning dense correspondences between RGB images and the surfaces of human bodies.
Prior dense pose estimation methods are all based on the Mask R-CNN framework and operate in a top-down manner, first attempting to identify a bounding box for each person.
We propose a novel alternative method for solving the dense pose estimation problem, called Direct Dense Pose (DDP).
arXiv Detail & Related papers (2022-04-04T06:14:38Z)
- A Differentially Private Framework for Deep Learning with Convexified Loss Functions [4.059849656394191]
Differential privacy (DP) has been applied in deep learning for preserving privacy of the underlying training sets.
Existing DP practice falls into three categories - objective perturbation, gradient perturbation and output perturbation.
We propose a novel output perturbation framework by injecting DP noise into a randomly sampled neuron.
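A minimal sketch of this output-perturbation idea follows; perturbing one row of a weight matrix and the fixed noise `scale` are hypothetical placeholders, since the paper calibrates noise to the sensitivity of the convexified loss:

```python
import numpy as np

def perturb_random_neuron(weight_matrix, scale=0.1, rng=None):
    """Output perturbation via a randomly sampled neuron (sketch): add noise
    to one row of a trained weight matrix rather than to the whole model.
    The Laplace noise and its `scale` are placeholders; the paper calibrates
    noise to the sensitivity of the convexified loss.
    """
    rng = np.random.default_rng() if rng is None else rng
    W = np.array(weight_matrix, dtype=float, copy=True)
    neuron = rng.integers(W.shape[0])            # pick one neuron (row) at random
    W[neuron] += rng.laplace(0.0, scale, W.shape[1])
    return W
```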
arXiv Detail & Related papers (2022-04-03T11:10:05Z)
- Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.