Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States
        - URL: http://arxiv.org/abs/2506.00158v1
 - Date: Fri, 30 May 2025 18:55:32 GMT
 - Title: Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States
 - Authors: Eli Chien, Wei-Ning Chen, Pan Li
 - Abstract summary: We show that convergent privacy bounds can be established for zeroth-order optimization. Our analysis generalizes the celebrated privacy amplification-by-iteration framework to the setting of smooth loss functions. It induces better DP zeroth-order algorithmic designs previously unknown to the literature.
 - Score: 23.033229440303355
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract: Zeroth-order optimization has emerged as a promising approach for fine-tuning large language models on domain-specific data, particularly under differential privacy (DP) and memory constraints. While first-order methods have been extensively studied from a privacy perspective, the privacy analysis and algorithmic design for zeroth-order methods remain significantly underexplored. A critical open question concerns hidden-state DP analysis: although convergent privacy bounds are known for first-order methods, it has remained unclear whether similar guarantees can be established for zeroth-order methods. In this work, we provide an affirmative answer by proving a convergent DP bound for zeroth-order optimization. Our analysis generalizes the celebrated privacy amplification-by-iteration framework to the setting of smooth loss functions in zeroth-order optimization. Furthermore, it induces better DP zeroth-order algorithmic designs that were previously unknown in the literature.
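As a minimal illustration of the kind of algorithm the abstract describes, the sketch below performs one noisy two-point zeroth-order update: a directional derivative is estimated from two loss evaluations, clipped to bound sensitivity, and perturbed with Gaussian noise. The function name, clipping threshold, noise scale, and smoothing radius are illustrative placeholders, not the paper's calibrated algorithm.

```python
import numpy as np

def dp_zeroth_order_step(theta, loss_fn, lr=0.1, mu=1e-3,
                         clip=1.0, sigma=1.0, rng=None):
    """One noisy zeroth-order update (hypothetical sketch, not the
    paper's exact mechanism).

    Estimates a directional derivative along a random direction from
    two loss evaluations, clips it to bound sensitivity, and adds
    Gaussian noise before taking the step.
    """
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(theta.shape)            # random probe direction
    # Two-point finite-difference estimate of the directional derivative.
    g = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2.0 * mu)
    g = float(np.clip(g, -clip, clip))              # bound the scalar's sensitivity
    g_noisy = g + sigma * rng.standard_normal()     # Gaussian mechanism on the scalar
    return theta - lr * g_noisy * u
```

Because only a noisy scalar multiplies the shared direction, the per-step noise cost does not scale with the parameter dimension, which is one reason zeroth-order methods are attractive for private fine-tuning of large models.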
 
       
      
        Related papers
        - On the Inherent Privacy of Zeroth Order Projected Gradient Descent [16.381524087701]
We show that there exist strongly convex objective functions such that running (Projected) Zeroth-Order Gradient Descent (ZO-GD) is not differentially private. We also show that even with random iterations, the privacy loss in ZO-GD can grow superlinearly with the number of iterations.
arXiv  Detail & Related papers  (2025-07-08T02:38:14Z) - Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines. We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv  Detail & Related papers  (2024-12-06T05:31:08Z) - Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via the following simple observation.
We obtain several improved algorithms for private optimization problems, including decomposable submodular maximization and set cover.
arXiv  Detail & Related papers  (2024-05-28T19:02:30Z) - Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv  Detail & Related papers  (2024-03-19T17:54:49Z) - Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
 Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv  Detail & Related papers  (2024-03-01T04:50:04Z) - Private Fine-tuning of Large Language Models with Zeroth-order Optimization [51.19403058739522]
Differentially private gradient descent (DP-SGD) allows models to be trained in a privacy-preserving manner. We introduce DP-ZO, a private fine-tuning framework for large language models that privatizes zeroth-order optimization methods.
arXiv  Detail & Related papers  (2024-01-09T03:53:59Z) - DPZero: Private Fine-Tuning of Language Models without Backpropagation [49.365749361283704]
We introduce DPZero, a novel private zeroth-order algorithm with nearly dimension-independent rates.
The memory efficiency of DPZero is demonstrated in privately fine-tuning RoBERTa and OPT on several downstream tasks.
arXiv  Detail & Related papers  (2023-10-14T18:42:56Z) - Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives [10.528569272279999]
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
 Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
arXiv  Detail & Related papers  (2023-08-02T13:30:33Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv  Detail & Related papers  (2022-09-09T08:54:13Z) - Private Alternating Least Squares: Practical Private Matrix Completion with Tighter Rates [34.023599653814415]
We study the problem of differentially private (DP) matrix completion under user-level privacy.
We design a joint differentially private variant of the popular Alternating-Least-Squares (ALS) method.
arXiv  Detail & Related papers  (2021-07-20T23:19:11Z) - No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization through the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv  Detail & Related papers  (2021-02-24T18:52:24Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
arXiv  Detail & Related papers  (2020-12-23T17:07:26Z) 
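The shuffle-model mechanism summarized in the last entry above can be sketched in a few lines: each user applies epsilon-local randomized response to their bit, and a trusted shuffler then uniformly permutes the reports. The function and parameter names are illustrative, and the amplified central DP guarantee comes from the cited analysis, not from this code.

```python
import math
import random

def shuffle_amplified_reports(bits, eps, seed=None):
    """Local randomized response followed by a uniform shuffle (sketch).

    Each user keeps their bit with probability e^eps / (e^eps + 1) and
    flips it otherwise, yielding an eps-locally-DP report; shuffling
    the reports then amplifies the central DP guarantee well beyond eps.
    """
    rng = random.Random(seed)
    p_keep = math.exp(eps) / (math.exp(eps) + 1.0)
    reports = [b if rng.random() < p_keep else 1 - b for b in bits]
    rng.shuffle(reports)  # the shuffler hides which report came from whom
    return reports
```

The key design point is that amplification requires no extra noise: the same local reports become much more private once the link between user and report is destroyed by the permutation.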
        This list is automatically generated from the titles and abstracts of the papers in this site.
       
     
           This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.