Enhancing Privacy in Decentralized Min-Max Optimization: A Differentially Private Approach
- URL: http://arxiv.org/abs/2508.07505v1
- Date: Sun, 10 Aug 2025 23:24:27 GMT
- Title: Enhancing Privacy in Decentralized Min-Max Optimization: A Differentially Private Approach
- Authors: Yueyang Quan, Chang Wang, Shengjie Zhai, Minghong Fang, Zhuqing Liu
- Abstract summary: Decentralized min-max optimization allows multi-agent systems to collaboratively solve global min-max optimization problems. Sharing model updates in such systems carries a risk of exposing sensitive data to inference attacks. We propose a novel privacy-preserving algorithm specifically designed for non-convex decentralized min-max optimization.
- Score: 4.464594945625068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized min-max optimization allows multi-agent systems to collaboratively solve global min-max optimization problems by facilitating the exchange of model updates among neighboring agents, eliminating the need for a central server. However, sharing model updates in such systems carries a risk of exposing sensitive data to inference attacks, raising significant privacy concerns. To mitigate these privacy risks, differential privacy (DP) has become a widely adopted technique for safeguarding individual data. Despite its advantages, implementing DP in decentralized min-max optimization poses challenges, as the added noise can hinder convergence, particularly in non-convex scenarios with the complex agent interactions inherent in min-max problems. In this work, we propose an algorithm called DPMixSGD (Differential Private Minmax Hybrid Stochastic Gradient Descent), a novel privacy-preserving algorithm specifically designed for non-convex decentralized min-max optimization. Our method builds on the state-of-the-art STORM-based algorithm, one of the fastest decentralized min-max solutions. We rigorously prove that the noise added to local gradients does not significantly compromise convergence performance, and we provide theoretical bounds to ensure privacy guarantees. To validate our theoretical findings, we conduct extensive experiments across various tasks and models, demonstrating the effectiveness of our approach.
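The abstract names the ingredients (a STORM-style hybrid gradient estimator, noise added to local gradients before sharing, and gossip exchange with neighboring agents) but not the exact update rule. The sketch below is a minimal single-agent illustration of how such pieces typically fit together; `grad_fn`, the clipping and noise parameters, and the update order are all assumptions for illustration, not the authors' DPMixSGD.

```python
import numpy as np

def dp_hybrid_minmax_step(x, y, x_prev, y_prev, d_x, d_y, grad_fn,
                          neighbors_x, neighbors_y, weights, rng,
                          beta=0.9, lr=0.01, clip=1.0, sigma=1.0):
    """One illustrative agent update: a STORM-style hybrid estimator,
    Gaussian perturbation of the shared directions, and gossip mixing.
    A sketch of the general recipe, not the paper's algorithm."""
    # Stochastic gradients at the current and previous iterates
    # (STORM-type estimators evaluate both on the same minibatch).
    gx, gy = grad_fn(x, y)
    gx_prev, gy_prev = grad_fn(x_prev, y_prev)

    # Hybrid (variance-reduced) gradient estimators.
    d_x = gx + (1.0 - beta) * (d_x - gx_prev)
    d_y = gy + (1.0 - beta) * (d_y - gy_prev)

    def privatize(d):
        # Clip to bound sensitivity, then add Gaussian noise before sharing.
        d = d * min(1.0, clip / (np.linalg.norm(d) + 1e-12))
        return d + rng.normal(0.0, sigma * clip, size=d.shape)

    # Gossip-average neighbors' iterates, then descend on x and ascend on y.
    x_new = sum(w * xn for w, xn in zip(weights, neighbors_x)) - lr * privatize(d_x)
    y_new = sum(w * yn for w, yn in zip(weights, neighbors_y)) + lr * privatize(d_y)
    return x_new, y_new, d_x, d_y
```

Here `neighbors_x`/`neighbors_y` are assumed to include the agent's own iterate, and `weights` would be the agent's row of a doubly stochastic mixing matrix.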
Related papers
- A Secure and Private Distributed Bayesian Federated Learning Design [56.92336577799572]
Distributed Federated Learning (DFL) enables decentralized model training across large-scale systems without a central parameter server.
DFL faces three critical challenges: privacy leakage from honest-but-curious neighbors, slow convergence due to the lack of central coordination, and vulnerability to Byzantine adversaries aiming to degrade model accuracy.
We propose a novel DFL framework that integrates Byzantine robustness, privacy preservation, and convergence acceleration.
arXiv Detail & Related papers (2026-02-23T16:12:02Z)
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private stochastic convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications.
Current methods, such as those based on differentially private stochastic gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility.
We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges (a toy aggregation sketch follows this entry).
arXiv Detail & Related papers (2025-02-13T02:05:45Z)
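The summary above names the robust statistics involved (median and trimmed mean) used to curb noise accumulation. Below is a self-contained toy of coordinate-wise trimmed-mean aggregation of per-user gradients; it omits the paper's DP noise calibration and linear-time machinery entirely, and the function and its parameters are illustrative assumptions.

```python
import numpy as np

def trimmed_mean(per_user_grads, trim_frac=0.1):
    """Coordinate-wise trimmed mean: per coordinate, drop the largest and
    smallest `trim_frac` fraction of user values, then average the rest.
    An illustrative aggregator, not the paper's full estimator."""
    g = np.sort(np.asarray(per_user_grads), axis=0)  # sort users per coordinate
    k = int(len(per_user_grads) * trim_frac)
    core = g[k:len(per_user_grads) - k] if k > 0 else g
    return core.mean(axis=0)

# Toy usage: 100 users, 5-dimensional gradients, a few heavy-tailed users.
rng = np.random.default_rng(0)
grads = rng.normal(0.0, 1.0, size=(100, 5))
grads[:3] += 50.0
print(trimmed_mean(grads, trim_frac=0.05))  # stays near 0 despite outliers
```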
- Differentially Private Decentralized Optimization with Relay Communication [1.2695958417031445]
We introduce a new measure, Privacy Leakage Frequency (PLF), which reveals the relationship between communication and the privacy leakage of algorithms.
A novel differentially private decentralized primal-dual algorithm named DP-RECAL is proposed, which takes advantage of an operator splitting method and a relay communication mechanism to achieve a lower PLF.
arXiv Detail & Related papers (2022-12-21T09:05:36Z)
- Decentralized Nonconvex Optimization with Guaranteed Privacy and Accuracy [34.24521534464185]
Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning over sensitive data.
We propose an algorithm that achieves both privacy protection and guaranteed accuracy.
The algorithm is efficient in both communication and computation.
arXiv Detail & Related papers (2022-12-14T22:36:13Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization [44.52870407321633]
The holy grail of these settings is to guarantee the optimal trade-off between privacy and excess population loss.
We provide a general framework for solving differentially private minimax optimization (DP-SMO) problems.
Our framework is inspired by the recently proposed Phased-ERM method [20] for nonsmooth differentially private stochastic convex optimization (DP-SCO).
arXiv Detail & Related papers (2022-06-01T10:03:20Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) method is proposed to maximize the cache hit rates of devices in mobile edge computing (MEC) networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z)
- A Decentralized Adaptive Momentum Method for Solving a Class of Min-Max Optimization Problems [9.653157073271021]
We develop a decentralized adaptive momentum (ADAM)-type algorithm for solving min-max optimization problems (a toy update sketch follows this entry).
We obtain non-asymptotic rates of convergence of the proposed algorithm for finding a (stochastic) first-order Nash equilibrium point.
arXiv Detail & Related papers (2021-06-10T22:32:01Z)
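The entry above describes an Adam-type update applied to min-max problems. As a rough illustration only (the paper's actual recursion, mixing scheme, and parameter choices are not reproduced here), a single agent's step might combine Adam moment estimates with descent on x and ascent on y plus gossip mixing:

```python
import numpy as np

def adam_minmax_step(x, y, gx, gy, m, v, t, neighbors_x, weights,
                     lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One illustrative agent step: Adam-style moments driving descent on x
    and ascent on y, with gossip mixing of x. A toy, not the paper's method."""
    # Track moments of the combined direction; the y-gradient enters with a
    # minus sign so that a generic "descent" step ascends on y.
    g = np.concatenate([gx, -gy])
    m = b1 * m + (1.0 - b1) * g
    v = b2 * v + (1.0 - b2) * g * g
    m_hat = m / (1.0 - b1 ** t)          # bias correction, t >= 1
    v_hat = v / (1.0 - b2 ** t)
    step = lr * m_hat / (np.sqrt(v_hat) + eps)

    # Gossip-average x with neighbors before applying the adaptive step.
    x_mix = sum(w * xn for w, xn in zip(weights, neighbors_x))
    x_new = x_mix - step[:x.size]
    y_new = y - step[x.size:]            # ascends on y given the sign flip above
    return x_new, y_new, m, v
```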
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Topology-aware Differential Privacy for Decentralized Image Classification [81.2202290003513]
Top-DP is a novel solution to optimize the differential privacy protection of decentralized image classification systems.
We leverage the unique features of decentralized communication topologies to reduce the noise scale and improve model usability (a toy illustration of this idea follows this entry).
arXiv Detail & Related papers (2020-06-14T06:42:21Z)
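The Top-DP summary above hinges on one idea: the communication topology can be exploited to shrink the DP noise scale. The toy below illustrates only the underlying statistical effect (averaging independent Gaussian noise over a node's neighborhood reduces its standard deviation); it is an assumption-laden caricature, not Top-DP's actual noise calibration.

```python
import numpy as np

def degree_scaled_noise_std(base_sigma, degree):
    """Toy heuristic: averaging independent Gaussian noise from an agent and
    its `degree` neighbors shrinks the noise std by 1/sqrt(degree + 1), so a
    denser neighborhood tolerates a smaller effective noise scale for the
    same utility target. Not Top-DP's actual calibration."""
    return base_sigma / np.sqrt(degree + 1)

# Compare a sparse ring with a complete graph over 8 agents.
for name, degree in [("ring", 2), ("complete", 7)]:
    print(name, degree_scaled_noise_std(1.0, degree))
```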