Differentially Private Distributed Convex Optimization
- URL: http://arxiv.org/abs/2302.14514v1
- Date: Tue, 28 Feb 2023 12:07:27 GMT
- Title: Differentially Private Distributed Convex Optimization
- Authors: Minseok Ryu and Kibaek Kim
- Abstract summary: In distributed optimization (DO), multiple agents cooperate to minimize a global objective function, expressed as a sum of local objectives.
Although locally stored data are not shared with other agents, the information communicated among them can still be used to reconstruct the data, which could limit the practical usage of DO in applications with sensitive data.
We propose a privacy-preserving DO algorithm for constrained convex optimization models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers distributed optimization (DO) where multiple agents
cooperate to minimize a global objective function, expressed as a sum of local
objectives, subject to some constraints. In DO, each agent iteratively solves a
local optimization model constructed by its own data and communicates some
information (e.g., a local solution) with its neighbors until a global solution
is obtained. Even though locally stored data are not shared with other agents,
it is still possible to reconstruct the data from the information communicated
among agents, which could limit the practical usage of DO in applications with
sensitive data. To address this issue, we propose a privacy-preserving DO
algorithm for constrained convex optimization models, which provides a
statistical guarantee of data privacy, known as differential privacy, and a
sequence of iterates that converges to an optimal solution in expectation. The
proposed algorithm generalizes a linearized alternating direction method of
multipliers by introducing a multiple local updates technique to reduce
communication costs and incorporating an objective perturbation method in the
local optimization models to compute and communicate randomized feasible local
solutions that cannot be utilized to reconstruct the local data, thus
preserving data privacy. In the presence of convex constraints, we show
that, while the proposed algorithm and its output-perturbation counterpart
provide the same level of data privacy, the objective perturbation used in
the proposed algorithm can yield better solutions than the widely adopted
output perturbation method, which randomizes the local solutions by adding
noise. We present the details of
privacy and convergence analyses and numerically demonstrate the effectiveness
of the proposed algorithm by applying it in two different applications, namely,
distributed control of power flow and federated learning, where data privacy is
of concern.
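The contrast the abstract draws between output perturbation and objective perturbation can be illustrated on a toy constrained problem. This is a minimal sketch, not the paper's linearized ADMM algorithm: the quadratic objective, the interval constraint, and the Laplace noise scale are all illustrative assumptions. The key point it shows is that objective perturbation randomizes the problem before solving, so the released solution always stays feasible, whereas output perturbation adds noise after solving and can leave the feasible set.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_local(a, lin=0.0):
    # Minimize (x - a)^2 + lin * x subject to 0 <= x <= 1.
    # Unconstrained minimizer is a - lin/2; project onto [0, 1].
    return min(max(a - lin / 2.0, 0.0), 1.0)

def output_perturbation(a, scale):
    # Solve exactly, then add Laplace noise to the solution.
    # The noisy output may fall outside the feasible set [0, 1].
    return solve_local(a) + rng.laplace(scale=scale)

def objective_perturbation(a, scale):
    # Perturb the objective with a random linear term, then solve.
    # The released solution is always feasible because the solver projects.
    lin = rng.laplace(scale=scale)
    return solve_local(a, lin=lin)

x_out = output_perturbation(0.7, 0.5)   # may lie outside [0, 1]
x_obj = objective_perturbation(0.7, 0.5)
assert 0.0 <= x_obj <= 1.0
```

Calibrating the noise scale to a formal differential-privacy guarantee requires the sensitivity analysis developed in the paper; the sketch only shows where the randomness enters in each scheme.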
Related papers
- New Solutions Based on the Generalized Eigenvalue Problem for the Data Collaboration Analysis [10.58933333850352]
Data Collaboration Analysis (DCA) is noted for its efficiency in terms of computational cost and communication load.
Existing optimization problems for determining the necessary collaborative functions have faced challenges.
This research addresses these issues by formulating the optimization problem through the segmentation of matrices into column vectors.
arXiv Detail & Related papers (2024-04-22T13:26:42Z)
- AAA: an Adaptive Mechanism for Locally Differential Private Mean Estimation [42.95927712062214]
Local differential privacy (LDP) is a strong privacy standard that has been adopted by popular software systems.
We propose the advanced adaptive additive (AAA) mechanism, which is a distribution-aware approach that addresses the average utility.
We provide rigorous privacy proofs, utility analyses, and extensive experiments comparing AAA with state-of-the-art mechanisms.
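As background for LDP mean estimation, the classic baseline that adaptive mechanisms like AAA aim to improve is per-user Laplace noise. The sketch below is that baseline, not the AAA mechanism itself; the bounds [0, 1] and the privacy budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ldp_release(x, eps, lo=0.0, hi=1.0):
    # Each user clips their value to [lo, hi] and adds Laplace noise
    # locally; the sensitivity of one bounded value is (hi - lo).
    x = min(max(x, lo), hi)
    return x + rng.laplace(scale=(hi - lo) / eps)

def estimate_mean(values, eps):
    # The server averages the noisy reports; the added noise averages
    # out at rate O(1 / (eps * sqrt(n))).
    return float(np.mean([ldp_release(v, eps) for v in values]))

est = estimate_mean([0.4] * 10_000, eps=1.0)  # close to 0.4 for large n
```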
arXiv Detail & Related papers (2024-04-02T04:22:07Z)
- Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives [10.528569272279999]
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
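The idea of a time-varying noise variance can be sketched as a clipped, noised aggregation step whose noise level depends on the round index. This is a simplified stand-in, not the paper's allocation rule: the decay schedule, clipping threshold, and base scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_federated_round(grads, t, clip=1.0, sigma0=1.0):
    # Clip each client's shared gradient to bound its sensitivity,
    # then add Gaussian noise whose standard deviation decays with
    # the round index t (a simple stand-in for a dynamically
    # allocated noise schedule).
    sigma = sigma0 / np.sqrt(t + 1)
    noisy = []
    for g in grads:
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip / max(norm, 1e-12))
        noisy.append(g + rng.normal(scale=sigma, size=g.shape))
    return np.mean(noisy, axis=0)

grads = [np.ones(4), 2.0 * np.ones(4)]
update = dp_federated_round(grads, t=0)
```

Choosing the schedule to minimize an optimization-error bound, as the paper does, requires the bound's constants; any fixed decaying schedule only mimics the shape of such a solution.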
arXiv Detail & Related papers (2023-08-02T13:30:33Z)
- Personalized Graph Federated Learning with Differential Privacy [6.282767337715445]
This paper presents a personalized graph federated learning (PGFL) framework in which distributedly connected servers and their respective edge devices collaboratively learn device or cluster-specific models.
We study a variant of the PGFL implementation that utilizes differential privacy, specifically zero-concentrated differential privacy, where a noise sequence perturbs model exchanges.
Our analysis shows that the algorithm ensures local differential privacy for all clients in terms of zero-concentrated differential privacy.
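Zero-concentrated differential privacy (zCDP) calibrates Gaussian noise directly: a mechanism with sensitivity Δ satisfies ρ-zCDP when its noise standard deviation is σ = Δ / √(2ρ). A minimal sketch of perturbing a model exchange this way (the sensitivity and ρ values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def zcdp_gaussian(model, sensitivity, rho):
    # Gaussian mechanism under rho-zCDP: sigma = sensitivity / sqrt(2 * rho).
    # Smaller rho (stronger privacy) means larger noise.
    sigma = sensitivity / np.sqrt(2.0 * rho)
    return model + rng.normal(scale=sigma, size=model.shape)

shared = zcdp_gaussian(np.zeros(3), sensitivity=1.0, rho=0.5)
```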
arXiv Detail & Related papers (2023-06-10T09:52:01Z)
- On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since sensitive data are involved, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Sample-based and Feature-based Federated Learning via Mini-batch SSCA [18.11773963976481]
This paper investigates sample-based and feature-based federated optimization.
We show that the proposed algorithms can preserve data privacy through the model aggregation mechanism.
We also show that the proposed algorithms converge to Karush-Kuhn-Tucker points of the respective federated optimization problems.
arXiv Detail & Related papers (2021-04-13T08:23:46Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows the inference of private data.
Perturbations chosen independently at every agent protect against this but result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
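A simple instance of a nullspace-style condition is drawing correlated perturbations that sum to zero across agents: each local estimate is masked, yet the network-wide average is unaffected. This sketch is a simplified stand-in for the paper's graph-homomorphic construction, which additionally accounts for the network topology.

```python
import numpy as np

rng = np.random.default_rng(4)

def zero_sum_perturbations(n_agents, dim):
    # Draw i.i.d. Gaussian noise, then subtract the cross-agent mean
    # so the perturbations sum to zero over the network: each agent's
    # local estimate is masked, but the aggregate is untouched.
    z = rng.normal(size=(n_agents, dim))
    return z - z.mean(axis=0, keepdims=True)

estimates = rng.normal(size=(5, 3))
masked = estimates + zero_sum_perturbations(5, 3)
# The average over agents equals the average of the unmasked estimates.
assert np.allclose(masked.mean(axis=0), estimates.mean(axis=0))
```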
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.