Differentially Private ADMM for Convex Distributed Learning: Improved
Accuracy via Multi-Step Approximation
- URL: http://arxiv.org/abs/2005.07890v1
- Date: Sat, 16 May 2020 07:17:31 GMT
- Title: Differentially Private ADMM for Convex Distributed Learning: Improved
Accuracy via Multi-Step Approximation
- Authors: Zonghao Huang and Yanmin Gong
- Abstract summary: The Alternating Direction Method of Multipliers (ADMM) is a popular algorithm for distributed learning.
When the training data are sensitive, the exchanged iterates raise serious privacy concerns.
We propose a new differentially private distributed ADMM with improved accuracy for a wide range of convex learning problems.
- Score: 10.742065340992525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Alternating Direction Method of Multipliers (ADMM) is a popular algorithm for
distributed learning, in which a network of nodes collaboratively solves a
regularized empirical risk minimization problem through iterative local
computations on distributed data and exchanges of iterates. When the training
data are sensitive, the exchanged iterates raise serious privacy concerns. In
this paper, we propose a new differentially private distributed ADMM
algorithm with improved accuracy for a wide range of convex learning problems.
In our proposed algorithm, we approximate the objective function in the local
computation so that calibrated noise can be introduced into the iterate
updates robustly, and we allow multiple primal-variable updates per node in
each iteration. Our theoretical results demonstrate that our approach obtains
higher utility through such multiple approximate updates and achieves error
bounds asymptotically matching the state-of-the-art ones for differentially
private empirical risk minimization.
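To make the mechanism concrete, below is a minimal sketch of one node's primal step under the multi-step approximation idea. All names (dp_admm_local_update, grad_fn, neighbors_avg, sigma, steps) are illustrative, and the exact first-order approximation, penalty term, and noise calibration in the paper may differ.

```python
import numpy as np

def dp_admm_local_update(x, grad_fn, neighbors_avg, dual, rho=1.0,
                         eta=0.1, sigma=0.5, steps=3, rng=None):
    """One node's primal update: several noisy first-order approximate steps.

    grad_fn evaluates the local empirical-risk gradient; neighbors_avg is
    the average of neighbors' latest iterates; dual is the node's dual
    variable; sigma is the Gaussian noise scale, assumed calibrated to the
    gradient sensitivity and the per-iteration privacy budget.
    """
    rng = rng or np.random.default_rng()
    for _ in range(steps):
        # First-order approximation of the local objective around x,
        # plus the ADMM penalty pulling x toward the neighborhood average.
        g = grad_fn(x) + dual + rho * (x - neighbors_avg)
        # Calibrated Gaussian noise keeps the released iterate private.
        g_noisy = g + sigma * rng.standard_normal(x.shape)
        x = x - eta * g_noisy
    return x
```

Only the final, already-perturbed iterate is exchanged with neighbors; each of the noisy updates consumes part of the node's privacy budget, which is the utility-privacy trade-off the paper's analysis quantifies.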
Related papers
- Optimizing the Optimal Weighted Average: Efficient Distributed Sparse Classification [50.406127962933915]
ACOWA allows an extra round of communication to achieve noticeably better approximation quality with minor runtime increases.
Results show that ACOWA obtains solutions that are more faithful to the empirical risk minimizer and attain substantially higher accuracy than other distributed algorithms.
arXiv Detail & Related papers (2024-06-03T19:43:06Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over meta-analysis-based methods as heterogeneity increases.
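For reference, the standard inverse-propensity-score (IPW) estimate of the average treatment effect, plus one naive way of pooling it across heterogeneous sites, can be sketched as follows; the pooling rule (sample-size weighting) is illustrative and is not the paper's collaborative estimator.

```python
import numpy as np

def ipw_ate(y, t, e):
    """IPW estimate of the average treatment effect on one site:
    y outcomes, t binary treatments, e propensity scores P(t=1 | x)."""
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

def pooled_ate(sites):
    """Naive pooling across sites, weighted by sample size (illustrative;
    the paper's collaborative weighting handles heterogeneity better)."""
    estimates = [ipw_ate(y, t, e) for (y, t, e) in sites]
    sizes = [len(y) for (y, _, _) in sites]
    return float(np.average(estimates, weights=sizes))
```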
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
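A rough sketch of the compound idea, assuming K diagonal-Gaussian components have already been fitted to the features (e.g., by EM); the paper's exact responsibility computation and training procedure may differ.

```python
import numpy as np

def compound_bn(x, mus, vars_, pis, eps=1e-5):
    """Normalize (N, D) features x with a K-component Gaussian mixture
    (mus, vars_: (K, D); pis: (K,)) instead of a single mean/variance."""
    # Per-component diagonal-Gaussian log densities, shape (N, K).
    log_p = (np.log(pis)[None, :]
             - 0.5 * np.sum(np.log(2 * np.pi * vars_), axis=1)[None, :]
             - 0.5 * np.sum((x[:, None, :] - mus[None]) ** 2 / vars_[None],
                            axis=2))
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)          # soft assignments
    # Responsibility-weighted statistics: tail samples are normalized by
    # the components they actually belong to, not by head-dominated stats.
    mean = r @ mus
    var = r @ vars_                            # ignores between-component spread
    return (x - mean) / np.sqrt(var + eps)
```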
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Recursive Inference for Variational Autoencoders [34.552283758419506]
Inference networks of traditional Variational Autoencoders (VAEs) are typically amortized, which can leave a gap between the amortized posterior and the optimal per-example posterior.
Recent semi-amortized approaches were proposed to address this drawback.
We introduce an accurate amortized inference algorithm.
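The semi-amortized idea mentioned above can be sketched as follows: start from the encoder's amortized posterior parameters and refine them per example by gradient steps on the ELBO. This is a generic PyTorch sketch assuming a Bernoulli decoder ending in a sigmoid; it is not the paper's recursive-inference algorithm.

```python
import torch
import torch.nn.functional as F

def refine_posterior(decoder, x, mu0, logvar0, steps=10, lr=0.05):
    """Refine amortized Gaussian posterior params (mu, logvar) for one
    example x by stochastic gradient descent on the negative ELBO."""
    mu = mu0.clone().detach().requires_grad_(True)
    logvar = logvar0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        std = (0.5 * logvar).exp()
        z = mu + std * torch.randn_like(std)     # reparameterization trick
        recon = decoder(z)                       # probabilities in (0, 1)
        rec = F.binary_cross_entropy(recon, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        (rec + kl).backward()                    # negative ELBO
        opt.step()
    return mu.detach(), logvar.detach()
```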
arXiv Detail & Related papers (2020-11-17T10:22:12Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
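For context, a generic consensus-ADMM round in which each node solves its primal subproblem inexactly with mini-batch SGD (shown here for a least-squares loss) might look like the sketch below; the coding scheme and the paper's exact stochastic variant are omitted, and all names are illustrative.

```python
import numpy as np

def minibatch_admm_round(nodes, z, rho=1.0, eta=0.05, sgd_steps=5,
                         batch=32, rng=None):
    """One consensus-ADMM round. Each node dict holds 'x' (primal),
    'u' (scaled dual), and 'data' = (A, b) with len(b) >= batch."""
    rng = rng or np.random.default_rng()
    for node in nodes:
        A, b = node['data']
        for _ in range(sgd_steps):               # inexact x-update via SGD
            idx = rng.choice(len(b), size=batch, replace=False)
            g = A[idx].T @ (A[idx] @ node['x'] - b[idx]) / batch
            g += rho * (node['x'] - z + node['u'])   # augmented-Lagrangian pull
            node['x'] -= eta * g
    z = np.mean([n['x'] + n['u'] for n in nodes], axis=0)  # z-update
    for node in nodes:
        node['u'] += node['x'] - z               # dual update
    return z
```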
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Distributed Optimization, Averaging via ADMM, and Network Topology [0.0]
We study the connection between network topology and convergence rates for different algorithms on a real-world sensor localization problem.
We also show interesting connections between ADMM and lifted Markov chains, besides providing an explicit characterization of its convergence.
arXiv Detail & Related papers (2020-09-05T21:44:39Z)
- Towards Plausible Differentially Private ADMM Based Distributed Machine Learning [27.730535587906168]
We propose novel Plausible differentially Private ADMM algorithms: PP-ADMM and its improved variant IPP-ADMM.
Under the same privacy guarantee, the proposed algorithms are superior to the state of the art in terms of model accuracy and convergence rate.
arXiv Detail & Related papers (2020-08-11T03:40:55Z)
- Jointly Optimizing Dataset Size and Local Updates in Heterogeneous Mobile Edge Learning [11.191719032853527]
This paper proposes to maximize the accuracy of a distributed machine learning (ML) model trained on learners connected via the resource-constrained wireless edge.
We jointly optimize the number of local/global updates and the task size allocation to minimize the loss while taking into account heterogeneous communication and computation capabilities of each learner.
arXiv Detail & Related papers (2020-06-12T18:19:20Z)
- Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties [12.068153197381575]
We propose a novel variational family that allows for retaining covariances between latent processes while achieving fast convergence.
We provide an efficient implementation of our new approach and apply it to several benchmark datasets.
It yields excellent results and strikes a better balance between accuracy and calibrated uncertainty estimates than its state-of-the-art alternatives.
arXiv Detail & Related papers (2020-05-22T11:10:59Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale learning with deep neural networks.
Our algorithm requires far fewer communication rounds than baseline methods while retaining theoretical convergence guarantees.
Our experiments on several datasets demonstrate the algorithm's effectiveness and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Channel Assignment in Uplink Wireless Communication using Machine Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to the high computational complexity of this problem, machine learning approaches are employed to obtain computationally efficient solutions.
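To make the problem concrete: in a simplified interference-free variant where each channel serves at most one user, the integer program can be solved exactly with the Hungarian algorithm, as in the sketch below; the general uplink problem with inter-user interference is much harder, which motivates the learned approach. The rate model and names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_channels(gains, power=1.0, noise=1e-3):
    """One-to-one sum-rate-maximizing assignment for an interference-free
    model. gains: (n_users, n_channels) channel power gains."""
    rates = np.log2(1.0 + power * gains / noise)   # per (user, channel) rate
    # Hungarian algorithm: negate because linear_sum_assignment minimizes.
    users, channels = linear_sum_assignment(-rates)
    return list(zip(users, channels)), float(rates[users, channels].sum())
```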
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.