Distributed Online Private Learning of Convex Nondecomposable Objectives
- URL: http://arxiv.org/abs/2206.07944v4
- Date: Thu, 3 Aug 2023 17:02:15 GMT
- Title: Distributed Online Private Learning of Convex Nondecomposable Objectives
- Authors: Huqiang Cheng, Xiaofeng Liao, and Huaqing Li
- Abstract summary: We deal with a general distributed constrained online learning problem with privacy over time-varying networks.
We propose two algorithms, named DPSDA-C and DPSDA-PS, under this framework.
Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}(\sqrt{T})$ when the objective function is convex.
- Score: 7.5585719185840485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We deal with a general distributed constrained online learning problem with privacy over time-varying networks, where a class of nondecomposable objectives is considered. Under this setting, each node controls only a part of the global decision, and the goal of all nodes is to collaboratively minimize the global cost over a time horizon $T$ while guaranteeing the security of the transmitted information. For such problems, we first design a novel generic algorithm framework, named DPSDA, for differentially private distributed online learning using the Laplace mechanism and stochastic variants of the dual averaging method. Note that in the dual updates, all nodes of DPSDA employ noise-corrupted gradients for greater generality. Then, we propose two algorithms, named DPSDA-C and DPSDA-PS, under this framework. In DPSDA-C, the nodes implement circulation-based communication in the primal updates to alleviate disagreements over time-varying undirected networks. For the extension to time-varying directed networks, the nodes in DPSDA-PS implement broadcast-based push-sum dynamics, which can achieve average consensus over arbitrary directed networks. Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}(\sqrt{T})$ when the objective function is convex, which matches the best utility achievable by cutting-edge algorithms. Finally, numerical experiments on both synthetic and real-world datasets verify the effectiveness of our algorithms.
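The abstract names the two building blocks of DPSDA without spelling them out, so here is a minimal, illustrative sketch of a Laplace-noised dual-averaging step of the kind the framework describes. This is not the authors' exact algorithm: the step-size schedule, noise scale, sensitivity, and projection set are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_noisy_gradient(grad, sensitivity, epsilon):
    """Perturb a (sub)gradient with Laplace noise calibrated to its sensitivity,
    the standard Laplace mechanism for differential privacy."""
    scale = sensitivity / epsilon
    return grad + rng.laplace(loc=0.0, scale=scale, size=grad.shape)

def dual_averaging_step(z, noisy_grad, t, radius=1.0):
    """Accumulate noisy gradients in the dual variable z, then map back to a
    primal decision; the l2-ball feasible set and 1/sqrt(t) damping are assumed."""
    z = z + noisy_grad                  # dual update with a noise-corrupted gradient
    x = -z / np.sqrt(t + 1)             # dual-averaging primal recovery
    norm = np.linalg.norm(x)
    if norm > radius:                   # projection onto the assumed feasible set
        x *= radius / norm
    return z, x

# Toy usage: one node privately minimizing ||x - 1||^2 over T = 100 rounds.
d = 5
z, x = np.zeros(d), np.zeros(d)
for t in range(100):
    grad = 2.0 * (x - 1.0)
    g_tilde = laplace_noisy_gradient(grad, sensitivity=1.0, epsilon=0.5)
    z, x = dual_averaging_step(z, g_tilde, t)
```

In the distributed setting, each node would run such an update on its own block of the global decision and exchange iterates with neighbors: via circulation-based communication in DPSDA-C, or, on directed graphs, via the push-sum dynamics of DPSDA-PS. A compact sketch of the push-sum averaging primitive follows; the graph and uniform mixing weights here are assumptions for the demo, not the paper's construction.

```python
import numpy as np

def push_sum(values, out_neighbors, iters=50):
    """Each node splits (x_i, w_i) equally among its out-neighbors plus itself;
    the ratio x_i / w_i converges to the average of the initial values on any
    strongly connected directed graph."""
    n = len(values)
    x = np.array(values, dtype=float)
    w = np.ones(n)
    for _ in range(iters):
        x_new, w_new = np.zeros(n), np.zeros(n)
        for i in range(n):
            targets = out_neighbors[i] + [i]   # broadcast includes a self-loop
            share = 1.0 / len(targets)
            for j in targets:
                x_new[j] += share * x[i]
                w_new[j] += share * w[i]
        x, w = x_new, w_new
    return x / w                               # per-node estimate of the average

# Example: 4 nodes on a directed ring; every node's estimate approaches 2.5.
print(push_sum([1.0, 2.0, 3.0, 4.0], {0: [1], 1: [2], 2: [3], 3: [0]}))
```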
Related papers
- Decentralized Multi-Task Online Convex Optimization Under Random Link
Failures [5.513958040574729]
We develop a robust decentralized saddle-point algorithm against random link failures with heterogeneous probabilities.
We extend our algorithm and analysis to the two-point bandit feedback scenario.
arXiv Detail & Related papers (2024-01-04T00:57:33Z)
- Network-GIANT: Fully distributed Newton-type optimization via harmonic Hessian consensus [2.8617826964327113]
We introduce a Newton-type fully distributed optimization algorithm, Network-GIANT, based on GIANT.
We prove that our algorithm guarantees semi-global and exponential convergence to the exact solution over the network assuming strongly convex and smooth loss functions.
We provide empirical evidence of the superior convergence performance of Network-GIANT over other state-of-art distributed learning algorithms such as Network-DANE and Newton-Raphson Consensus.
arXiv Detail & Related papers (2023-05-13T11:42:40Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Decentralized Multi-Task Stochastic Optimization With Compressed Communications [22.31884634659446]
The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes.
We show that the deviation from the global minimum value and the violations of the constraints are upper-bounded by $\mathcal{O}(T^{-\frac{1}{2}})$ and $\mathcal{O}(T^{-\frac{1}{4}})$, respectively.
arXiv Detail & Related papers (2021-12-23T05:54:42Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network as the predictive model.
Our method requires far fewer communication rounds than naive parallel counterparts while retaining theoretical guarantees.
Our experiments on several datasets demonstrate the effectiveness of our method and corroborate our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Fully Asynchronous Policy Evaluation in Distributed Reinforcement Learning over Networks [14.636457985379746]
This paper proposes a fully asynchronous scheme for the policy evaluation problem of distributed reinforcement learning (DisRL) over directed peer-to-peer networks.
Without waiting for any other node of the network, each node can locally update its value function at any time by using (possibly delayed) information from its neighbors.
arXiv Detail & Related papers (2020-03-01T08:12:08Z)