Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs
- URL: http://arxiv.org/abs/2310.16105v2
- Date: Sun, 29 Oct 2023 17:34:15 GMT
- Title: Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs
- Authors: Ziqin Chen and Yongqiang Wang
- Abstract summary: We propose a locally differentially private gradient tracking based distributed online learning algorithm.
We prove that the proposed algorithm converges in mean square to the exact optimal solution while ensuring rigorous local differential privacy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed online learning has been proven extremely effective in solving
large-scale machine learning problems over streaming data. However, information
sharing between learners in distributed learning also raises concerns about the
potential leakage of individual learners' sensitive data. To mitigate this
risk, differential privacy, which is widely regarded as the "gold standard" for
privacy protection, has been employed in many existing results on
distributed online learning. However, these results often face a fundamental
tradeoff between learning accuracy and privacy. In this paper, we propose a
locally differentially private gradient tracking based distributed online
learning algorithm that successfully circumvents this tradeoff. We prove that
the proposed algorithm converges in mean square to the exact optimal solution
while ensuring rigorous local differential privacy, with the cumulative privacy
budget guaranteed to be finite even when the number of iterations tends to
infinity. The algorithm is applicable even when the communication graph among
learners is directed. To the best of our knowledge, this is the first result
that simultaneously ensures learning accuracy and rigorous local differential
privacy in distributed online learning over directed graphs. We evaluate our
algorithm's performance using multiple benchmark machine-learning
applications, including logistic regression on the "Mushrooms" dataset and
CNN-based image classification on the "MNIST" and "CIFAR-10" datasets. The
experimental results confirm that the proposed algorithm outperforms
existing counterparts in both training and testing accuracy.
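The abstract's core recipe, gradient tracking over a directed graph with decaying noise injected into shared messages, can be illustrated with a heavily simplified sketch. This is not the paper's actual algorithm: the ring topology, weight matrix, quadratic losses, and step-size and noise schedules below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): 4 learners on a directed ring,
# each holding a private quadratic loss f_i(x) = (x - a_i)^2, so the
# global optimum is mean(a) = 2.5.
n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])

def grad(i, x):
    return 2.0 * (x - a[i])

# Mixing weights for the directed ring i -> i+1 (self-loop + one out-edge).
# This circulant matrix happens to be doubly stochastic, which keeps the
# sketch simple; general directed graphs need push-sum-style corrections.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[(i + 1) % n, i] = 0.5

x = np.zeros(n)                                   # local model estimates
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for k in range(2000):
    gamma = 1.0 / (k + 10)    # diminishing step size
    b = 0.01 * 0.9 ** k       # geometrically decaying Laplace noise scale:
                              # summable per-step budgets are what keep a
                              # cumulative privacy budget finite as k grows
    # Model update: mix neighbors' states, step along the noisy tracker.
    x_new = W @ x - gamma * (y + rng.laplace(0.0, b, n))
    # Tracker update: mix, then add the local gradient increment plus noise.
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y = W @ y + (g_new - g_old) + rng.laplace(0.0, b, n)
    x = x_new
```

With this tiny noise schedule all four local estimates cluster near the optimum 2.5; the paper's contribution is ensuring such exact convergence under rigorous local differential privacy on general directed graphs, which this toy sketch does not establish.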
Related papers
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Online Distributed Learning with Quantized Finite-Time Coordination [0.4910937238451484]
In our setting, a set of agents needs to cooperatively train a learning model from streaming data.
We propose a distributed algorithm that relies on a quantized, finite-time coordination protocol.
We analyze the performance of the proposed algorithm in terms of the mean distance from the online solution.
arXiv Detail & Related papers (2023-07-13T08:36:15Z)
- Locally Differentially Private Distributed Online Learning with Guaranteed Optimality [1.800614371653704]
This paper proposes an approach that ensures both differential privacy and learning accuracy in distributed online learning.
While ensuring a diminishing expected instantaneous regret, the approach can simultaneously ensure a finite cumulative privacy budget.
To the best of our knowledge, this is the first algorithm that successfully ensures both rigorous local differential privacy and learning accuracy.
arXiv Detail & Related papers (2023-06-25T02:05:34Z)
- Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation [22.151404603413752]
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model.
Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution.
We develop a privacy preserving and communication efficient method in a FL framework with one-shot offline knowledge distillation.
arXiv Detail & Related papers (2022-09-10T05:20:31Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices)
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- A Graph Federated Architecture with Privacy Preserving Learning [48.24121036612076]
Federated learning involves a central processor that works with multiple agents to find a global model.
The current architecture of a server connected to multiple clients is highly sensitive to communication failures and computational overloads at the server.
We use cryptographic and differential privacy concepts to privatize the federated learning algorithm that we extend to the graph structure.
arXiv Detail & Related papers (2021-04-26T09:51:24Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based ALs are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances state-of-the-art.
arXiv Detail & Related papers (2020-04-28T15:35:20Z)
- Private Dataset Generation Using Privacy Preserving Collaborative Learning [0.0]
This work introduces a privacy preserving FedNN framework for training machine learning models at edge.
The simulation results using MNIST dataset indicates the effectiveness of the framework.
arXiv Detail & Related papers (2020-04-28T15:35:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.