A Graph Federated Architecture with Privacy Preserving Learning
- URL: http://arxiv.org/abs/2104.13215v1
- Date: Mon, 26 Apr 2021 09:51:24 GMT
- Title: A Graph Federated Architecture with Privacy Preserving Learning
- Authors: Elsa Rizk and Ali H. Sayed
- Abstract summary: Federated learning involves a central processor that works with multiple agents to find a global model.
The current architecture of a server connected to multiple clients is highly sensitive to communication failures and computational overloads at the server.
We use cryptographic and differential privacy concepts to privatize the federated learning algorithm that we extend to the graph structure.
- Score: 48.24121036612076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning involves a central processor that works with multiple
agents to find a global model. The process consists of repeatedly exchanging
estimates, which results in the diffusion of information pertaining to the
local private data. Such a scheme can be inconvenient when dealing with
sensitive data, and therefore, there is a need for the privatization of the
algorithms. Furthermore, the current architecture of a server connected to
multiple clients is highly sensitive to communication failures and
computational overloads at the server. Thus in this work, we develop a private
multi-server federated learning scheme, which we call graph federated learning.
We use cryptographic and differential privacy concepts to privatize the
federated learning algorithm that we extend to the graph structure. We study
the effect of privatization on the performance of the learning algorithm for
general private schemes that can be modeled as additive noise. We show, under
convexity and Lipschitz conditions, that the privatized process matches the
performance of the non-private algorithm, even when we increase the noise
variance.
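The additive-noise model of privatization described above can be illustrated with a minimal sketch: each client perturbs its local estimate before sharing, and the server averages the noisy models. The least-squares objective, client setup, and noise level below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    # One gradient step on a local least-squares objective.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def privatized_fedavg(clients, w, rounds=50, noise_std=0.05):
    # Each round: clients update locally, add zero-mean Gaussian noise
    # to their estimates before sharing, and the server averages the
    # noisy models (the "additive noise" privatization model).
    for _ in range(rounds):
        noisy = []
        for X, y in clients:
            w_local = local_sgd_step(w, X, y)
            noisy.append(w_local + noise_std * rng.normal(size=w.shape))
        w = np.mean(noisy, axis=0)
    return w
```

Because the noise is zero-mean and averaged across clients, its effect on the aggregate shrinks with the number of clients, which is one intuition behind the claim that the privatized process can match the non-private one.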
Related papers
- Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs [2.1271873498506038]
We propose a locally differentially private, gradient-tracking-based distributed online learning algorithm.
We prove that the proposed algorithm converges in mean square to the exact optimal solution while ensuring rigorous local differential privacy.
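As a rough illustration of the gradient-tracking template that such algorithms privatize, here is a sketch on an undirected graph with a doubly stochastic mixing matrix, where agents share noisy copies of their state and gradient tracker. The directed-graph machinery and the rigorous local-privacy accounting of the paper are omitted; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def dp_gradient_tracking(grads, W, dim, steps=300, lr=0.05, noise_std=0.0):
    # grads[i](x) returns agent i's local gradient; W is a doubly
    # stochastic mixing matrix. Each agent keeps a state x_i and a
    # tracker y_i estimating the average gradient; both are perturbed
    # with Gaussian noise before being mixed with neighbors.
    n = len(grads)
    x = np.zeros((n, dim))
    y = np.array([g(x[i]) for i, g in enumerate(grads)])
    for _ in range(steps):
        g_old = np.array([g(x[i]) for i, g in enumerate(grads)])
        x_noisy = x + noise_std * rng.normal(size=x.shape)
        y_noisy = y + noise_std * rng.normal(size=y.shape)
        x = W @ x_noisy - lr * y
        g_new = np.array([g(x[i]) for i, g in enumerate(grads)])
        y = W @ y_noisy + g_new - g_old
    return x
```

With `noise_std=0` this reduces to standard gradient tracking, where every agent converges to the minimizer of the sum of the local objectives.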
arXiv Detail & Related papers (2023-10-24T18:15:25Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Randomized Quantization is All You Need for Differential Privacy in Federated Learning [1.9785872350085876]
We consider an approach to federated learning that combines quantization and differential privacy.
We develop a new algorithm called the Randomized Quantization Mechanism (RQM).
We empirically study the performance of our algorithm and demonstrate that compared to previous work it yields improved privacy-accuracy trade-offs.
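The paper's specific RQM construction is not reproduced here, but a generic unbiased stochastic quantizer, a common building block for combining quantization with privacy noise, can be sketched as follows (grid size and range are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_quantize(x, levels=16, lo=-1.0, hi=1.0):
    # Map each coordinate to one of `levels` grid points in [lo, hi],
    # rounding up or down at random with probability equal to the
    # fractional part, so that the quantizer is unbiased: E[q(x)] = x.
    step = (hi - lo) / (levels - 1)
    scaled = (np.clip(x, lo, hi) - lo) / step
    floor = np.floor(scaled)
    prob_up = scaled - floor          # P(round up) = fractional part
    up = rng.random(np.shape(x)) < prob_up
    return lo + (floor + up) * step
```

Unbiasedness matters because the quantization error then averages out across clients and rounds, while the discretization itself both compresses communication and injects the randomness that privacy mechanisms exploit.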
arXiv Detail & Related papers (2023-06-20T21:54:13Z)
- Privatized Graph Federated Learning [57.14673504239551]
We introduce graph federated learning, which consists of multiple units connected by a graph.
We show how graph homomorphic perturbations can be used to ensure the algorithm is differentially private.
arXiv Detail & Related papers (2022-03-14T13:48:23Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
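The hierarchical view can be sketched as hard EM on a toy least-squares problem: in the E-step each client computes a MAP estimate of its local parameters under a Gaussian prior centered at the server model, and in the M-step the server re-centers the prior at the client average. The prior precision, local solver, and data below are illustrative assumptions.

```python
import numpy as np

def hard_em_fedavg(clients, mu, rounds=20, prior_prec=1.0, lr=0.1, local_steps=10):
    # E-step (hard): each client approximately minimizes its local loss
    # plus a Gaussian prior penalty centered at the server model mu.
    # M-step: the server sets mu to the average of the client MAP estimates.
    for _ in range(rounds):
        maps = []
        for X, y in clients:
            w = mu.copy()
            for _ in range(local_steps):
                grad = X.T @ (X @ w - y) / len(y) + prior_prec * (w - mu)
                w -= lr * grad
            maps.append(w)
        mu = np.mean(maps, axis=0)
    return mu
```

The prior penalty is what keeps client solutions anchored to the server model; letting its precision grow recovers ever-shorter local excursions, which is the sense in which this hierarchy connects to FedAvg-style local updating.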
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
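A single round of clipped DP-FedAvg can be sketched as follows: each client's model delta is norm-clipped, the clipped deltas are averaged, and Gaussian noise calibrated to the clipping bound is added for client-level privacy. The clipping norm and noise multiplier are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def clip_update(delta, clip_norm):
    # Scale the client update down so its l2 norm is at most clip_norm.
    norm = np.linalg.norm(delta)
    return delta * min(1.0, clip_norm / max(norm, 1e-12))

def dp_fedavg_round(w, client_updates, clip_norm=1.0, noise_mult=0.5):
    # Clip each client's model delta (bounding any one client's
    # influence), average, then add Gaussian noise whose scale is
    # set by the clipping bound -- the client-level DP recipe.
    clipped = [clip_update(d, clip_norm) for d in client_updates]
    avg = np.mean(clipped, axis=0)
    noise_std = noise_mult * clip_norm / len(client_updates)
    return w + avg + noise_std * rng.normal(size=w.shape)
```

The clipping bias discussed in the paper arises exactly here: when client updates are heterogeneous, clipping shrinks large updates more than small ones, so the clipped average can point away from the true average direction.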
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Learning Differentially Private Mechanisms [13.40946759638048]
We propose a technique for automatically learning an accurate and differentially private version of a given non-private program.
We demonstrate that our approach is able to learn foundational algorithms from the differential privacy literature and significantly outperforms natural program synthesis baselines.
arXiv Detail & Related papers (2021-01-04T13:33:57Z)
- Differentially Private Secure Multi-Party Computation for Federated Learning in Financial Applications [5.50791468454604]
Federated learning enables a population of clients, working with a trusted server, to collaboratively learn a shared machine learning model.
This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters.
We present a privacy-preserving federated learning protocol to a non-specialist audience, demonstrate it using logistic regression on a real-world credit card fraud data set, and evaluate it using an open-source simulation platform.
arXiv Detail & Related papers (2020-10-12T17:16:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.