Two-Bit Aggregation for Communication Efficient and Differentially
Private Federated Learning
- URL: http://arxiv.org/abs/2110.03017v1
- Date: Wed, 6 Oct 2021 19:03:58 GMT
- Title: Two-Bit Aggregation for Communication Efficient and Differentially
Private Federated Learning
- Authors: Mohammad Aghapour and Aidin Ferdowsi and Walid Saad
- Abstract summary: In federated learning (FL), a machine learning model is trained on multiple nodes in a decentralized manner, while keeping the data local and not shared with other nodes.
The information sent from the nodes to the server may reveal some details about each node's local data, thus raising privacy concerns.
A novel two-bit aggregation algorithm is proposed with guaranteed differential privacy and reduced uplink communication overhead.
- Score: 79.66767935077925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning (FL), a machine learning model is trained on multiple
nodes in a decentralized manner, while keeping the data local and not shared
with other nodes. However, FL requires the nodes to send information about the
model parameters to a central server for aggregation. This information may
reveal details about each node's local data, thus raising privacy concerns.
Furthermore, the repetitive uplink transmission from the nodes to the server
may cause communication overhead and network congestion. To address these two
challenges,
in this paper, a novel two-bit aggregation algorithm is proposed with
guaranteed differential privacy and reduced uplink communication overhead.
Extensive experiments demonstrate that the proposed aggregation algorithm can
achieve the same performance as state-of-the-art approaches on datasets such as
MNIST, Fashion MNIST, CIFAR-10, and CIFAR-100, while ensuring differential
privacy and improving communication efficiency.
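To make the mechanism concrete, here is a minimal sketch of a two-bit aggregation step: each coordinate of a local update is quantized to one of four levels (two bits) and then passed through randomized response before transmission. The quantizer, codebook, and keep-probability below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def two_bit_quantize(update, eps, rng):
    # Map each coordinate to one of 4 levels (2 bits) using its sign and
    # magnitude, then apply randomized response for local DP.
    # NOTE: illustrative assumption, not the paper's exact mechanism.
    thresh = np.mean(np.abs(update))
    levels = np.where(update >= 0,
                      np.where(np.abs(update) >= thresh, 3, 2),
                      np.where(np.abs(update) >= thresh, 0, 1))
    p = np.exp(eps) / (np.exp(eps) + 3)  # keep-probability for eps-LDP over 4 symbols
    keep = rng.random(levels.shape) < p
    return np.where(keep, levels, rng.integers(0, 4, size=levels.shape))

def server_aggregate(all_levels, scale):
    # Decode the 2-bit symbols with a fixed codebook and average.
    codebook = np.array([-1.5, -0.5, 0.5, 1.5]) * scale
    return codebook[np.stack(all_levels)].mean(axis=0)

rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(5)]      # toy local updates
symbols = [two_bit_quantize(u, eps=2.0, rng=rng) for u in updates]
scale = np.mean([np.abs(u).mean() for u in updates])   # shared scale (assumed known)
print(server_aggregate(symbols, scale))
```

The keep-probability exp(eps) / (exp(eps) + 3) is the standard randomized-response rate for eps-local differential privacy over four symbols; the server never sees anything but the two-bit codes.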
Related papers
- Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
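A minimal sketch of the semi-decentralized step, assuming a fixed mixing weight and random intermittent links (both illustrative choices):

```python
import numpy as np

def local_consensus(values, adjacency, weight=0.5):
    # One round of neighbor averaging: each node mixes its own vector with
    # the mean of its reachable neighbors (mixing weight is an assumption).
    out = values.copy()
    for i in range(len(values)):
        nbrs = [j for j in range(len(values)) if adjacency[i, j] and j != i]
        if nbrs:
            out[i] = (1 - weight) * values[i] + weight * values[nbrs].mean(axis=0)
    return out

rng = np.random.default_rng(1)
vals = rng.normal(size=(6, 3))            # 6 nodes, 3-dim local vectors
adj = rng.random((6, 6)) < 0.5            # random intermittent connectivity
adj = adj | adj.T                         # links are bidirectional
relayed = local_consensus(vals, adj)      # nodes relay consensus values
print("server estimate:", relayed.mean(axis=0))
```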
arXiv Detail & Related papers (2024-06-06T06:12:15Z)
- Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
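To illustrate the one-bit compressive-sensing ingredient (the iADM solver itself is not reproduced here), the sketch below takes sign-only random measurements of a sparse update and recovers its direction with a naive back-projection estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 50, 200                                 # parameter dim, number of 1-bit measurements
x = np.zeros(d); x[:5] = rng.normal(size=5)    # sparse model update (toy)
x /= np.linalg.norm(x)

A = rng.normal(size=(m, d)) / np.sqrt(m)       # Gaussian sensing matrix
y = np.sign(A @ x)                             # node transmits only m bits

# Simple (non-iADM) estimator: back-projection of the sign measurements.
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)
print("cosine similarity:", float(x @ x_hat))
```

One-bit measurements discard scale, so only the direction of the update can be recovered, hence the normalization of both vectors.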
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
- Distributed Learning over Networks with Graph-Attention-Based
Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as a node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited.
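A toy sketch of this decomposition, assuming simple dot-product attention over neighbors' node-specific parameters (GATTA's actual scoring and combination rules may differ):

```python
import numpy as np

def attention_weights(self_feat, nbr_feats):
    # Softmax over dot-product scores between a node's features and its
    # neighbors' features (simplified scoring; GATTA's may differ).
    scores = nbr_feats @ self_feat
    e = np.exp(scores - scores.max())
    return e / e.sum()

rng = np.random.default_rng(3)
global_part = rng.normal(size=8)          # shared across all agents
node_parts = rng.normal(size=(4, 8))      # node-specific parameters = node features
alpha = attention_weights(node_parts[0], node_parts[1:])
# Personalized model for agent 0: global part plus an attention-weighted
# combination of the neighbors' node-specific parts.
personalized = global_part + alpha @ node_parts[1:]
print(alpha.round(3), personalized.shape)
```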
arXiv Detail & Related papers (2023-05-22T13:48:30Z)
- Collaborative Mean Estimation over Intermittently Connected Networks
with Peer-To-Peer Privacy [86.61829236732744]
This work considers the problem of Distributed Mean Estimation (DME) over networks with intermittent connectivity.
The goal is to learn a global statistic over the data samples localized across distributed nodes with the help of a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the additional data sharing among nodes.
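A minimal sketch of the privacy side of that tradeoff, assuming each node adds local Gaussian noise before any peer sees its value (the paper's actual relaying and privacy scheme may differ):

```python
import numpy as np

def privatize_then_relay(values, adjacency, sigma, rng):
    # Each node perturbs its vector with Gaussian noise BEFORE any peer can
    # see it, so collaboration only exposes a noisy view of local data.
    noisy = values + rng.normal(scale=sigma, size=values.shape)
    relayed = []
    for i in range(len(values)):
        heard = np.append(np.flatnonzero(adjacency[i]), i)  # neighbors + self
        relayed.append(noisy[heard].mean(axis=0))           # relay local average
    return np.stack(relayed)

rng = np.random.default_rng(4)
vals = rng.normal(size=(5, 2))                  # 5 nodes, 2-dim data vectors
adj = rng.random((5, 5)) < 0.4
adj = adj | adj.T
np.fill_diagonal(adj, False)                    # no self-loops in the graph
est = privatize_then_relay(vals, adj, sigma=0.3, rng=rng)
print("server mean estimate:", est.mean(axis=0))
```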
arXiv Detail & Related papers (2023-02-28T19:17:03Z)
- Communication-Efficient Federated Learning With Data and Client
Heterogeneity [22.432529149142976]
Federated Learning (FL) enables large-scale distributed training of machine learning models.
However, executing FL at scale comes with inherent practical challenges.
We present the first variant of the classic federated averaging (FedAvg) algorithm that accounts for both data and client heterogeneity.
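For reference, a minimal sketch of the classic FedAvg round that such variants build on; the heterogeneity-aware modifications are not shown:

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    # One round of classic FedAvg: each client runs a few local gradient
    # steps on a least-squares objective, then the server takes a
    # dataset-size-weighted average of the resulting weights.
    new_weights = []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        new_weights.append(w)
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(new_weights, axis=0, weights=sizes)

rng = np.random.default_rng(5)
true_w = rng.normal(size=3)
clients = []
for _ in range(4):                       # toy linear-regression clients
    X = rng.normal(size=(20, 3))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=20)))
w = np.zeros(3)
for _ in range(30):
    w = fedavg_round(w, clients)
print("error:", np.linalg.norm(w - true_w))
```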
arXiv Detail & Related papers (2022-06-20T22:39:39Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning
via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
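A toy sketch of the personalized-sparse-mask idea: each client keeps only the parameters selected by a binary mask, with density matched to its compute budget. Dis-PFL learns and evolves its masks during training, so the random masks below are a simplification:

```python
import numpy as np

def make_mask(dim, density, rng):
    # Random personalized binary mask at a target density.
    # NOTE: Dis-PFL learns/evolves its masks; random choice is a stand-in.
    k = int(density * dim)
    mask = np.zeros(dim)
    mask[rng.choice(dim, size=k, replace=False)] = 1.0
    return mask

rng = np.random.default_rng(6)
dim = 100
global_w = rng.normal(size=dim)
# A weaker device can use a lower density to cut compute and communication.
masks = [make_mask(dim, d, rng) for d in (0.1, 0.3, 0.5)]
local_models = [global_w * m for m in masks]           # sparse local models
print([int(m.sum()) for m in masks])                   # nonzeros stored/sent
```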
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Variational Co-embedding Learning for Attributed Network Clustering [30.7006907516984]
Recent works for attributed network clustering utilize graph convolution to obtain node embeddings and simultaneously perform clustering assignments on the embedding space.
We propose a variational co-embedding learning model for attributed network clustering (ANC).
ANC is composed of dual variational auto-encoders to simultaneously embed nodes and attributes.
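A compact sketch of the dual-encoder idea, assuming Gaussian encoders with the reparameterization trick and random weights standing in for trained parameters (decoders and the clustering objective are omitted):

```python
import numpy as np

def encode(x, W_mu, W_logvar, rng):
    # Gaussian encoder with the reparameterization trick:
    # z = mu + std * noise, so sampling stays differentiable in training.
    mu, logvar = x @ W_mu, x @ W_logvar
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

rng = np.random.default_rng(7)
n_nodes, n_attrs, dim = 6, 4, 3
X = rng.random((n_nodes, n_attrs))            # node-attribute matrix
# Dual encoders: one embeds nodes (rows of X), one embeds attributes
# (columns of X); the weights here are random stand-ins.
z_nodes = encode(X, rng.normal(size=(n_attrs, dim)),
                 rng.normal(size=(n_attrs, dim)), rng)
z_attrs = encode(X.T, rng.normal(size=(n_nodes, dim)),
                 rng.normal(size=(n_nodes, dim)), rng)
print(z_nodes.shape, z_attrs.shape)           # (6, 3) and (4, 3)
```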
arXiv Detail & Related papers (2021-04-15T08:11:47Z)
- Consensus Driven Learning [0.0]
We propose a new method of distributed, decentralized learning that allows a network of nodes to coordinate their training using asynchronous updates over an unreliable network.
This is achieved by taking inspiration from Distributed Averaging Consensus algorithms to coordinate the various nodes.
We show that our coordination method allows models to be learned on highly biased datasets, and in the presence of intermittent communication failure.
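A minimal sketch of distributed-averaging-style coordination over an unreliable network: random node pairs average their parameters whenever the link between them happens to be up. The pairing rule and link-failure model are illustrative assumptions:

```python
import numpy as np

def gossip_step(weights, rng, p_link=0.7):
    # One asynchronous gossip update: a random pair of nodes averages
    # their parameters if the (unreliable) link between them is up.
    i, j = rng.choice(len(weights), size=2, replace=False)
    if rng.random() < p_link:                  # link may fail intermittently
        avg = 0.5 * (weights[i] + weights[j])
        weights[i], weights[j] = avg, avg.copy()
    return weights

rng = np.random.default_rng(8)
weights = [rng.normal(size=4) for _ in range(5)]   # diverged local models
for _ in range(200):
    weights = gossip_step(weights, rng)
spread = max(np.linalg.norm(w - weights[0]) for w in weights)
print("disagreement after gossip:", spread)
```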
arXiv Detail & Related papers (2020-05-20T18:24:19Z)