Federated Learning via Plurality Vote
- URL: http://arxiv.org/abs/2110.02998v1
- Date: Wed, 6 Oct 2021 18:16:22 GMT
- Title: Federated Learning via Plurality Vote
- Authors: Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai
- Abstract summary: Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy.
Recent studies have tackled various challenges in federated learning.
We propose a new scheme named federated learning via plurality vote (FedVote).
- Score: 38.778944321534084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning allows collaborative workers to solve a machine learning
problem while preserving data privacy. Recent studies have tackled various
challenges in federated learning, but the joint optimization of communication
overhead, learning reliability, and deployment efficiency is still an open
problem. To this end, we propose a new scheme named federated learning via
plurality vote (FedVote). In each communication round of FedVote, workers
transmit binary or ternary weights to the server with low communication
overhead. The model parameters are aggregated via weighted voting to enhance
the resilience against Byzantine attacks. When deployed for inference, the
model with binary or ternary weights is resource-friendly to edge devices. We
show that our proposed method can reduce the quantization error and converge
faster than methods that directly quantize the model updates.
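As a rough illustration of the server-side aggregation described in the abstract, the sketch below performs per-coordinate weighted plurality voting over ternary worker weights. The per-worker vote weights and the tie-breaking rule are illustrative assumptions, not the paper's exact procedure.
```python
# A minimal sketch of per-coordinate weighted plurality voting over ternary
# worker weights. The per-worker vote weights and the tie-breaking rule
# (argmax keeps the first candidate on a tie) are illustrative assumptions.
import numpy as np

def plurality_vote(ternary_weights, vote_weights):
    """Aggregate worker weight vectors with entries in {-1, 0, +1}.

    ternary_weights: (num_workers, num_params) array of ternary weights.
    vote_weights:    (num_workers,) nonnegative per-worker voting weights.
    Returns the per-coordinate value that receives the largest weighted vote.
    """
    candidates = np.array([-1, 0, 1])
    # Weighted vote mass for each candidate value at every coordinate.
    scores = np.stack([
        ((ternary_weights == c) * vote_weights[:, None]).sum(axis=0)
        for c in candidates
    ])  # shape: (3, num_params)
    return candidates[np.argmax(scores, axis=0)]

# Toy usage: three workers, the third one down-weighted (e.g., low reputation).
W = np.array([[ 1, -1, 0,  1],
              [ 1,  0, 0,  1],
              [-1, -1, 1, -1]])
print(plurality_vote(W, vote_weights=np.array([1.0, 1.0, 0.5])))
# -> [ 1 -1  0  1]
```
Because each coordinate is sent as one of three values, a worker's upload costs at most two bits per parameter, which is the source of the low communication overhead the abstract mentions.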
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Federated Unlearning via Active Forgetting [24.060724751342047]
We propose a novel federated unlearning framework based on incremental learning.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
arXiv Detail & Related papers (2023-07-07T03:07:26Z) - Federated Learning of Neural ODE Models with Different Iteration Counts [0.9444784653236158]
Federated learning is a distributed machine learning approach in which clients train models locally on their own data and upload only the trained results to a server, so that results are shared without uploading raw data.
In this paper, we utilize Neural ODE based models for federated learning.
We show that our approach can reduce communication size by up to 92.4% compared with a baseline ResNet model on the CIFAR-10 dataset.
arXiv Detail & Related papers (2022-08-19T17:57:32Z) - FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type
Method for Federated Learning [75.46959684676371]
We introduce a novel framework called FedNew in which there is no need to transmit Hessian information from clients to the parameter server (PS).
FedNew hides the gradient information, yielding a privacy-preserving approach compared with the existing state of the art.
arXiv Detail & Related papers (2022-06-17T15:21:39Z) - Federated Two-stage Learning with Sign-based Voting [45.2715985913761]
Federated learning is a distributed machine learning mechanism where local devices collaboratively train a shared global model.
Recent machine learning models, which are larger and deeper, are also more difficult to deploy in a federated environment.
In this paper, we design a two-stage learning framework that augments prototypical federated learning with a cut layer on devices.
arXiv Detail & Related papers (2021-12-10T17:31:23Z) - An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z) - Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of potentially malicious peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z) - Adaptive Federated Dropout: Improving Communication Efficiency and
Generalization for Federated Learning [6.982736900950362]
Federated Learning is a decentralized machine learning setting that enables multiple clients in different geographical locations to collaboratively learn a machine learning model.
Communication between the clients and the server is considered a main bottleneck in the convergence time of federated learning.
We propose and study Adaptive Federated Dropout (AFD), a novel technique to reduce the communication costs associated with federated learning.
arXiv Detail & Related papers (2020-11-08T18:41:44Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z) - Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution for privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
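To make the compression idea in the T-FedAvg entry above concrete, here is a minimal ternarization sketch for a model update. The threshold rule (0.7 times the mean absolute value, borrowed from ternary weight networks) and the per-tensor scale are illustrative assumptions, not the exact T-FedAvg protocol.
```python
# A minimal sketch of threshold-based ternarization of a model update,
# illustrating the kind of upstream/downstream compression T-FedAvg targets.
# The 0.7 * mean|w| threshold and the per-tensor scale are assumptions.
import numpy as np

def ternarize(update):
    """Map a float update vector to {-scale, 0, +scale}."""
    delta = 0.7 * np.mean(np.abs(update))    # ternarization threshold
    mask = np.abs(update) > delta            # coordinates that survive
    if not mask.any():
        return np.zeros_like(update)
    scale = np.mean(np.abs(update[mask]))    # per-tensor scaling factor
    return scale * np.sign(update) * mask

update = np.array([0.9, -0.05, 0.4, -0.8, 0.01])
print(ternarize(update))  # -> [ 0.7 -0.   0.7 -0.7  0. ]
```
Only the ternary signs and a single scale per tensor need to be transmitted, which is what cuts the communication cost relative to sending full-precision updates.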