Federated Two-stage Learning with Sign-based Voting
- URL: http://arxiv.org/abs/2112.05687v1
- Date: Fri, 10 Dec 2021 17:31:23 GMT
- Title: Federated Two-stage Learning with Sign-based Voting
- Authors: Zichen Ma, Zihan Lu, Yu Lu, Wenye Li, Jinfeng Yi, Shuguang Cui
- Abstract summary: Federated learning is a distributed machine learning mechanism where local devices collaboratively train a shared global model.
Recent, increasingly large and deep machine learning models also make deployment in a federated environment more difficult.
In this paper, we design a two-stage learning framework that augments prototypical federated learning with a cut layer on devices.
- Score: 45.2715985913761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed machine learning mechanism where local
devices collaboratively train a shared global model under the orchestration of
a central server, while keeping all private data decentralized. In the system,
model parameters and their updates are transmitted instead of raw data, so
the communication bottleneck has become a key challenge. Moreover, recent larger
and deeper machine learning models also make deployment in a federated
environment more difficult. In this paper, we design a federated two-stage
learning framework that augments prototypical federated learning with a cut
layer on devices and uses sign-based stochastic gradient descent with the
majority vote method on model updates. The cut layer on devices learns
informative and low-dimensional representations of the raw data locally, which
helps reduce the number of global model parameters and prevents data leakage.
Sign-based SGD with the majority vote method for model updates also helps
alleviate the communication limitation. Empirically, we show that our system
is an efficient, privacy-preserving federated learning scheme suited to
general application scenarios.
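To make the on-device cut layer concrete, here is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the paper's exact architecture: the class names (DeviceEncoder, GlobalHead), layer sizes, and the 32-dimensional cut output are hypothetical. The point it shows is that the device-side layers end at the cut layer, so only a low-dimensional representation of the raw data ever reaches the shared global model.

```python
# Minimal sketch of a device-side cut layer (names and sizes are assumptions).
import torch
import torch.nn as nn

class DeviceEncoder(nn.Module):
    """Local layers up to the cut layer; these parameters stay on the device."""
    def __init__(self, in_dim=784, cut_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, cut_dim),  # cut layer: low-dimensional output
        )

    def forward(self, x):
        return self.net(x)

class GlobalHead(nn.Module):
    """Shared global model that only ever sees cut-layer representations."""
    def __init__(self, cut_dim=32, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(cut_dim, n_classes)

    def forward(self, z):
        return self.fc(z)

# On a device: raw data x never leaves; only the representation z crosses the cut.
encoder, head = DeviceEncoder(), GlobalHead()
x = torch.randn(8, 784)                       # hypothetical local mini-batch
z = encoder(x)                                # informative, low-dimensional
loss = nn.functional.cross_entropy(head(z), torch.randint(0, 10, (8,)))
loss.backward()
```

Because the shared global model operates on cut_dim-sized inputs rather than raw samples, its parameter count shrinks and raw data is never transmitted, matching the two benefits the abstract claims.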
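The sign-based SGD with majority vote step can likewise be sketched in a few lines, following the standard signSGD-with-majority-vote recipe; the function names below and the use of a single flat parameter tensor are simplifying assumptions, and the paper's exact protocol may differ. Each device sends one bit per coordinate, and the server moves the model along the coordinate-wise majority direction.

```python
# Minimal sketch of sign-based SGD with majority voting (simplified, one tensor).
import torch

def client_message(grad: torch.Tensor) -> torch.Tensor:
    """Device transmits only the sign of its local gradient (1 bit/coordinate)."""
    return torch.sign(grad)

def majority_vote(signs: list) -> torch.Tensor:
    """Server takes a coordinate-wise majority vote over the received signs."""
    return torch.sign(torch.stack(signs).sum(dim=0))

def server_step(params: torch.Tensor, vote: torch.Tensor, lr: float = 0.01) -> torch.Tensor:
    """Descend along the majority-vote direction."""
    return params - lr * vote

# One toy round with three devices (random gradients stand in for real ones).
params = torch.zeros(5)
local_grads = [torch.randn(5) for _ in range(3)]
vote = majority_vote([client_message(g) for g in local_grads])
params = server_step(params, vote)
```

Since only signs travel in either direction, each round costs roughly one bit per parameter per device, which is the communication saving the abstract refers to.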
Related papers
- Federated Learning with MMD-based Early Stopping for Adaptive GNSS Interference Classification [4.674584508653125]
Federated learning (FL) enables multiple devices to collaboratively train a global model while maintaining data on local servers.
We propose an FL approach using few-shot learning and aggregation of the model weights on a global server.
An exemplary application of FL is orchestrating machine learning models along highways for interference classification based on snapshots from global navigation satellite system (GNSS) receivers.
arXiv Detail & Related papers (2024-10-21T06:43:04Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a class-prototype similarity distillation method in a federated framework that aligns the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Federated Learning via Plurality Vote [38.778944321534084]
Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy.
Recent studies have tackled various challenges in federated learning.
We propose a new scheme named federated learning via plurality vote (FedVote).
arXiv Detail & Related papers (2021-10-06T18:16:22Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Continual Local Training for Better Initialization of Federated Models [14.289213162030816]
Federated learning (FL) refers to the learning paradigm that trains machine learning models directly in decentralized systems.
The popular FL algorithm Federated Averaging (FedAvg) suffers from weight divergence.
We propose the local continual training strategy to address this problem.
arXiv Detail & Related papers (2020-05-26T12:27:31Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.