Tram-FL: Routing-based Model Training for Decentralized Federated Learning
- URL: http://arxiv.org/abs/2308.04762v1
- Date: Wed, 9 Aug 2023 07:51:07 GMT
- Title: Tram-FL: Routing-based Model Training for Decentralized Federated Learning
- Authors: Kota Maejima, Takayuki Nishio, Asato Yamazaki, and Yuko Hara-Azumi
- Abstract summary: We propose Tram-FL, a novel DFL method, which progressively refines a global model by transferring it sequentially amongst nodes.
We also introduce a dynamic model routing algorithm for optimal route selection, aimed at enhancing model precision with minimal forwarding.
Our experiments using MNIST, CIFAR-10, and IMDb datasets demonstrate that Tram-FL with the proposed routing delivers high model accuracy under non-IID conditions.
- Score: 2.8558942410497066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In decentralized federated learning (DFL), substantial traffic from frequent
inter-node communication and non-independent and identically distributed
(non-IID) data challenge high-accuracy model acquisition. We propose Tram-FL,
a novel DFL method, which progressively refines a global model by transferring
it sequentially amongst nodes, rather than by exchanging and aggregating local
models. We also introduce a dynamic model routing algorithm for optimal route
selection, aimed at enhancing model precision with minimal forwarding. Our
experiments using MNIST, CIFAR-10, and IMDb datasets demonstrate that Tram-FL
with the proposed routing delivers high model accuracy under non-IID
conditions, outperforming baselines while reducing communication costs.
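The abstract describes two mechanisms: a single global model that travels from node to node and is refined by local training at each stop (rather than being aggregated from exchanged local models), and a dynamic routing rule that decides which node the model visits next. The toy sketch below illustrates one plausible reading of this flow; the routing heuristic (visit the node whose label distribution differs most, by KL divergence, from the data the model has already seen), the softmax-regression model, and all names are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of Tram-FL-style sequential training with dynamic routing.
# The routing heuristic, toy model, and names are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
NUM_NODES, NUM_CLASSES, DIM = 5, 3, 10

# Non-IID toy data: each node sees a label distribution skewed toward one class.
def make_node_data(favored_class, n=200):
    y = rng.choice(NUM_CLASSES, size=n, p=np.roll([0.8, 0.1, 0.1], favored_class))
    X = rng.normal(size=(n, DIM)) + y[:, None]  # class-dependent feature shift
    return X, y

nodes = [make_node_data(k % NUM_CLASSES) for k in range(NUM_NODES)]

def local_update(W, X, y, lr=0.05, epochs=1):
    """One node refines the travelling model on its own data (softmax regression)."""
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(NUM_CLASSES)[y]
        W -= lr * X.T @ (p - onehot) / len(y)
    return W

def label_hist(y):
    h = np.bincount(y, minlength=NUM_CLASSES).astype(float)
    return h / h.sum()

def pick_next(visited_hist, candidates):
    """Assumed routing rule: visit the node whose label distribution differs most
    (KL divergence) from what the model has already seen, to offset non-IID skew."""
    def kl(p, q, eps=1e-12):
        return float(np.sum(p * np.log((p + eps) / (q + eps))))
    return max(candidates, key=lambda i: kl(label_hist(nodes[i][1]), visited_hist))

W = np.zeros((DIM, NUM_CLASSES))         # the single travelling global model
seen = np.ones(NUM_CLASSES) / NUM_CLASSES
remaining = set(range(NUM_NODES))
route = []
while remaining:
    nxt = pick_next(seen, remaining)     # dynamic route selection
    X, y = nodes[nxt]
    W = local_update(W, X, y)            # model refined in place, no aggregation
    seen = 0.5 * seen + 0.5 * label_hist(y)
    remaining.discard(nxt)
    route.append(nxt)
print("visit order:", route)
```

In this reading, communication cost scales with the number of hops the single model makes rather than with a per-round exchange of every node's local model, which is consistent with the abstract's claim of reduced communication under non-IID conditions.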
Related papers
- Local Superior Soups: A Catalyst for Model Merging in Cross-Silo Federated Learning [33.88701368538447]
We propose an innovative model-based local training technique called "Local Superior Soups".
Our method enhances local training across different clients, encouraging the exploration of a connected low-loss basin.
We demonstrated its effectiveness and efficiency across diverse widely-used FL datasets.
arXiv Detail & Related papers (2024-10-31T06:20:17Z)
- One-Shot Sequential Federated Learning for Non-IID Data by Enhancing Local Model Diversity [26.09617693587105]
We improve the one-shot sequential federated learning for non-IID data by proposing a local model diversity-enhancing strategy.
Our method exhibits superior performance to existing one-shot PFL methods and achieves better accuracy compared with state-of-the-art one-shot SFL methods.
arXiv Detail & Related papers (2024-04-18T12:31:48Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning [35.59830784463706]
Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) collaboratively to train on decentralized data.
We propose a novel and efficient model-heterogeneous personalized federated learning framework based on LoRA tuning (pFedLoRA).
Experiments on two benchmark datasets demonstrate that pFedLoRA outperforms six state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-20T05:24:28Z)
- Communication Resources Constrained Hierarchical Federated Learning for End-to-End Autonomous Driving [67.78611905156808]
This paper proposes an optimization-based Communication Resource Constrained Hierarchical Federated Learning framework.
Results show that the proposed CRCHFL both accelerates the convergence rate and enhances the generalization of the federated learning-based autonomous driving model.
arXiv Detail & Related papers (2023-06-28T12:44:59Z)
- Boost Decentralized Federated Learning in Vehicular Networks by Diversifying Data Sources [16.342217928468227]
We propose the DFL-DDS (DFL with diversified Data Sources) algorithm to diversify data sources in DFL.
Specifically, each vehicle maintains a state vector to record the contribution weight of each data source to its model.
To boost the convergence of DFL, a vehicle tunes the aggregation weight of each data source by minimizing the KL divergence of its state vector (a hedged sketch of this weight-tuning step is given after this list).
arXiv Detail & Related papers (2022-09-05T04:01:41Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a wireless FL framework, which uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z)
- Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted.
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
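The last entry's core idea, quantizing both the downlink global model and the uplink local model updates before transmission, can be illustrated with a generic uniform quantizer. This is only a sketch: the symmetric 8-bit scheme, per-tensor scaling, and the names below are assumptions for illustration, not the LFL algorithm from that paper.

```python
# Generic uniform quantization of model tensors, illustrating the idea behind
# lossy FL (LFL): quantize the global model and the local updates before sending.
# The 8-bit symmetric quantizer below is an assumption, not the paper's design.
import numpy as np

def quantize(x, bits=8):
    """Map a float array to int8 values plus a per-tensor scale (symmetric, uniform)."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1) or 1.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Downlink: the server broadcasts a quantized global model.
global_model = np.random.randn(1000).astype(np.float32)
q_model, s_model = quantize(global_model)
received_model = dequantize(q_model, s_model)

# Uplink: a client sends a quantized update (difference after local training).
local_model = received_model - 0.01 * np.random.randn(1000).astype(np.float32)
update = local_model - received_model
q_upd, s_upd = quantize(update)
print("downlink error:", float(np.abs(global_model - received_model).max()))
print("uplink   error:", float(np.abs(update - dequantize(q_upd, s_upd)).max()))
```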
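For the DFL-DDS entry earlier in this list (diversifying data sources in vehicular DFL), a minimal sketch of the described weight tuning follows. That entry does not specify the reference distribution for the KL term or the optimizer, so the uniform target, the two-neighbor setting, and the coarse grid search below are assumptions made purely for illustration.

```python
# Hypothetical sketch of the DFL-DDS weight tuning summarized above: each vehicle
# tracks how much each data source has contributed to its model (the state vector)
# and picks aggregation weights for incoming neighbor models so that the updated
# state vector has low KL divergence from a diverse (here: uniform) target.
# The uniform target and the simple grid search are illustrative assumptions.
import numpy as np

NUM_SOURCES = 4

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def tune_aggregation_weights(state, neighbor_states, grid=11):
    """Pick convex weights over two neighbors minimizing KL(updated state || uniform)."""
    uniform = np.ones(NUM_SOURCES) / NUM_SOURCES
    best_w, best_kl = None, np.inf
    for a in np.linspace(0.0, 1.0, grid):     # coarse search over the 2-neighbor simplex
        w = np.array([a, 1.0 - a])
        new_state = state + sum(wi * ns for wi, ns in zip(w, neighbor_states))
        new_state = new_state / new_state.sum()
        d = kl(new_state, uniform)
        if d < best_kl:
            best_w, best_kl = w, d
    return best_w, best_kl

state = np.array([5.0, 1.0, 1.0, 0.0])          # this vehicle has mostly seen source 0
neighbors = [np.array([0.0, 0.0, 3.0, 3.0]),    # neighbor rich in sources 2 and 3
             np.array([4.0, 1.0, 0.0, 0.0])]    # neighbor similar to ourselves
w, d = tune_aggregation_weights(state, neighbors)
print("aggregation weights:", w, "KL to uniform:", round(d, 4))
```

Under these assumptions the search favors the neighbor holding under-represented sources, which is the diversification behavior the entry describes.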