Towards a Federated Learning Framework for Heterogeneous Devices of
Internet of Things
- URL: http://arxiv.org/abs/2105.14675v1
- Date: Mon, 31 May 2021 02:08:36 GMT
- Title: Towards a Federated Learning Framework for Heterogeneous Devices of
Internet of Things
- Authors: Huanle Zhang, Jeonghoon Kim
- Abstract summary: Federated Learning (FL) has received a significant amount of attention in the industry and research community.
In this paper, we propose an FL framework targeting the heterogeneity of IoT devices.
We conduct preliminary experiments to illustrate that our framework can facilitate the design of IoT-aware FL.
- Score: 0.30458514384586394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has received a significant amount of attention in the
industry and research community due to its capability of keeping data on local
devices. To aggregate the gradients of local models when training the global
model, existing works require that the global model and the local models share
the same architecture.
However, Internet of Things (IoT) devices are inherently diverse regarding
computation speed and onboard memory. In this paper, we propose an FL framework
targeting the heterogeneity of IoT devices. Specifically, local models are
compressed from the global model, and the gradients of the compressed local
models are used to update the global model. We conduct preliminary experiments
to illustrate that our framework can facilitate the design of IoT-aware FL.
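The abstract states the mechanism (local models are compressed from the global model, and the gradients of the compressed models update it) but not the compression operator. Below is a minimal sketch under stated assumptions: magnitude pruning as the compression step, a toy quadratic loss in place of on-device training, and per-coordinate averaging over the devices that kept each weight.

```python
# Minimal sketch of heterogeneity-aware FL via model compression.
# ASSUMPTIONS: magnitude pruning as the compression operator and a toy
# quadratic loss; the paper does not specify either.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10
global_model = rng.normal(size=DIM)

def compress(weights, keep_ratio):
    """Keep only the largest-magnitude weights (assumed operator)."""
    k = max(1, int(keep_ratio * weights.size))
    mask = np.zeros_like(weights, dtype=bool)
    mask[np.argsort(np.abs(weights))[-k:]] = True
    return weights * mask, mask

def local_gradient(weights, data):
    """Gradient of a toy loss ||w - data||^2, standing in for local training."""
    return 2.0 * (weights - data)

device_capacity = [1.0, 0.5, 0.3]                 # heterogeneous IoT devices
device_data = [rng.normal(size=DIM) for _ in device_capacity]

for _ in range(50):
    grad_sum, mask_count = np.zeros(DIM), np.zeros(DIM)
    for cap, data in zip(device_capacity, device_data):
        local_w, mask = compress(global_model, cap)
        grad_sum += local_gradient(local_w, data) * mask  # compressed-model grad
        mask_count += mask
    # Update each coordinate with the average over the devices that hold it.
    global_model -= 0.1 * grad_sum / np.maximum(mask_count, 1.0)
```

The per-coordinate normalization is one plausible way to reconcile gradients from differently sized local models; the paper may aggregate differently.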
Related papers
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL, however, remains a challenge that cannot be ignored.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts, as sketched below.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
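As a rough illustration of the soft-prompt exchange, the sketch below keeps each participant's model private and federates only a shared prompt vector; the linear "models" and squared-error loss are assumptions, not the paper's setup.

```python
# Sketch: participants exchange only a tunable soft prompt, never model weights.
# ASSUMPTIONS: frozen linear private models and a toy squared-error loss.
import numpy as np

rng = np.random.default_rng(1)
PROMPT_DIM = 8
prompt = np.zeros(PROMPT_DIM)                 # shared, tunable soft prompt

# Private per-client parameters: these never leave the device.
clients = [{"W": rng.normal(size=(PROMPT_DIM, PROMPT_DIM)),
            "target": rng.normal(size=PROMPT_DIM)} for _ in range(3)]

LR = 0.05
for _ in range(100):
    updates = []
    for c in clients:
        out = c["W"] @ prompt                 # frozen private model, prompt input
        grad_out = 2.0 * (out - c["target"])  # d(loss)/d(out)
        grad_prompt = c["W"].T @ grad_out     # backprop to the prompt only
        updates.append(prompt - LR * grad_prompt)
    prompt = np.mean(updates, axis=0)         # server averages prompt updates
```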
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a new algorithm that performs class-prototype similarity distillation in a federated framework to align the local and global models; a simplified logit-alignment sketch follows.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
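The summary describes aligning local and global models by acting on their logits; FedCSD's class prototypes are not detailed here, so this sketch shows only a generic logit-alignment distillation term added to a toy local update (the linear classifier and squared-error losses are assumptions).

```python
# Sketch of logit-alignment distillation during local training.
# ASSUMPTIONS: linear classifier, squared-error task and distillation losses.
import numpy as np

rng = np.random.default_rng(2)
DIM, CLASSES = 5, 3
W_global = rng.normal(size=(CLASSES, DIM))    # fixed teacher for this round
x = rng.normal(size=DIM)
y = np.eye(CLASSES)[0]                        # one-hot label
LAMBDA, LR = 0.5, 0.1

W_local = W_global.copy()
for _ in range(20):
    logits_local = W_local @ x
    logits_global = W_global @ x
    g_task = np.outer(logits_local - y, x)               # fit the label
    g_distill = np.outer(logits_local - logits_global, x)  # stay near global
    W_local -= LR * (g_task + LAMBDA * g_distill)
```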
- Fed-FSNet: Mitigating Non-I.I.D. Federated Learning via Fuzzy Synthesizing Network [19.23943687834319]
Federated learning (FL) has emerged as a promising privacy-preserving distributed machine learning framework.
We propose Fed-FSNet, a novel FL training framework that uses a properly designed Fuzzy Synthesizing Network (FSNet) to mitigate the non-I.I.D. issue at its source.
arXiv Detail & Related papers (2022-08-21T18:40:51Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose FedFTG, a data-free knowledge distillation method that fine-tunes the global model on the server.
Our FedFTG significantly outperforms state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD; a simplified sketch follows.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
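FedFTG trains a generator for data-free distillation; the generator is not described in this summary, so random pseudo-inputs stand in for it below, with the global (student) model fit to the averaged client (teacher) logits.

```python
# Sketch of server-side, data-free fine-tuning of the global model.
# ASSUMPTIONS: random pseudo-inputs replace FedFTG's learned generator;
# linear client/global models and a squared-error matching loss.
import numpy as np

rng = np.random.default_rng(3)
DIM, CLASSES = 5, 3
client_Ws = [rng.normal(size=(CLASSES, DIM)) for _ in range(4)]
W_global = np.mean(client_Ws, axis=0)          # FedAvg starting point

for _ in range(100):
    x = rng.normal(size=DIM)                   # pseudo-datum (generator stand-in)
    teacher = np.mean([W @ x for W in client_Ws], axis=0)  # ensemble logits
    student = W_global @ x
    W_global -= 0.05 * np.outer(student - teacher, x)      # match the ensemble
```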
- FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization [16.704006420306353]
A scalable federated learning framework should address heterogeneous clients equipped with different computation and communication capabilities.
This paper proposes FedHM, a novel federated model compression framework that distributes the heterogeneous low-rank models to clients and then aggregates them into a global full-rank model.
Our solution enables the training of heterogeneous local models with varying computational complexities and aggregates them into a single global model; a truncated-SVD sketch follows.
arXiv Detail & Related papers (2021-11-29T16:11:09Z)
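A sketch of the low-rank idea: each client receives a truncated-SVD factorization of the global weight matrix at a rank matching its capacity, and the server aggregates the returned low-rank products back into a full-rank model. The ranks, the random "training" perturbation, and uniform averaging are illustrative assumptions.

```python
# Sketch of low-rank distribution and full-rank aggregation (FedHM-style).
# ASSUMPTIONS: per-client ranks, random perturbation in place of local
# training, and uniform averaging of the reconstructed matrices.
import numpy as np

rng = np.random.default_rng(4)
W_global = rng.normal(size=(16, 16))
client_ranks = [16, 8, 4]                     # heterogeneous capacities

reconstructed = []
for r in client_ranks:
    U, s, Vt = np.linalg.svd(W_global, full_matrices=False)
    A = U[:, :r] * s[:r]                      # low-rank factor sent downlink
    B = Vt[:r, :]
    A -= 0.01 * rng.normal(size=A.shape)      # stand-in for local training
    B -= 0.01 * rng.normal(size=B.shape)
    reconstructed.append(A @ B)               # client returns low-rank product
W_global = np.mean(reconstructed, axis=0)     # aggregate into full-rank model
```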
- Gradual Federated Learning with Simulated Annealing [26.956032164461377]
Federated averaging (FedAvg) is a popular federated learning (FL) technique that updates the global model by averaging local models.
In this paper, we propose a new FL technique based on simulated annealing, termed SAFL.
We show that SAFL outperforms the conventional FedAvg technique in both convergence speed and classification accuracy; a minimal sketch follows.
arXiv Detail & Related papers (2021-10-11T11:57:56Z)
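The summary defines FedAvg but not the annealing schedule, so the sketch below is a plain FedAvg loop with a hypothetical temperature-controlled choice between local and global initialization to indicate where simulated annealing could enter; it is not the paper's actual SAFL procedure.

```python
# FedAvg baseline as described above; the annealing hook is a HYPOTHETICAL
# illustration, not the paper's actual SAFL schedule.
import numpy as np

rng = np.random.default_rng(5)
DIM, CLIENTS, ROUNDS = 6, 4, 30
global_w = np.zeros(DIM)
local_ws = [np.zeros(DIM) for _ in range(CLIENTS)]
targets = [rng.normal(size=DIM) for _ in range(CLIENTS)]

for t in range(ROUNDS):
    temperature = max(0.01, 1.0 - t / ROUNDS)   # assumed cooling schedule
    for i in range(CLIENTS):
        # Hypothetical annealing: with cooling probability, keep the local
        # model instead of resetting to the global one.
        start = local_ws[i] if rng.random() < temperature else global_w.copy()
        local_ws[i] = start - 0.1 * 2.0 * (start - targets[i])  # one local step
    global_w = np.mean(local_ws, axis=0)        # FedAvg aggregation
```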
- Federated Learning with Downlink Device Selection [92.14944020945846]
We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices for training using their private local data.
We consider device selection based on the downlink channels over which the PS shares the global model with the devices; a toy selection rule is sketched below.
arXiv Detail & Related papers (2021-07-07T22:42:39Z)
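A toy reading of downlink-based selection: each round, pick the devices with the best downlink gains to receive the global model. The top-k rule and Rayleigh fading are assumptions; the paper's criterion may differ.

```python
# Sketch of downlink-based device selection. ASSUMPTIONS: Rayleigh-fading
# gains and a top-k rule; the abstract only says selection uses downlink
# channels.
import numpy as np

rng = np.random.default_rng(6)
NUM_DEVICES, K = 10, 4
downlink_gain = rng.rayleigh(size=NUM_DEVICES)   # per-round channel gains
selected = np.argsort(downlink_gain)[-K:]        # pick the K best channels
print("devices receiving the global model this round:", sorted(selected))
```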
- Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm in which both the global model and the local model updates are quantized before being transmitted; see the sketch below.
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
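A sketch of the lossy-FL idea: quantize the global model before the downlink broadcast, and quantize each local update before the uplink. The uniform quantizer and toy local step are assumptions.

```python
# Sketch of lossy FL (LFL): quantized downlink model, quantized uplink
# updates. ASSUMPTIONS: uniform quantizer and a toy quadratic local step.
import numpy as np

rng = np.random.default_rng(7)

def quantize(x, bits=4):
    """Uniform quantization to 2**bits levels over the tensor's range."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo + 1e-12) * levels)
    return lo + q * (hi - lo + 1e-12) / levels

global_w = rng.normal(size=8)
targets = [rng.normal(size=8) for _ in range(3)]
for _ in range(20):
    w_down = quantize(global_w)                      # downlink: quantized model
    updates = []
    for t in targets:
        local_w = w_down - 0.1 * 2.0 * (w_down - t)  # one local step
        updates.append(quantize(local_w - w_down))   # uplink: quantized update
    global_w = w_down + np.mean(updates, axis=0)     # server knows w_down
```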
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices, as sketched after this entry.
We also evaluate on the task of personalized mood prediction from real-world mobile data, where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
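A sketch of the local/global split: each device keeps a private encoder that produces a compact local representation, and only a shared head is federated. The linear encoder/head and toy regression target are assumptions.

```python
# Sketch of local/global representation splitting: private per-device
# encoders, federated shared head. ASSUMPTIONS: linear layers, toy target.
import numpy as np

rng = np.random.default_rng(8)
IN, HID = 6, 4
encoders = [rng.normal(size=(HID, IN)) for _ in range(3)]  # stay on-device
head = np.zeros(HID)                                       # shared globally

for _ in range(30):
    head_updates = []
    for enc in encoders:
        x = rng.normal(size=IN)
        z = enc @ x                          # compact local representation
        y = 1.0                              # toy regression target
        grad = 2.0 * (head @ z - y) * z      # gradient w.r.t. shared head only
        head_updates.append(head - 0.05 * grad)
    head = np.mean(head_updates, axis=0)     # only the head is averaged
```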
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.