End-to-End Evaluation of Federated Learning and Split Learning for
Internet of Things
- URL: http://arxiv.org/abs/2003.13376v2
- Date: Sun, 2 Aug 2020 08:51:07 GMT
- Title: End-to-End Evaluation of Federated Learning and Split Learning for
Internet of Things
- Authors: Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa,
Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal
- Abstract summary: This work is the first attempt to evaluate and compare federated learning (FL) and split neural networks (SplitNN) in real-world IoT settings.
For learning performance, we empirically evaluate both FL and SplitNN under different types of data distributions.
We show that the learning performance of SplitNN is better than FL under an imbalanced data distribution, but worse than FL under an extreme non-IID data distribution.
- Score: 30.47229934329905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work is the first attempt to evaluate and compare federated
(FL) and split neural networks (SplitNN) in real-world IoT settings in terms of
learning performance and device implementation overhead. We consider a variety
of datasets, different model architectures, multiple clients, and various
performance metrics. For learning performance, which is specified by the model
accuracy and convergence speed metrics, we empirically evaluate both FL and
SplitNN under different types of data distributions such as imbalanced and
non-independent and identically distributed (non-IID) data. We show that the
learning performance of SplitNN is better than FL under an imbalanced data
distribution, but worse than FL under an extreme non-IID data distribution. For
implementation overhead, we deploy both FL and SplitNN end-to-end on Raspberry
Pis and comprehensively evaluate overheads including training time,
communication overhead in a real LAN setting, power consumption, and memory
usage. Our key observation is that in IoT scenarios where communication traffic
is the main concern, FL performs better than SplitNN because it has
significantly lower communication overhead, empirically corroborating previous
statistical analyses. In addition, we reveal several previously unrecognized
limitations of SplitNN, forming the basis for future research.
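To make the communication argument concrete, here is a minimal back-of-the-envelope sketch in Python. All sizes are illustrative assumptions, not figures from the paper: FL uploads one model update per round, while SplitNN exchanges cut-layer activations and their gradients for every training sample.

```python
# Back-of-the-envelope comparison of uplink traffic for FL vs. SplitNN.
# All sizes below are illustrative assumptions, not figures from the paper.

def fl_traffic_bytes(num_params, bytes_per_value=4):
    """FL: each client uploads one full model update per round."""
    return num_params * bytes_per_value

def splitnn_traffic_bytes(num_samples, activation_dim, bytes_per_value=4):
    """SplitNN: each client exchanges cut-layer activations (up) and their
    gradients (down) for every training sample in every epoch."""
    return 2 * num_samples * activation_dim * bytes_per_value

# Hypothetical setting: ~1M-parameter model, 10,000 samples on the client,
# cut-layer activation of 4,096 floats. (An FL round typically covers one
# or more local epochs, so the comparison is indicative, not exact.)
print(f"FL per round:      {fl_traffic_bytes(1_000_000) / 1e6:.1f} MB")
print(f"SplitNN per epoch: {splitnn_traffic_bytes(10_000, 4_096) / 1e6:.1f} MB")
```

Under these assumed sizes, SplitNN moves roughly two orders of magnitude more data than FL, consistent with the paper's conclusion that FL suits traffic-constrained IoT deployments.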
Related papers
- A Universal Metric of Dataset Similarity for Cross-silo Federated Learning [0.0]
Federated learning is increasingly used in domains such as healthcare to facilitate model training without data-sharing.
In this paper, we propose a novel metric for assessing dataset similarity.
We show that our metric exhibits a robust and interpretable relationship with model performance and can be calculated in a privacy-preserving manner.
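The abstract does not specify the metric itself; as a generic illustration of privacy-aware dataset similarity, the sketch below compares per-silo label histograms with the Jensen-Shannon distance (all names and numbers are hypothetical, not the paper's method):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def label_distribution(labels, num_classes):
    """Normalized class histogram; sharing only this summary (rather than
    raw data) keeps the comparison relatively privacy-friendly."""
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()

# Two hypothetical silos with different class balances.
silo_a = np.random.randint(0, 10, size=5000)
silo_b = np.random.choice(10, size=5000, p=[0.3] + [0.7 / 9] * 9)

dist = jensenshannon(label_distribution(silo_a, 10),
                     label_distribution(silo_b, 10))
print(f"JS distance between silo label distributions: {dist:.3f}")
```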
arXiv Detail & Related papers (2024-04-29T15:08:24Z)
- FedConv: Enhancing Convolutional Neural Networks for Handling Data Heterogeneity in Federated Learning [34.37155882617201]
Federated learning (FL) is an emerging paradigm in machine learning, where a shared model is collaboratively learned using data from multiple devices.
We systematically investigate the impact of different architectural elements, such as activation functions and normalization layers, on the performance within heterogeneous FL.
Our findings indicate that with strategic architectural modifications, pure CNNs can achieve a level of robustness that either matches or even exceeds that of ViTs.
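FedConv's specific architectural changes are not detailed in this summary; the sketch below shows only the standard FedAvg aggregation step that any such architecture plugs into (names and sizes are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: sample-size-weighted average of client models.
    (FedConv's contribution is inside the architecture; the aggregation
    step sketched here is the usual one.)"""
    total = sum(client_sizes)
    return {
        name: sum((n / total) * w[name]
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Hypothetical: three clients, each holding a dict of layer-name -> ndarray.
clients = [{"conv1": np.random.randn(3, 3)} for _ in range(3)]
global_model = fedavg(clients, client_sizes=[100, 400, 500])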
arXiv Detail & Related papers (2023-10-06T17:57:50Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable optimization problem, providing closed-form solutions for the beamformers.
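As a rough illustration only (the paper's actual scheme, including the beamformer design, is more involved), a SemiFL-style round might blend a centralized step at the BS with a FedAvg-style device aggregate:

```python
import numpy as np

def semifl_round(global_w, bs_grad, device_ws, device_sizes, lr=0.1, mix=0.5):
    """Toy hybrid update, assuming the base station (BS) holds data it can
    train on centrally while devices contribute FedAvg-style updates.
    `mix` trades off the centralized and federated components; both the
    parameter and the blending rule are illustrative assumptions."""
    total = sum(device_sizes)
    fed_w = sum((n / total) * w for w, n in zip(device_ws, device_sizes))
    cl_w = global_w - lr * bs_grad  # centralized gradient step at the BS
    return mix * cl_w + (1 - mix) * fed_w
```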
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
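A minimal sketch of that split, assuming models are dicts of NumPy arrays and using simple magnitude pruning (the paper's adaptive criterion may differ):

```python
import numpy as np

def split_params(model, global_keys):
    """Partition parameters into a shared global part (aggregated across
    devices, optionally pruned) and a personalized part kept local."""
    glob = {k: v for k, v in model.items() if k in global_keys}
    pers = {k: v for k, v in model.items() if k not in global_keys}
    return glob, pers

def prune_by_magnitude(weights, keep_ratio=0.5):
    """Zero out the smallest-magnitude entries of each tensor -- a common
    pruning heuristic, standing in for the paper's adaptive scheme."""
    pruned = {}
    for k, w in weights.items():
        thresh = np.quantile(np.abs(w), 1 - keep_ratio)
        pruned[k] = np.where(np.abs(w) >= thresh, w, 0.0)
    return pruned
```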
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Enhancing Efficiency in Multidevice Federated Learning through Data Selection [11.67484476827617]
Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.
In this paper, we develop an FL framework to incorporate on-device data selection on such constrained devices.
We show that our framework achieves 19% higher accuracy and 58% lower latency compared to the baseline FL without our data-selection strategies.
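The selection criterion is not given in this summary; one common heuristic is to train on the samples the current model finds hardest, sketched below with hypothetical per-sample losses:

```python
import numpy as np

def select_topk_by_loss(losses, k):
    """Pick the k highest-loss samples -- a common data-selection heuristic,
    not necessarily the paper's criterion."""
    return np.argsort(losses)[-k:]

# Hypothetical per-sample losses computed on a constrained device.
losses = np.random.rand(1000)
train_idx = select_topk_by_loss(losses, k=128)
```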
arXiv Detail & Related papers (2022-11-08T11:39:17Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL is still not explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
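One natural baseline for the limited-storage setting is classic reservoir sampling, which maintains a uniform random sample of the stream in a fixed-size on-device buffer (the paper studies more informed selection policies):

```python
import random

class ReservoirBuffer:
    """Fixed-capacity buffer holding a uniform sample of a data stream
    (classic reservoir sampling, 'Algorithm R')."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Replace an existing item with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample
```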
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
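A minimal sketch of the superposition-training (ST) side, assuming a PyTorch-style slimmable model with a hypothetical `set_width` hook; the superposition-coding (SC) uplink is not modeled here:

```python
import torch.nn.functional as F

def st_step(model, x, y, optimizer, widths=(0.5, 1.0)):
    """Superposition-training sketch: accumulate losses from several width
    configurations of one slimmable network in a single update.
    `model.set_width` is a hypothetical hook for switching sub-network width."""
    optimizer.zero_grad()
    loss = 0.0
    for w in widths:
        model.set_width(w)                        # hypothetical width switch
        loss = loss + F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
```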
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging only the locally trained models of mobile devices. Training SNNs is, however, non-trivial, particularly over wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
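A toy two-level aggregation round (devices to edge nodes to cloud), with models as NumPy arrays; the paper's user-assignment and resource-allocation optimization is not modeled here:

```python
def average(models, sizes):
    """Sample-size-weighted average of model arrays."""
    total = sum(sizes)
    return sum((n / total) * m for m, n in zip(models, sizes))

def hierarchical_round(edge_groups):
    """edge_groups: list of (device_models, device_sizes), one per edge node.
    Devices aggregate at their edge node first, then edges aggregate at
    the cloud -- the basic structure of hierarchical FL."""
    edge_models, edge_sizes = [], []
    for device_models, device_sizes in edge_groups:
        edge_models.append(average(device_models, device_sizes))
        edge_sizes.append(sum(device_sizes))
    return average(edge_models, edge_sizes)  # cloud-level aggregation
```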
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) based FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
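A simplified sketch of over-the-air aggregation with a single shared precoding factor (COTAF's actual time-varying precoder and its analysis are more refined):

```python
import numpy as np

def ota_aggregate(updates, power=1.0, noise_std=0.1, rng=np.random):
    """Over-the-air aggregation sketch: clients scale (precode) their updates
    to a shared power budget, the channel sums them, and the server rescales.
    The single global factor here stands in for COTAF's per-round precoder."""
    # Common precoding factor so every client meets the power constraint.
    alpha = np.sqrt(power) / max(np.linalg.norm(u) for u in updates)
    received = sum(alpha * u for u in updates)
    received += rng.normal(0.0, noise_std, size=received.shape)  # channel noise
    return received / (alpha * len(updates))  # server-side de-scaling / average
```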
arXiv Detail & Related papers (2020-09-27T08:28:25Z)