Federated Split Learning with Only Positive Labels for
resource-constrained IoT environment
- URL: http://arxiv.org/abs/2307.13266v1
- Date: Tue, 25 Jul 2023 05:33:06 GMT
- Title: Federated Split Learning with Only Positive Labels for
resource-constrained IoT environment
- Authors: Praveen Joshi, Chandra Thapa, Mohammed Hasanuzzaman, Ted Scully, and
Haithem Afli
- Abstract summary: Distributed collaborative machine learning (DCML) is a promising method in the Internet of Things (IoT) domain for training deep learning models.
Among DCML techniques, federated split learning, known as splitfed learning (SFL), is the most suitable for efficient training and testing when devices have limited computational capabilities.
We propose splitfed learning with positive labels (SFPL) and show that it outperforms SFL when resource-constrained IoT devices have only positive labeled data.
- Score: 4.055662817794178
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Distributed collaborative machine learning (DCML) is a promising method in
the Internet of Things (IoT) domain for training deep learning models, as data
is distributed across multiple devices. A key advantage of this approach is
that it not only improves data privacy by removing the need for centralized
aggregation of raw data, but also empowers IoT devices with low computational
power. Among various techniques in a DCML framework, federated split learning,
known as splitfed learning (SFL), is the most suitable for efficient training
and testing when devices have limited computational capabilities. Nevertheless,
when resource-constrained IoT devices have only positive labeled data,
multiclass classification deep learning models in SFL fail to converge or
provide suboptimal results. To overcome these challenges, we propose splitfed
learning with positive labels (SFPL). SFPL applies a random shuffling function
to the smashed data received from clients before supplying it to the server for
model training. Additionally, SFPL incorporates local batch normalization
for the client-side model portion during the inference phase. Our results
demonstrate that SFPL outperforms SFL: (i) by factors of 51.54 and 32.57 for
ResNet-56 and ResNet-32, respectively, with the CIFAR-100 dataset, and (ii) by
factors of 9.23 and 8.52 for ResNet-32 and ResNet-8, respectively, with the
CIFAR-10 dataset. Overall, this investigation underscores the efficacy of the
proposed SFPL framework in DCML.
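The two mechanisms SFPL adds on top of SFL (server-side shuffling of smashed data and client-local batch normalization at inference) can be illustrated with a short sketch. The following PyTorch example is a minimal, hedged illustration, not the authors' implementation: the split point, layer shapes, and the omission of client-side backpropagation are assumptions made for brevity.

```python
# Minimal sketch of the two SFPL ideas from the abstract (not the authors' code):
# (1) the server shuffles the smashed data it receives from clients before training;
# (2) each client keeps its own (local) batch-norm statistics for inference.
import torch
import torch.nn as nn

class ClientNet(nn.Module):
    """Client-side model portion: small conv front-end with *local* batch norm."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.BatchNorm2d(16),              # local BN: statistics stay on the client
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )

    def forward(self, x):
        return self.features(x)              # "smashed data" sent to the server

class ServerNet(nn.Module):
    """Server-side model portion: classifier head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 4 * 4, num_classes))

    def forward(self, smashed):
        return self.head(smashed)

def server_train_step(server, optimizer, smashed_batches, label_batches):
    """Concatenate smashed data from all clients (each holding a single positive
    class), shuffle it so one server batch mixes classes, then train the head."""
    smashed = torch.cat(smashed_batches)          # [N, 16, 4, 4]
    labels = torch.cat(label_batches)             # [N]
    perm = torch.randperm(smashed.size(0))        # random shuffling function
    smashed, labels = smashed[perm], labels[perm]
    loss = nn.functional.cross_entropy(server(smashed), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy round: two clients, each with data from a single (positive) class.
clients = [ClientNet(), ClientNet()]
server = ServerNet()
opt = torch.optim.SGD(server.parameters(), lr=0.1)
smashed, labels = [], []
for cls, client in enumerate(clients):
    x = torch.randn(8, 3, 32, 32)                 # this client's local images
    y = torch.full((8,), cls, dtype=torch.long)   # only one label per client
    smashed.append(client(x).detach())            # detached for brevity; real SFL
    labels.append(y)                              # also backpropagates to clients
print("server loss:", server_train_step(server, opt, smashed, labels))
```

At inference time, each client would keep using the batch-normalization statistics accumulated from its own data rather than globally aggregated statistics, which is the second change the abstract describes.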
Related papers
- Enhancing Federated Learning Convergence with Dynamic Data Queue and Data Entropy-driven Participant Selection [13.825031686864559]
Federated Learning (FL) is a decentralized approach for collaborative model training on edge devices.
We present a method to improve convergence in FL by creating a global subset of data on the server and dynamically distributing it across devices.
Our approach results in a substantial accuracy boost of approximately 5% for the MNIST dataset, around 18% for CIFAR-10, and 20% for CIFAR-100 with a 10% global subset of data, outperforming the state-of-the-art (SOTA) aggregation algorithms.
arXiv Detail & Related papers (2024-10-23T11:47:04Z) - Efficient Federated Intrusion Detection in 5G ecosystem using optimized BERT-based model [0.7100520098029439]
5G offers advanced services, supporting applications such as intelligent transportation, connected healthcare, and smart cities within the Internet of Things (IoT).
These advancements introduce significant security challenges, with increasingly sophisticated cyber-attacks.
This paper proposes a robust intrusion detection system (IDS) using federated learning and large language models (LLMs).
arXiv Detail & Related papers (2024-09-28T15:56:28Z) - R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models [83.77114091471822]
Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML).
A challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming.
This is particularly pronounced for word embedding parameters in large language models (LLMs), which are crucial for language understanding.
A physical layer framework is developed for resilient SFL with LLMs (R-SFLLM) over wireless networks.
arXiv Detail & Related papers (2024-07-16T12:21:29Z) - SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data distributed over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - SemiSFL: Split Federated Learning on Unlabeled and Non-IID Data [34.49090830845118]
Federated Learning (FL) has emerged to allow multiple clients to collaboratively train machine learning models on their private data at the network edge.
We propose a novel Semi-supervised SFL system, termed SemiSFL, which incorporates clustering regularization to perform SFL with unlabeled and non-IID client data.
Our system provides a 3.8x speed-up in training time, reduces the communication cost by about 70.3% while reaching the target accuracy, and achieves up to 5.8% improvement in accuracy under non-IID scenarios.
arXiv Detail & Related papers (2023-07-29T02:35:37Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized
Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and provide an analytical treatment of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z) - Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution for privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
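The ternary compression idea in the last entry can be made concrete with a short sketch. Below is a minimal, hedged example of ternary quantization of a client update, assuming a simple magnitude threshold and a single per-tensor scale; T-FedAvg's actual quantizer, thresholds, and aggregation rule may differ.

```python
# Illustrative ternary quantization of a model update, in the spirit of the
# T-FedAvg entry above; the threshold and scaling rule here are assumptions,
# not the protocol from the paper.
import torch

def ternarize(delta: torch.Tensor, threshold_ratio: float = 0.05):
    """Map a dense update to {-1, 0, +1} signs plus one scale factor alpha."""
    threshold = threshold_ratio * delta.abs().max()
    mask = delta.abs() > threshold                      # keep only large entries
    signs = (torch.sign(delta) * mask).to(torch.int8)   # values in {-1, 0, +1}
    alpha = delta[mask].abs().mean() if mask.any() else torch.tensor(0.0)
    return signs, alpha   # int8 signs are 4x smaller than float32 and packable further

def dequantize(signs: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    """Reconstruct an approximate dense update from the ternary encoding."""
    return signs.to(torch.float32) * alpha

# Example: compress a synthetic client update and check the reconstruction error.
update = torch.randn(1000)
signs, alpha = ternarize(update)
approx = dequantize(signs, alpha)
print("nonzero fraction:", (signs != 0).float().mean().item())
print("mse:", torch.mean((update - approx) ** 2).item())
```

Because each transmitted entry carries at most three possible values plus one shared scale, both upstream and downstream traffic can shrink substantially relative to 32-bit floats, which is the communication saving the entry above reports.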