Advancing IIoT with Over-the-Air Federated Learning: The Role of Iterative Magnitude Pruning
- URL: http://arxiv.org/abs/2403.14120v1
- Date: Thu, 21 Mar 2024 04:15:56 GMT
- Title: Advancing IIoT with Over-the-Air Federated Learning: The Role of Iterative Magnitude Pruning
- Authors: Fazal Muhammad Ali Khan, Hatem Abou-Zeid, Aryan Kaushik, Syed Ali Hassan,
- Abstract summary: Industrial Internet of Things (IIoT) under Industry 4.0 heralds an era of interconnected smart devices.
Federated learning (FL) addresses data privacy and security concerns among devices.
FL enables edge sensors to learn and adapt using their data locally, without explicit sharing of confidential data.
- Score: 14.818439341517733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The industrial Internet of Things (IIoT) under Industry 4.0 heralds an era of interconnected smart devices where data-driven insights and machine learning (ML) fuse to revolutionize manufacturing. A noteworthy development in IIoT is the integration of federated learning (FL), which addresses data privacy and security among devices. FL enables edge sensors, also known as peripheral intelligence units (PIUs), to learn and adapt using their data locally, without explicit sharing of confidential data, to facilitate a collaborative yet confidential learning process. However, the lower memory footprint and computational power of PIUs inherently require deep neural network (DNN) models with a very compact size. Model compression techniques such as pruning can reduce the size of DNN models by removing unnecessary connections that have little impact on the model's performance, thus making the models more suitable for the limited resources of PIUs. Targeting the notion of compact yet robust DNN models, we propose the integration of iterative magnitude pruning (IMP) of the DNN model being trained in an over-the-air FL (OTA-FL) environment for IIoT. We provide a tutorial overview and also present a case study of the effectiveness of IMP in OTA-FL for an IIoT environment. Finally, we present future directions for enhancing and optimizing these deep compression techniques further, aiming to push the boundaries of IIoT capabilities in acquiring compact yet robust and high-performing DNN models.
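To make the idea concrete, below is a minimal, self-contained sketch (not the authors' code) of iterative magnitude pruning combined with a crude over-the-air style aggregation: local PIU updates are simulated, a growing fraction of the smallest-magnitude weights is zeroed each round, and the uploaded models are summed with additive noise to mimic the analog OTA channel. All names, model sizes, noise levels, and schedules here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, lr=0.01):
    # Stand-in for one epoch of local training on a PIU; the "gradient" is simulated noise.
    fake_grad = rng.normal(scale=0.1, size=weights.shape)
    return weights - lr * fake_grad

def magnitude_prune(weights, sparsity):
    # Zero out the smallest-magnitude entries so that `sparsity` fraction is removed.
    k = int(sparsity * weights.size)
    if k == 0:
        return weights
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def ota_aggregate(client_weights, noise_std=0.01):
    # Analog over-the-air aggregation: transmitted signals superpose on the channel,
    # and the receiver observes their sum plus additive noise.
    superposed = np.sum(client_weights, axis=0)
    noisy = superposed + rng.normal(scale=noise_std, size=superposed.shape)
    return noisy / len(client_weights)

num_clients, num_rounds = 5, 10
target_sparsity = 0.8                     # final fraction of weights to remove
global_w = rng.normal(size=(64, 32))      # toy dense layer shared by all PIUs

for t in range(num_rounds):
    # The "iterative" part of IMP: sparsity is increased gradually, round by round.
    sparsity_t = target_sparsity * (t + 1) / num_rounds
    local_models = [local_update(global_w) for _ in range(num_clients)]
    global_w = ota_aggregate(local_models)
    global_w = magnitude_prune(global_w, sparsity_t)

print("final fraction of zero weights:", float(np.mean(global_w == 0)))
```

Whether the pruning mask is applied at the server (as here) or on each PIU before transmission, and how realistically the OTA channel is modeled, are open design choices; the sketch only illustrates how the two mechanisms interact.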
Related papers
- The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning [6.9569682335746235]
Spiking Neural Networks (SNNs) have recently gained significant interest for on-chip learning in embedded devices.
In this paper, we explore the inherent robustness of SNNs under noisy communication in Federated Learning.
We propose a novel Federated Learning with TopK Sparsification algorithm to reduce the bandwidth usage for FL training; a minimal sketch of the top-k idea appears after this list.
arXiv Detail & Related papers (2024-09-19T13:37:18Z) - Fine-Tuning and Deploying Large Language Models Over Edges: Issues and Approaches [64.42735183056062]
Large language models (LLMs) have transitioned from specialized models to versatile foundation models.
LLMs exhibit impressive zero-shot ability; however, they require fine-tuning on local datasets and significant resources for deployment.
arXiv Detail & Related papers (2024-08-20T09:42:17Z) - Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - Deep Reinforcement Learning Assisted Federated Learning Algorithm for Data Management of IIoT [82.33080550378068]
The continuously expanding scale of the industrial Internet of Things (IIoT) leads to IIoT equipment generating massive amounts of user data every moment.
How to manage these time-series data efficiently and safely in the field of IIoT is still an open issue.
This paper studies applications of FL technology for managing IIoT equipment data in wireless network environments.
arXiv Detail & Related papers (2022-02-03T07:12:36Z) - Efficient Federated Learning for AIoT Applications Using Knowledge Distillation [2.5892786553124085]
Federated Learning (FL) trains a central model with decentralized data without compromising user privacy.
Traditional FL suffers from model inaccuracy since it trains local models using hard labels of data.
This paper presents a novel Distillation-based Federated Learning architecture that enables efficient and accurate FL for AIoT applications.
arXiv Detail & Related papers (2021-11-29T06:40:42Z) - Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL achieves an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z) - Compact CNN Structure Learning by Knowledge Distillation [34.36242082055978]
We propose a framework that leverages knowledge distillation along with customizable block-wise optimization to learn a lightweight CNN structure.
Our method achieves state-of-the-art network compression while attaining better inference accuracy.
In particular, for the already compact network MobileNet_v2, our method offers up to 2x and 5.2x better model compression.
arXiv Detail & Related papers (2021-04-19T10:34:22Z) - Adversarially Robust and Explainable Model Compression with On-Device Personalization for Text Classification [4.805959718658541]
On-device Deep Neural Networks (DNNs) have recently gained more attention due to the increasing computing power of mobile devices and the number of applications in Computer Vision (CV) and Natural Language Processing (NLP).
In NLP applications, although model compression has seen initial success, there are at least three major challenges yet to be addressed: adversarial robustness, explainability, and personalization.
Here we attempt to tackle these challenges by designing a new training scheme for model compression and adversarial robustness, including the optimization of an explainable feature mapping objective.
The resulting compressed model is personalized using on-device private training data via fine-tuning.
arXiv Detail & Related papers (2021-01-10T15:06:55Z) - Prune2Edge: A Multi-Phase Pruning Pipelines to Deep Ensemble Learning in IIoT [0.0]
We propose a novel edge-based multi-phase pruning pipeline for ensemble learning on IIoT devices.
In the first phase, we generate a diverse ensemble of pruned models; we then apply integer quantisation and, in the final phase, prune the generated ensemble using a clustering-based technique.
Our proposed approach outperformed the predictive performance of a baseline model.
arXiv Detail & Related papers (2020-04-09T17:44:34Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
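As a complement to the pruning sketch above, here is a similarly hedged sketch of the top-k update sparsification idea mentioned in the Spiking Neural Networks entry; the function name, tensor sizes, and choice of k are illustrative assumptions, not code from that paper.

```python
import numpy as np

def topk_sparsify(update, k):
    # Keep only the k largest-magnitude entries of a model update and zero the rest,
    # so a client transmits only k values (plus their indices) per FL round.
    flat = update.ravel()
    if k >= flat.size:
        return update.copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest magnitudes
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

# Toy usage: compress a dense update before uploading it to the FL server.
update = np.random.default_rng(1).normal(size=(256,))
compressed = topk_sparsify(update, k=25)
print("nonzero entries transmitted:", int(np.count_nonzero(compressed)))
```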
This list is automatically generated from the titles and abstracts of the papers in this site.