Optimizing Resource-Efficiency for Federated Edge Intelligence in IoT
Networks
- URL: http://arxiv.org/abs/2011.12691v1
- Date: Wed, 25 Nov 2020 12:51:59 GMT
- Title: Optimizing Resource-Efficiency for Federated Edge Intelligence in IoT
Networks
- Authors: Yong Xiao and Yingyu Li and Guangming Shi and H. Vincent Poor
- Abstract summary: We study an edge intelligence-based IoT network in which a set of edge servers learn a shared model using federated learning (FL).
We propose a novel framework, called federated edge intelligence (FEI), that allows edge servers to evaluate the required number of data samples according to the energy cost of the IoT network.
We prove that our proposed algorithm neither causes data leakage nor discloses any topological information of the IoT network.
- Score: 96.24723959137218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies an edge intelligence-based IoT network in which a set of
edge servers learn a shared model using federated learning (FL) based on the
datasets uploaded from a multi-technology-supported IoT network. The data
uploading performance of the IoT network and the computational capacity of the
edge servers are intertwined in their influence on the FL model training
process. We propose a novel framework, called federated edge intelligence
(FEI), that allows edge servers to evaluate the required number of data samples
according to the energy cost of the IoT network as well as their local data
processing capacity and only request the amount of data that is sufficient for
training a satisfactory model. We evaluate the energy cost of data uploading
when two widely used IoT solutions, licensed band IoT (e.g., 5G NB-IoT) and
unlicensed band IoT (e.g., Wi-Fi, ZigBee, and 5G NR-U), are available to each
IoT device. We prove that the cost minimization problem of the entire IoT
network is separable and can be divided into a set of subproblems, each of
which can be solved by an individual edge server. We also introduce a mapping
function to quantify the computational load of edge servers under different
combinations of three key parameters: size of the dataset, local batch size,
and number of local training passes. Finally, we adopt an Alternating Direction
Method of Multipliers (ADMM)-based approach to jointly optimize the energy cost of
the IoT network and average computing resource utilization of edge servers. We
prove that our proposed algorithm neither causes data leakage nor discloses
any topological information of the IoT network. Simulation results show that
our proposed framework significantly improves the resource efficiency of the
IoT network and edge servers with only a limited sacrifice in model
convergence performance.
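The separable structure and ADMM-based coordination described in the abstract can be illustrated with a toy consensus-ADMM sketch. The quadratic local costs, the penalty parameter, and all numbers below are illustrative stand-ins, not the paper's actual cost model: the point is only that each edge server minimizes its own cost term locally, a global averaging step couples the servers, and raw local data never has to be exchanged.

```python
import numpy as np

# Hypothetical quadratic local costs f_i(x) = 0.5 * a[i] * (x - b[i])**2,
# standing in for each edge server's energy/utilization cost (the paper's
# actual cost functions are more involved).
a = np.array([1.0, 2.0, 4.0])
b = np.array([3.0, -1.0, 2.0])
rho = 1.0  # ADMM penalty parameter (illustrative choice)

x = np.zeros(3)   # local variables, one per edge server
z = 0.0           # global consensus variable
u = np.zeros(3)   # scaled dual variables

for _ in range(200):
    # x-update: each server independently minimizes
    # f_i(x_i) + (rho/2) * (x_i - z + u_i)**2,
    # which has a closed form for quadratics; only x_i + u_i is shared.
    x = (a * b + rho * (z - u)) / (a + rho)
    # z-update: averaging step performed by a coordinator
    z = np.mean(x + u)
    # dual update
    u = u + x - z

# At convergence, z minimizes sum_i f_i, i.e. the weighted mean
# sum(a * b) / sum(a) of the local targets.
```

Because the x-update touches only server-local quantities, this mirrors the separability result claimed in the abstract: the coordinator sees averaged iterates, not datasets or network topology.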
Related papers
- Edge-assisted U-Shaped Split Federated Learning with Privacy-preserving
for Internet of Things [4.68267059122563]
We present an innovative Edge-assisted U-Shaped Split Federated Learning (EUSFL) framework, which harnesses the high-performance capabilities of edge servers.
In this framework, we leverage Federated Learning (FL) to enable data holders to collaboratively train models without sharing their data.
We also propose a novel noise mechanism called LabelDP to ensure that data features and labels can securely resist reconstruction attacks.
arXiv Detail & Related papers (2023-11-08T05:14:41Z) - Service Discovery in Social Internet of Things using Graph Neural
Networks [1.552282932199974]
Internet-of-Things (IoT) networks intelligently connect thousands of physical entities to provide various services for the community.
These networks are expanding exponentially, which complicates the process of discovering IoT devices in the network and requesting the corresponding services from them.
We propose a scalable resource allocation neural model adequate for heterogeneous large-scale IoT networks.
arXiv Detail & Related papers (2022-05-25T12:25:37Z) - Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized
Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and conduct an analytical study of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z) - Computational Intelligence and Deep Learning for Next-Generation
Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL achieves an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z) - A Joint Energy and Latency Framework for Transfer Learning over 5G
Industrial Edge Networks [53.26338041079138]
We propose a transfer learning-enabled edge-CNN framework for 5G industrial edge networks.
In particular, the edge server can use the existing image dataset to train the CNN in advance.
With the aid of TL, the devices that are not participating in the training only need to fine-tune the trained edge-CNN model without training from scratch.
arXiv Detail & Related papers (2021-04-19T15:13:16Z) - The Case for Retraining of ML Models for IoT Device Identification at
the Edge [0.026215338446228163]
We show how to identify IoT devices based on their network behavior using resources available at the edge of the network.
Device identification and categorization can be achieved at the edge with over 80% and 90% accuracy, respectively.
arXiv Detail & Related papers (2020-11-17T13:01:04Z) - Federated Learning in Mobile Edge Computing: An Edge-Learning
Perspective for Beyond 5G [24.275726025778482]
A novel edge computing-assisted federated learning framework is proposed in this paper.
The communication constraints between IoT devices and edge servers are taken into account.
Various IoT devices have different training datasets which have varying influence on the accuracy of the global model derived at the edge server.
arXiv Detail & Related papers (2020-07-15T22:58:47Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z) - Adaptive Anomaly Detection for IoT Data in Hierarchical Edge Computing [71.86955275376604]
We propose an adaptive anomaly detection approach for hierarchical edge computing (HEC) systems to solve this problem.
We design an adaptive scheme to select one of the models based on the contextual information extracted from input data, to perform anomaly detection.
We evaluate our proposed approach using a real IoT dataset, and demonstrate that it reduces detection delay by 84% while maintaining almost the same accuracy as compared to offloading detection tasks to the cloud.
arXiv Detail & Related papers (2020-01-10T05:29:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.