E-Tree Learning: A Novel Decentralized Model Learning Framework for Edge AI
- URL: http://arxiv.org/abs/2008.01553v2
- Date: Thu, 14 Jan 2021 05:15:24 GMT
- Title: E-Tree Learning: A Novel Decentralized Model Learning Framework for Edge AI
- Authors: Lei Yang, Yanyan Lu, Jiannong Cao, Jiaming Huang, Mingjin Zhang
- Abstract summary: Edge empowered AI, namely Edge AI, has been proposed to support AI model learning and deployment at the network edge closer to the data sources.
In this paper, we propose a novel decentralized model learning approach, namely E-Tree, which makes use of a well-designed tree structure imposed on the edge devices.
- Score: 18.53971408174349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditionally, AI models are trained on the central cloud with data collected
from end devices. This leads to high communication cost, long response time and
privacy concerns. Recently Edge empowered AI, namely Edge AI, has been proposed
to support AI model learning and deployment at the network edge closer to the
data sources. Existing research including federated learning adopts a
centralized architecture for model learning where a central server aggregates
the model updates from the clients/workers. The centralized architecture has
drawbacks such as performance bottleneck, poor scalability and single point of
failure. In this paper, we propose a novel decentralized model learning
approach, namely E-Tree, which makes use of a well-designed tree structure
imposed on the edge devices. The tree structure and the locations and orders of
aggregation on the tree are optimally designed to improve the training
convergence and model accuracy. In particular, we design an efficient device
clustering algorithm, named KMA, for E-Tree by taking into account the data
distribution on the devices as well as the network distance. Evaluation
results show that E-Tree significantly outperforms benchmark approaches such as
federated learning and Gossip learning under Non-IID data in terms of model
accuracy and convergence.
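The hierarchical aggregation the abstract describes can be illustrated with a minimal sketch: locally trained models sit at the leaves of a device tree, and each inner node merges its children by sample-weighted averaging. All names (`aggregate`, the dict-based node layout) are illustrative, and the paper's key contribution, optimizing the tree shape and aggregation order, is not modeled here.

```python
# Minimal sketch of bottom-up model aggregation over a device tree.
# The plain weighted average is an assumption for illustration; E-Tree
# additionally optimizes where and in what order aggregation happens.

def aggregate(node):
    """Return (weights, n_samples) aggregated over the subtree at node.

    node = {"model": [floats], "n": sample_count, "children": [...]}
    Leaves hold locally trained models; inner nodes average their
    children weighted by the number of samples below each child.
    """
    if not node["children"]:
        return node["model"], node["n"]
    parts = [aggregate(c) for c in node["children"]]
    total = sum(n for _, n in parts)
    dim = len(parts[0][0])
    merged = [sum(w[i] * n for w, n in parts) / total for i in range(dim)]
    return merged, total

# Example: two edge clusters, each holding two devices.
leaf = lambda m, n: {"model": m, "n": n, "children": []}
tree = {"model": None, "n": 0, "children": [
    {"model": None, "n": 0,
     "children": [leaf([1.0, 0.0], 10), leaf([3.0, 0.0], 10)]},
    {"model": None, "n": 0,
     "children": [leaf([0.0, 4.0], 20), leaf([0.0, 0.0], 20)]},
]}
merged, total = aggregate(tree)  # cluster averages, then the root average
```

Because aggregation happens inside each cluster first, most traffic stays local, which is the structural advantage a tree has over a single central server.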
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Federated Learning Architectures: A Performance Evaluation with Crop Yield Prediction Application [22.173280246644044]
This paper implements centralized and decentralized federated learning frameworks for crop yield prediction based on Long Short-Term Memory Network.
The performance of the two frameworks is evaluated in terms of prediction accuracy, precision, recall, F1-Score, and training time.
arXiv Detail & Related papers (2024-08-06T07:05:56Z) - EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence [0.0]
Deep edge intelligence aims to deploy deep learning models that demand computationally expensive training in the edge network with limited computational power.
This study proposes a convolutional ensemble learning approach, coined EdgeConvEns, that facilitates training heterogeneous weak models on edge and learning to ensemble them where data on edge are heterogeneously distributed.
arXiv Detail & Related papers (2023-07-25T20:07:32Z) - NCART: Neural Classification and Regression Tree for Tabular Data [0.5439020425819]
NCART is a modified version of Residual Networks that replaces fully-connected layers with multiple differentiable oblivious decision trees.
It maintains its interpretability while benefiting from the end-to-end capabilities of neural networks.
The simplicity of the NCART architecture makes it well-suited for datasets of varying sizes.
arXiv Detail & Related papers (2023-07-23T01:27:26Z) - Privacy-Preserving Ensemble Infused Enhanced Deep Neural Network
Framework for Edge Cloud Convergence [18.570317928688606]
We propose a privacy-preserving ensemble infused enhanced Deep Neural Network (DNN) based learning framework in this paper.
In this convergence, the edge server is used both to store IoT-produced bioimages and to host the algorithm for local model training.
We conduct several experiments to evaluate the performance of our proposed framework.
arXiv Detail & Related papers (2023-05-16T07:01:44Z) - Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized
Devices [19.27111697495379]
Graph neural networks (GNNs) have been widely deployed in real-world networked applications and systems.
We propose the first federated GNN framework called Lumos that supports supervised and unsupervised learning.
Based on the constructed tree for each client, a decentralized tree-based GNN trainer is proposed to support versatile training.
arXiv Detail & Related papers (2023-03-01T13:27:06Z) - Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular, neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z) - An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
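The FedAvg algorithm that this abstract re-derives can be sketched in a few lines: each client takes local optimization steps from the current global parameters, and the server averages the results weighted by local data size. The quadratic local loss and all names here are illustrative assumptions, not the paper's code.

```python
# A minimal FedAvg round: server-side weighted averaging of
# client-specific parameters. The one-step quadratic local update is a
# stand-in for real local training.

def client_update(theta, data, lr=0.1):
    # One gradient step on 0.5*(theta - x)^2 averaged over local data.
    grad = sum(theta - x for x in data) / len(data)
    return theta - lr * grad

def fedavg_round(theta, client_data):
    updates = [(client_update(theta, d), len(d)) for d in client_data]
    total = sum(n for _, n in updates)
    return sum(t * n for t, n in updates) / total

theta = 0.0
clients = [[1.0, 1.0], [3.0]]  # two clients with different local data
for _ in range(200):
    theta = fedavg_round(theta, clients)
# theta approaches the weighted mean of all data, (1 + 1 + 3) / 3
```

In the EM view the abstract describes, the local step plays the role of the (hard) E-step and the server average is the M-step under a Gaussian prior.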
arXiv Detail & Related papers (2021-11-19T12:58:59Z) - Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
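The roofline model used as a baseline above has a simple closed form: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity, and estimated time follows by dividing the work by that bound. The hardware numbers below are made up for illustration.

```python
# Hedged sketch of the roofline baseline: a layer is either
# compute-bound (peak FLOP/s) or memory-bound (bandwidth * intensity).

def roofline_time(flops, bytes_moved, peak_flops, bandwidth):
    """Estimate execution time (seconds) under the roofline model."""
    intensity = flops / bytes_moved              # FLOPs per byte
    attainable = min(peak_flops, bandwidth * intensity)
    return flops / attainable

# A memory-bound layer: intensity of 1 FLOP/byte on a 100 GB/s device.
t = roofline_time(flops=1e9, bytes_moved=1e9,
                  peak_flops=10e12, bandwidth=100e9)
```

Stacked or statistical models, as in the paper, refine this bound by learning per-layer corrections that a pure roofline estimate cannot capture.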
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.