OL4EL: Online Learning for Edge-cloud Collaborative Learning on
Heterogeneous Edges with Resource Constraints
- URL: http://arxiv.org/abs/2004.10387v2
- Date: Thu, 23 Apr 2020 08:13:55 GMT
- Title: OL4EL: Online Learning for Edge-cloud Collaborative Learning on
Heterogeneous Edges with Resource Constraints
- Authors: Qing Han, Shusen Yang, Xuebin Ren, Cong Zhao, Jingqi Zhang, Xinyu Yang
- Abstract summary: We propose a novel framework of 'learning to learn' for effective Edge Learning (EL) on heterogeneous edges with resource constraints.
We propose an Online Learning for EL (OL4EL) framework based on the budget-limited multi-armed bandit model.
OL4EL supports both synchronous and asynchronous learning patterns, and can be used for both supervised and unsupervised learning tasks.
- Score: 18.051084376447655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed machine learning (ML) at the network edge is a promising
paradigm that can preserve both network bandwidth and the privacy of data
providers. However, heterogeneous and limited computation and communication
resources on edge servers (or edges) pose great challenges to distributed ML
and motivate a new paradigm of Edge Learning (i.e., edge-cloud collaborative
machine learning).
In this article, we propose a novel framework of 'learning to learn' for
effective Edge Learning (EL) on heterogeneous edges with resource constraints.
We first model the dynamic determination of the collaboration strategy (i.e.,
the allocation of local iterations at edge servers and global aggregations on
the Cloud during the collaborative learning process) as an online optimization
problem that balances EL performance against the resource consumption of edge
servers. Then, we propose an Online Learning for EL (OL4EL)
framework based on the budget-limited multi-armed bandit model. OL4EL supports
both synchronous and asynchronous learning patterns, and can be used for both
supervised and unsupervised learning tasks. To evaluate the performance of
OL4EL, we conducted both real-world testbed experiments and extensive
simulations based on Docker containers, where both Support Vector Machine and
K-means were considered as use cases. Experimental results demonstrate that
OL4EL significantly outperforms state-of-the-art EL and other collaborative ML
approaches in terms of the trade-off between learning performance and resource
consumption.
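The core decision OL4EL faces (how many local iterations to run on an edge server before the next global aggregation, under a resource budget) can be framed as the budget-limited multi-armed bandit problem the abstract mentions. The following is a minimal illustrative sketch, not the authors' actual algorithm: the class name, the reward-per-cost UCB index, and all costs and rewards are assumptions chosen for the example.

```python
import math
import random

class BudgetLimitedUCB:
    """Sketch of a budget-limited multi-armed bandit (UCB-style).

    Each arm is a candidate number of local iterations to run on an edge
    server before the next global aggregation on the Cloud. Pulling an arm
    consumes part of a fixed resource budget and returns a reward, e.g.
    observed loss reduction per round. All names here are illustrative.
    """

    def __init__(self, arms, costs, budget):
        self.arms = list(arms)
        self.costs = dict(zip(arms, costs))
        self.budget = budget
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean rewards
        self.t = 0

    def select(self):
        """Return the next arm to play, or None once the budget is spent."""
        affordable = [a for a in self.arms if self.costs[a] <= self.budget]
        if not affordable:
            return None
        for a in affordable:                # play each affordable arm once
            if self.counts[a] == 0:
                return a
        # UCB index on reward per unit cost, a common budgeted-bandit choice
        return max(affordable,
                   key=lambda a: self.values[a] / self.costs[a]
                   + math.sqrt(2.0 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        """Charge the arm's cost and fold the reward into its running mean."""
        self.budget -= self.costs[arm]
        self.counts[arm] += 1
        self.t += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy run with simulated rewards; in OL4EL the reward would come from the
# measured training progress of the collaborative learning process.
random.seed(0)
bandit = BudgetLimitedUCB(arms=[1, 5, 10], costs=[1.0, 4.0, 7.0], budget=60.0)
while (arm := bandit.select()) is not None:
    reward = {1: 0.2, 5: 0.6, 10: 0.65}[arm] + random.uniform(-0.05, 0.05)
    bandit.update(arm, reward)
```

Because `select` only returns arms whose cost still fits in the remaining budget, the loop terminates exactly when no further pull is affordable, which is the trade-off between learning performance and resource consumption the paper optimizes.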
Related papers
- Leveraging Federated Learning and Edge Computing for Recommendation Systems within Cloud Computing Networks [3.36271475827981]
A key technology for edge intelligence is the privacy-preserving machine learning paradigm known as Federated Learning (FL), which enables data owners to train models without transferring raw data to third-party servers.
To reduce node failures and device exits, a Hierarchical Federated Learning (HFL) framework is proposed, where a designated cluster leader supports the data owner through intermediate model aggregation.
To mitigate the impact of soft clicks on user quality of experience (QoE), the authors model the user QoE as a comprehensive system cost.
arXiv Detail & Related papers (2024-03-05T17:58:26Z)
- Optimal Resource Allocation for U-Shaped Parallel Split Learning [15.069132131105063]
Split learning (SL) has emerged as a promising approach for model training without revealing the raw data samples from the data owners.
Traditional SL inevitably leaks label privacy because the tail model (containing the last layers) must be placed on the server.
One promising solution is a U-shaped architecture that keeps both the early layers and the last layers on the user side.
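The U-shaped split described above can be sketched as a three-stage forward pass; the layer sizes and function names are hypothetical, chosen only to show which parts of the computation stay on the client.

```python
import math
import random

random.seed(0)

def linear(x, w):
    """Plain matrix-vector product; w has shape (len(x), out_dim)."""
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*w)]

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical layer sizes, for illustration only.
W_head = rand_matrix(8, 16)   # early layers: stay on the client
W_body = rand_matrix(16, 16)  # middle layers: offloaded to the server
W_tail = rand_matrix(16, 2)   # last layers: stay on the client

def client_head(x):
    # The client sends only these activations ("smashed data"), never raw x.
    return [math.tanh(v) for v in linear(x, W_head)]

def server_body(h):
    # The server computes the middle of the network without ever seeing
    # raw inputs or labels.
    return [math.tanh(v) for v in linear(h, W_body)]

def client_tail(z):
    # Logits, and therefore the loss and the labels, remain on the client.
    return linear(z, W_tail)

x = [random.gauss(0, 1) for _ in range(8)]  # one toy client sample
logits = client_tail(server_body(client_head(x)))
```

Because the tail runs on the user side, the loss is computed against labels that never leave the client, which is exactly the label-privacy leak the U-shaped design avoids.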
arXiv Detail & Related papers (2023-08-17T10:07:45Z)
- EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence [0.0]
Deep edge intelligence aims to deploy deep learning models that demand computationally expensive training in the edge network with limited computational power.
This study proposes a convolutional ensemble learning approach, coined EdgeConvEns, that facilitates training heterogeneous weak models on edge and learning to ensemble them where data on edge are heterogeneously distributed.
arXiv Detail & Related papers (2023-07-25T20:07:32Z)
- Towards Cooperative Federated Learning over Heterogeneous Edge/Fog Networks [49.19502459827366]
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
Traditional implementations of FL have largely neglected the potential for inter-network cooperation.
We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions.
arXiv Detail & Related papers (2023-03-15T04:41:36Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
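The FL aggregation step described above (each data holder trains locally, then releases its model to a central server) can be sketched as size-weighted averaging in the style of FedAvg; the function name and the weighting scheme are illustrative, not this paper's exact method.

```python
def fedavg(client_weights, client_sizes):
    """Server-side federated averaging: each locally trained weight vector
    is weighted by its client's dataset size. A sketch only; real FL
    implementations average full model state dicts layer by layer."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two hypothetical clients: one with 10 samples, one with 30.
global_w = fedavg([[1.0, 1.0], [3.0, 3.0]], [10, 30])  # -> [2.5, 2.5]
```

Note the contrast with SL: here clients release whole model parameters, whereas in SL they release per-sample cut-layer activations and wait for the server's response each step.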
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes carried out during CE-FL and conduct an analytical study of training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients [3.983055670167878]
Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks.
In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture where multiple edge servers collaborate to incorporate more data from edge devices in training.
arXiv Detail & Related papers (2021-12-09T07:39:31Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Incentive Mechanism Design for Resource Sharing in Collaborative Edge Learning [106.51930957941433]
In 5G and Beyond networks, Artificial Intelligence applications are expected to be increasingly ubiquitous.
This necessitates a paradigm shift from the current cloud-centric model training approach to the Edge Computing based collaborative learning scheme known as edge learning.
arXiv Detail & Related papers (2020-05-31T12:45:06Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.