Incentive Mechanism Design for Resource Sharing in Collaborative Edge
Learning
- URL: http://arxiv.org/abs/2006.00511v1
- Date: Sun, 31 May 2020 12:45:06 GMT
- Title: Incentive Mechanism Design for Resource Sharing in Collaborative Edge
Learning
- Authors: Wei Yang Bryan Lim, Jer Shyuan Ng, Zehui Xiong, Dusit Niyato, Cyril
Leung, Chunyan Miao, Qiang Yang
- Abstract summary: In 5G and Beyond networks, Artificial Intelligence applications are expected to be increasingly ubiquitous.
This necessitates a paradigm shift from the current cloud-centric model training approach to the Edge Computing based collaborative learning scheme known as edge learning.
- Score: 106.51930957941433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In 5G and Beyond networks, Artificial Intelligence applications are expected
to be increasingly ubiquitous. This necessitates a paradigm shift from the
current cloud-centric model training approach to the Edge Computing based
collaborative learning scheme known as edge learning, in which model training
is executed at the edge of the network. In this article, we first introduce the
principles and technologies of collaborative edge learning. Then, we establish
that a successful, scalable implementation of edge learning requires the
communication, caching, computation, and learning resources (3C-L) of end
devices and edge servers to be leveraged jointly in an efficient manner.
However, users may not consent to contribute their resources without receiving
adequate compensation. In consideration of the heterogeneity of edge nodes,
e.g., in terms of available computation resources, we discuss the challenges of
incentive mechanism design to facilitate resource sharing for edge learning.
Furthermore, we present a case study involving optimal auction design using
Deep Learning to price fresh data contributed for edge learning. The
performance evaluation shows the revenue maximizing properties of our proposed
auction over the benchmark schemes.
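For readers unfamiliar with deep-learning-based auctions, the sketch below illustrates the general idea behind the case study in a MyersonNet-like style: a learned monotone "virtual valuation" transform scores each bid, the highest positive score wins, and the payment is the inverse transform of the strongest competing score. The single shared transform, uniform valuation distribution, softmax relaxation, and hyperparameters are illustrative assumptions, not the authors' exact design.
```python
# Illustrative sketch only: a MyersonNet-style single-item auction for pricing data
# contributions. One transform is shared by all (symmetric) bidders for brevity.
import torch
import torch.nn as nn

class MonotoneTransform(nn.Module):
    """Max-of-min of increasing linear pieces: a learnable, strictly monotone map."""
    def __init__(self, groups=5, units=5):
        super().__init__()
        self.log_w = nn.Parameter(0.1 * torch.randn(groups, units))  # log-slopes (slopes > 0)
        self.b = nn.Parameter(0.1 * torch.randn(groups, units))      # intercepts

    def forward(self, x):                # virtual valuation phi(x)
        lin = torch.exp(self.log_w) * x.unsqueeze(-1).unsqueeze(-1) + self.b
        return lin.min(dim=-1).values.max(dim=-1).values

    def inverse(self, y):                # phi^{-1}(y), used to compute payments
        lin = (y.unsqueeze(-1).unsqueeze(-1) - self.b) / torch.exp(self.log_w)
        return lin.max(dim=-1).values.min(dim=-1).values

def expected_revenue(net, bids, temp=50.0):
    phi = net(bids)                                            # transformed (virtual) bids
    phi_r = torch.cat([phi, torch.zeros(phi.size(0), 1)], 1)   # extra zero column acts as a reserve
    alloc = torch.softmax(temp * phi_r, dim=1)[:, :-1]         # soft winner selection (training relaxation)
    # A winner pays the inverse transform of the highest competing virtual bid (or the reserve).
    competing = torch.stack(
        [torch.cat([phi_r[:, :i], phi_r[:, i + 1:]], 1).max(1).values for i in range(bids.size(1))], 1)
    payments = net.inverse(competing)
    return (alloc * payments).sum(1).mean()

# Training loop: maximize expected revenue over sampled data-contribution valuations.
net = MonotoneTransform()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(2000):
    bids = torch.rand(256, 3)            # assumed: 3 data sellers, valuations ~ U[0, 1]
    loss = -expected_revenue(net, bids)
    opt.zero_grad(); loss.backward(); opt.step()
```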
Related papers
- To Train or Not to Train: Balancing Efficiency and Training Cost in Deep Reinforcement Learning for Mobile Edge Computing [15.079887992932692]
We present a new algorithm to dynamically select when to train a Deep Reinforcement Learning (DRL) agent that allocates resources.
Our method is highly general, as it can be directly applied to any scenario involving a training overhead.
arXiv Detail & Related papers (2024-11-11T16:02:12Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that (i) can be integrated without the need for external software solvers, (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks, and (iii) open up novel perspectives.
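As a generic illustration of point (i) only (the paper's actual dynamics are not reproduced here), a learned differential equation can be integrated with a hand-rolled explicit Euler loop, with no external ODE-solver dependency:
```python
# Generic illustration of integrating a learned ODE with a plain Euler step
# (no external solver). The dynamics network and step size are placeholders,
# not the Hamiltonian Learning equations from the paper.
import torch
import torch.nn as nn

dynamics = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))  # dh/dt = f(h)

def integrate(h0, dt=0.01, steps=100):
    """Explicit Euler integration of h' = f(h), one small step at a time."""
    h = h0
    for _ in range(steps):
        h = h + dt * dynamics(h)   # forward Euler update; gradients flow through every step
    return h

h_final = integrate(torch.randn(8, 4))   # batch of 8 initial states
```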
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence [0.0]
Deep edge intelligence aims to deploy deep learning models, which demand computationally expensive training, in an edge network with limited computational power.
This study proposes a convolutional ensemble learning approach, coined EdgeConvEns, which trains heterogeneous weak models at the edge and learns to ensemble them when the data at the edge are heterogeneously distributed.
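A rough sketch of the ensembling idea follows; the tiny fully connected models, synthetic data, and server-side combiner are stand-ins for the convolutional weak learners and the actual ensembling scheme described in the paper.
```python
# Rough sketch (assumed details): each edge device fits a small "weak" model on its
# own local data, then a server learns how to combine the weak models' outputs.
import torch
import torch.nn as nn

def train_weak_model(x, y, epochs=50):
    model = nn.Sequential(nn.Linear(x.size(1), 16), nn.ReLU(), nn.Linear(16, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Heterogeneous local datasets on three edge devices (synthetic stand-ins).
local_data = [(torch.randn(200, 8), torch.randint(0, 2, (200,))) for _ in range(3)]
weak_models = [train_weak_model(x, y) for x, y in local_data]

# Server side: learn ensemble weights over the concatenated weak-model logits.
x_val, y_val = torch.randn(300, 8), torch.randint(0, 2, (300,))
with torch.no_grad():
    stacked = torch.cat([m(x_val) for m in weak_models], dim=1)   # (300, 3 models * 2 logits)
combiner = nn.Linear(stacked.size(1), 2)
opt = torch.optim.Adam(combiner.parameters(), lr=1e-2)
for _ in range(100):
    loss = nn.functional.cross_entropy(combiner(stacked), y_val)
    opt.zero_grad(); loss.backward(); opt.step()
```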
arXiv Detail & Related papers (2023-07-25T20:07:32Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
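A minimal sketch of analog over-the-air aggregation is given below; the Gaussian noise level and ideal power alignment are simplifying assumptions. All clients transmit their updates in the same slot, the channel sums the waveforms, and the server recovers the average from one superposed signal.
```python
# Minimal sketch (assumptions: additive Gaussian receiver noise, ideal power control)
# of analog over-the-air aggregation for federated edge learning.
import torch

def over_the_air_average(client_updates, noise_std=0.01):
    """client_updates: list of equally shaped tensors (local gradients / model deltas)."""
    superposed = torch.stack(client_updates).sum(dim=0)                  # summation done "by the channel"
    superposed = superposed + noise_std * torch.randn_like(superposed)   # receiver noise
    return superposed / len(client_updates)                              # rescale to the mean update

# Usage: aggregate 10 clients' 1000-dimensional updates in a single transmission slot.
updates = [torch.randn(1000) for _ in range(10)]
global_update = over_the_air_average(updates)
```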
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Edge Computing for Semantic Communication Enabled Metaverse: An Incentive Mechanism Design [72.27143788103245]
SemCom and edge computing are disruptive solutions that address the emerging requirements of massive data communication, bandwidth efficiency, and low-latency data processing in the Metaverse.
A deep learning (DL)-based auction has recently been proposed as an incentive mechanism that maximizes revenue while preserving important economic properties.
We present the design of the DL-based auction for edge resource allocation in SemCom-enabled Metaverse.
arXiv Detail & Related papers (2022-12-13T10:29:41Z)
- Edge-Cloud Polarization and Collaboration: A Comprehensive Survey [61.05059817550049]
We conduct a systematic review of both cloud and edge AI.
We are the first to set up a collaborative learning mechanism for cloud and edge modeling.
We discuss the potential of, and practical experience with, several ongoing advanced edge AI topics.
arXiv Detail & Related papers (2021-11-11T05:58:23Z)
- State-of-the-art Techniques in Deep Edge Intelligence [0.0]
Edge Intelligence (EI) has quickly emerged as a powerful alternative to enable learning using the concepts of Edge Computing.
In this article, we provide an overview of the major constraints in operationalizing Deep Edge Intelligence (DEI).
arXiv Detail & Related papers (2020-08-03T12:17:23Z)
- OL4EL: Online Learning for Edge-cloud Collaborative Learning on Heterogeneous Edges with Resource Constraints [18.051084376447655]
We propose a novel framework of 'learning to learn' for effective Edge Learning (EL) on heterogeneous edges with resource constraints.
We propose an Online Learning for EL (OL4EL) framework based on the budget-limited multi-armed bandit model.
OL4EL supports both synchronous and asynchronous learning patterns, and can be used for both supervised and unsupervised learning tasks.
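The sketch below illustrates a budget-limited multi-armed bandit of the kind OL4EL builds on; the UCB-per-cost selection rule, arm costs, and reward model are illustrative assumptions rather than the paper's exact algorithm.
```python
# Illustrative budget-limited multi-armed bandit: each "arm" is a candidate edge/cloud
# resource-allocation decision with a known cost; the learner keeps pulling the arm with
# the best upper confidence bound on reward per unit cost until the budget runs out.
import math, random

def budgeted_ucb(rewards_fn, costs, budget):
    n_arms = len(costs)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    spent, t = 0.0, 0
    while True:
        t += 1
        # Pull every arm once first, then follow the UCB-on-efficiency rule.
        if t <= n_arms:
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: (means[a] + math.sqrt(2 * math.log(t) / counts[a])) / costs[a])
        if spent + costs[arm] > budget:
            break
        r = rewards_fn(arm)                      # observed learning utility (e.g., accuracy gain)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        spent += costs[arm]
    return means, counts

# Usage with a toy reward model: arm 2 has the highest mean reward but also the highest cost.
true_means = [0.3, 0.5, 0.8]
means, counts = budgeted_ucb(lambda a: random.gauss(true_means[a], 0.1),
                             costs=[1.0, 1.5, 2.0], budget=200)
```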
arXiv Detail & Related papers (2020-04-22T03:51:58Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the remaining open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Combining Federated and Active Learning for Communication-efficient Distributed Failure Prediction in Aeronautics [0.0]
We present a new centralized distributed learning algorithm that relies on the learning paradigms of Active Learning and Federated Learning.
We evaluate this method on a public benchmark and show that its precision is very close to the state-of-the-art performance of non-distributed learning.
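A rough sketch of combining the two paradigms is shown below; the uncertainty-based query step, toy labeling oracle, and FedAvg aggregation are assumptions for illustration, not the authors' exact algorithm.
```python
# Rough sketch (assumed details): in each round, every client trains only on the samples
# its current model is least certain about, and the server averages the resulting weights.
import torch
import torch.nn as nn

def uncertain_subset(model, x, k):
    """Pick the k samples with the highest predictive entropy (most informative)."""
    with torch.no_grad():
        p = torch.softmax(model(x), dim=1)
        entropy = -(p * p.clamp_min(1e-9).log()).sum(dim=1)
    return x[entropy.topk(k).indices]

def federated_active_round(global_model, client_data, labeler, k=32, lr=1e-2):
    client_states = []
    for x_pool in client_data:
        local = nn.Linear(x_pool.size(1), 2)
        local.load_state_dict(global_model.state_dict())
        x_sel = uncertain_subset(local, x_pool, k)        # active-learning query step
        y_sel = labeler(x_sel)                            # e.g., an on-board diagnostic oracle
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(20):
            loss = nn.functional.cross_entropy(local(x_sel), y_sel)
            opt.zero_grad(); loss.backward(); opt.step()
        client_states.append(local.state_dict())
    # FedAvg: average the clients' weights into the new global model.
    avg = {key: torch.stack([s[key] for s in client_states]).mean(0) for key in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Usage with synthetic sample pools and a toy labeling rule.
clients = [torch.randn(500, 10) for _ in range(4)]
model = nn.Linear(10, 2)
model = federated_active_round(model, clients, labeler=lambda x: (x[:, 0] > 0).long())
```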
arXiv Detail & Related papers (2020-01-21T13:17:00Z)