A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading
- URL: http://arxiv.org/abs/2309.00907v1
- Date: Sat, 2 Sep 2023 11:01:16 GMT
- Authors: Ruihuai Liang, Bo Yang, Zhiwen Yu, Xuelin Cao, Derrick Wing Kwan Ng,
Chau Yuen
- Abstract summary: We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs). MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
- Score: 62.34538208323411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computation offloading has become a popular solution to support
computationally intensive and latency-sensitive applications by transferring
computing tasks to mobile edge servers (MESs) for execution, which is known as
mobile/multi-access edge computing (MEC). To improve the MEC performance, it is
required to design an optimal offloading strategy that includes offloading
decision (i.e., whether offloading or not) and computational resource
allocation of MEC. The design can be formulated as a mixed-integer nonlinear
programming (MINLP) problem, which is generally NP-hard and its effective
solution can be obtained by performing online inference through a well-trained
deep neural network (DNN) model. However, when the system environment changes
dynamically, the DNN model may lose efficacy due to drift in its input
parameters, which degrades the model's generalization ability. To
address this unique challenge, in this paper, we propose a multi-head ensemble
multi-task learning (MEMTL) approach with a shared backbone and multiple
prediction heads (PHs). Specifically, the shared backbone remains fixed while
the PHs are trained, and the PHs' outputs are ensembled, thereby
significantly reducing the required training overhead and improving the
inference performance. As a result, the joint optimization problem for
offloading decision and resource allocation can be efficiently solved even in a
time-varying wireless environment. Experimental results show that the proposed
MEMTL outperforms benchmark methods in both inference accuracy and mean
squared error without requiring additional training data.
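The shared-backbone/multi-head ensemble described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, the number of heads, and the plain NumPy linear/ReLU layers are all hypothetical stand-ins for a trained DNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, W, b):
    """Frozen shared feature extractor: one ReLU layer (hypothetical sizes)."""
    return np.maximum(0.0, x @ W + b)

# Frozen backbone parameters: 4 input features -> 8 shared features.
# In MEMTL these stay fixed while the heads are trained.
W_bb = rng.normal(size=(4, 8))
b_bb = np.zeros(8)

# Three prediction heads, each a linear map from the shared features to
# 2 outputs (e.g., an offloading score and a resource-allocation value).
heads = [rng.normal(size=(8, 2)) for _ in range(3)]

def memtl_predict(x):
    """Ensemble inference: run every head on the shared features, then average."""
    z = backbone(x, W_bb, b_bb)            # shared features (backbone is fixed)
    outs = np.stack([z @ H for H in heads])  # (num_heads, batch, 2)
    return outs.mean(axis=0)               # ensemble by averaging head outputs

x = rng.normal(size=(1, 4))                # one task's input parameters
y = memtl_predict(x)
print(y.shape)                             # (1, 2)
```

Because only the small heads are retrained when the environment drifts, the training overhead is limited to the head parameters; the averaging step is one simple choice of ensembling rule.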
Related papers
- Towards A Flexible Accuracy-Oriented Deep Learning Module Inference Latency Prediction Framework for Adaptive Optimization Algorithms [0.49157446832511503]
This paper presents a flexible, accuracy-oriented framework for predicting the inference latency of deep learning modules.
It hosts a set of customizable input parameters to train multiple different RMs per DNN module.
It automatically selects a set of trained RMs leading to the highest possible overall prediction accuracy.
arXiv Detail & Related papers (2023-12-11T15:15:48Z) - Training Latency Minimization for Model-Splitting Allowed Federated Edge
Learning [16.8717239856441]
We propose a model-splitting allowed FL (SFL) framework to alleviate the shortage of computing power faced by clients when training deep neural networks (DNNs) with federated learning (FL).
Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session.
To solve this mixed-integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut layer and the other parameters of an AI model, thereby transforming the training latency minimization problem (TLMP) into a continuous one.
arXiv Detail & Related papers (2023-07-21T12:26:42Z) - Scalable Resource Management for Dynamic MEC: An Unsupervised
Link-Output Graph Neural Network Approach [36.32772317151467]
Deep learning has been successfully adopted in mobile edge computing (MEC) to optimize task offloading and resource allocation.
The dynamics of edge networks raise two challenges in neural network (NN)-based optimization methods: low scalability and high training costs.
In this paper, a novel link-output GNN (LOGNN)-based resource management approach is proposed to flexibly optimize the resource allocation in MEC.
arXiv Detail & Related papers (2023-06-15T08:21:41Z) - Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
ν-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge
Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Partitioning sparse deep neural networks for scalable training and
inference [8.282177703075453]
State-of-the-art deep neural networks (DNNs) have significant computational and data management requirements.
Sparsification and pruning methods are shown to be effective in removing a large fraction of connections in DNNs.
The resulting sparse networks present unique challenges to further improve the computational efficiency of training and inference in deep learning.
arXiv Detail & Related papers (2021-04-23T20:05:52Z) - Optimization-driven Machine Learning for Intelligent Reflecting Surfaces
Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape wireless channels by controlling the phase shifts of individual scattering elements.
Due to the large number of scattering elements, passive beamforming is typically challenged by high computational complexity.
In this article, we focus on machine learning (ML) approaches to performance optimization in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z) - Deep Multi-Task Learning for Cooperative NOMA: System Design and
Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z) - Computation Offloading in Multi-Access Edge Computing Networks: A
Multi-Task Learning Approach [7.203439085947118]
Multi-access edge computing (MEC) has already shown its potential in enabling mobile devices to run computation-intensive applications by offloading some tasks to a nearby access point (AP) integrated with a MEC server (MES).
However, due to the varying network conditions and limited computation resources of the MES, the offloading decisions taken by a mobile device and the computational resources allocated by the MES may fail to achieve the lowest cost.
We propose a dynamic offloading framework for the MEC network, in which uplink non-orthogonal multiple access (NOMA) is used to enable multiple devices to upload their tasks.
arXiv Detail & Related papers (2020-06-29T15:11:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.