Learning to Optimize Resource Assignment for Task Offloading in Mobile
Edge Computing
- URL: http://arxiv.org/abs/2203.09954v1
- Date: Tue, 15 Mar 2022 10:17:29 GMT
- Title: Learning to Optimize Resource Assignment for Task Offloading in Mobile
Edge Computing
- Authors: Yurong Qian, Jindan Xu, Shuhan Zhu, Wei Xu, Lisheng Fan, and George K.
Karagiannidis
- Abstract summary: We propose an intelligent BnB (IBnB) approach which applies deep learning (DL) to learn the pruning strategy of the BnB approach.
By using this learning scheme, the structure of the BnB approach ensures near-optimal performance and the DL-based pruning strategy significantly reduces the complexity.
- Score: 35.69975917554333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider a multiuser mobile edge computing (MEC) system,
where a mixed-integer offloading strategy is used to assist the resource
assignment for task offloading. Although the conventional branch and bound
(BnB) approach can be applied to solve this problem, a huge burden of
computational complexity arises which limits the application of BnB. To address
this issue, we propose an intelligent BnB (IBnB) approach which applies deep
learning (DL) to learn the pruning strategy of the BnB approach. By using this
learning scheme, the structure of the BnB approach ensures near-optimal
performance while the DL-based pruning strategy significantly reduces the
complexity. Numerical results verify that the proposed IBnB approach achieves
optimal performance with complexity reduced by over 80%.
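The idea of a BnB search whose pruning decisions come from a learned model can be sketched as follows. This is an illustrative toy, not the paper's formulation: the cost model, capacity budget, and the stubbed `keep_prob` scorer (standing in for a trained DL classifier) are all assumptions.

```python
def intelligent_bnb(local_cost, offload_cost, offload_load, capacity, keep_prob):
    """Branch and bound over binary offloading decisions, with a pluggable
    pruning rule standing in for the learned (DL-based) pruning strategy."""
    n = len(local_cost)
    best_cost = sum(local_cost)            # computing everything locally is feasible
    best_assign = [0] * n
    # node: (lower bound, depth, partial assignment, used capacity, cost so far)
    stack = [(0.0, 0, [], 0.0, 0.0)]
    while stack:
        lb, depth, assign, used, cost = stack.pop()
        if lb >= best_cost:
            continue                       # classical bound-based pruning
        if depth == n:
            best_cost, best_assign = cost, assign
            continue
        for choice in (0, 1):              # 0 = local execution, 1 = offload
            load = used + (offload_load[depth] if choice else 0.0)
            if load > capacity:
                continue
            c = cost + (offload_cost[depth] if choice else local_cost[depth])
            # optimistic bound: every remaining task pays its cheaper option
            bound = c + sum(min(local_cost[i], offload_cost[i])
                            for i in range(depth + 1, n))
            # learned pruning: a trained classifier would score how likely this
            # branch is to contain the optimum; here keep_prob is a stub
            if bound < best_cost and keep_prob(bound, best_cost) > 0.5:
                stack.append((bound, depth + 1, assign + [choice], load, c))
    return best_cost, best_assign

def keep_prob(bound, incumbent):
    # Stub for the learned model: keep branches whose bound clearly beats the
    # incumbent, discarding marginal ones (hence only near-optimal search).
    return 1.0 if bound < 0.95 * incumbent else 0.4
```

Because pruning is probabilistic rather than exact, the search may discard branches a classical BnB would keep, which is the source of both the speedup and the (near-)optimality trade-off.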
Related papers
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
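The shared-backbone, multi-head pattern this abstract describes can be sketched minimally as below; the dimensions, ReLU backbone, and averaging ensemble are illustrative assumptions, not MEMTL's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiHeadModel:
    """Shared backbone feeding several prediction heads, ensembled by averaging."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_heads):
        # one shared feature extractor ...
        self.W_shared = 0.1 * rng.standard_normal((in_dim, hidden_dim))
        # ... and independent linear heads on top of it
        self.heads = [0.1 * rng.standard_normal((hidden_dim, out_dim))
                      for _ in range(num_heads)]

    def forward(self, x):
        h = relu(x @ self.W_shared)        # shared backbone features
        outputs = [h @ w for w in self.heads]
        return np.mean(outputs, axis=0)    # simple ensemble over the heads
```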
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Planning to the Information Horizon of BAMDPs via Epistemic State Abstraction [27.33232096515561]
Bayes-Adaptive Markov Decision Process (BAMDP) formalism pursues the Bayes-optimal solution to the exploration-exploitation trade-off in reinforcement learning.
Much of the literature has focused on developing suitable approximation algorithms.
We first define, under mild structural assumptions, a complexity measure for BAMDP planning.
We then conclude by introducing a specific form of state abstraction with the potential to reduce BAMDP complexity and to give rise to a computationally-tractable, approximate planning algorithm.
arXiv Detail & Related papers (2022-10-30T16:30:23Z)
- Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features [5.663538370244174]
We present a new discrete optimization method based on branch-and-bound (BnB) to obtain optimal decision trees.
Our proposed algorithm Quant-BnB shows significant speedups compared to existing approaches for shallow optimal trees on various real datasets.
arXiv Detail & Related papers (2022-06-23T17:19:29Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- or/and Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
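A generic antithetic evolution-strategies Jacobian estimator illustrates the idea of trading exact derivative computations for random-perturbation queries. This is a plain Gaussian-smoothing stand-in, not the paper's response-Jacobian algorithm; the sample counts and smoothing scale are assumptions.

```python
import numpy as np

def es_jacobian(f, x, sigma=1e-3, num_samples=2000, rng=None):
    """Antithetic ES estimate of the Jacobian of f at x:
    E[(f(x + s*u) - f(x - s*u)) / (2s) * u^T] -> J for standard Gaussian u."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = x.shape[0]
    m = np.atleast_1d(f(x)).shape[0]
    J = np.zeros((m, n))
    for _ in range(num_samples):
        u = rng.standard_normal(n)
        # central (antithetic) difference along the random direction u
        diff = (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma)
        J += np.outer(diff, u)
    return J / num_samples
```

Only function evaluations are needed, which is what lets such estimators replace explicit Jacobian- or Hessian-vector computations.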
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Block Layer Decomposition schemes for training Deep Neural Networks [0.0]
Estimating the weights of Deep Feedforward Neural Networks (DFNNs) relies on solving a very large nonconvex optimization problem that may have many local (non-global) minimizers, saddle points and large plateaus.
As a consequence, optimization algorithms can be attracted toward local minimizers which can lead to bad solutions or can slow down the optimization process.
arXiv Detail & Related papers (2020-03-18T09:53:40Z)
- Sparse Optimization for Green Edge AI Inference [28.048770388766716]
We present a joint inference task selection and downlink beamforming strategy to achieve energy-efficient edge AI inference.
By exploiting the inherent connections between the task selection set and the group sparsity structure of the transmit beamforming vector, we reformulate the optimization as a group sparse beamforming problem.
We establish the global convergence analysis and provide the ergodic worst-case convergence rate for this algorithm.
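The group-sparsity mechanism behind joint task selection and beamforming can be illustrated by the group soft-thresholding (proximal) step: zeroing an entire per-task beamforming vector deselects that task. The names and this isolated update are assumptions for illustration, not the paper's full algorithm.

```python
import numpy as np

def group_soft_threshold(beams, lam):
    """Proximal operator of the group-lasso penalty: shrink each per-task
    beamforming vector by its norm; groups whose norm falls below lam are
    zeroed entirely, i.e. the corresponding task is not selected."""
    selected = {}
    for task, w in beams.items():
        norm = np.linalg.norm(w)
        if norm > lam:
            selected[task] = (1.0 - lam / norm) * w   # shrink, keep the task
        # else: the whole group is zeroed -> task deselected
    return selected
```

Coupling task selection to whole-vector sparsity is what turns the mixed combinatorial selection problem into a continuous group sparse optimization.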
arXiv Detail & Related papers (2020-02-24T05:21:58Z)