Heterogeneous Resource Allocation with Multi-task Learning for Wireless Networks
- URL: http://arxiv.org/abs/2502.10027v1
- Date: Fri, 14 Feb 2025 09:13:33 GMT
- Title: Heterogeneous Resource Allocation with Multi-task Learning for Wireless Networks
- Authors: Nikos A. Mitsiou, Pavlos S. Bouzinis, Panagiotis G. Sarigiannidis, George K. Karagiannidis
- Abstract summary: We propose a multi-task learning (MTL) framework to enable a single deep neural network (DNN) to jointly solve a range of diverse optimization problems.
In this framework, optimization problems with varying dimensionality values, objectives, and constraints are treated as distinct tasks.
Numerical results demonstrate the efficiency of the proposed MTL approach in solving diverse optimization problems.
- Score: 22.52809431518314
- License:
- Abstract: The optimal solution to an optimization problem depends on the problem's objective function, constraints, and size. While deep neural networks (DNNs) have proven effective in solving optimization problems, changes in the problem's size, objectives, or constraints often require adjustments to the DNN architecture to maintain effectiveness, or even retraining a new DNN from scratch. Given the dynamic nature of wireless networks, which involve multiple and diverse objectives that can have conflicting requirements and constraints, we propose a multi-task learning (MTL) framework to enable a single DNN to jointly solve a range of diverse optimization problems. In this framework, optimization problems with varying dimensionality values, objectives, and constraints are treated as distinct tasks. To jointly address these tasks, we propose a conditional computation-based MTL approach with routing. The multi-task DNN consists of two components, the base DNN (bDNN), which is the single DNN used to extract the solutions for all considered optimization problems, and the routing DNN (rDNN), which manages which nodes and layers of the bDNN to be used during the forward propagation of each task. The output of the rDNN is a binary vector which is multiplied with all bDNN's weights during the forward propagation, creating a unique computational path through the bDNN for each task. This setup allows the tasks to either share parameters or use independent ones, with the decision controlled by the rDNN. The proposed framework supports both supervised and unsupervised learning scenarios. Numerical results demonstrate the efficiency of the proposed MTL approach in solving diverse optimization problems. In contrast, benchmark DNNs lacking the rDNN mechanism were unable to achieve similar levels of performance, highlighting the effectiveness of the proposed architecture.
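The routing mechanism described above — a binary vector from the rDNN gating the bDNN's weights so each task gets its own computational path through shared parameters — can be illustrated with a minimal sketch. The network sizes, task names, and fixed masks below are illustrative assumptions; in the paper the masks are produced by a trained routing DNN.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): a tiny base network whose
# hidden units are gated per task by a binary routing vector, so each task
# follows its own computational path through shared weights.
rng = np.random.default_rng(0)

HIDDEN = 8
W1 = rng.standard_normal((4, HIDDEN))  # shared bDNN weights, layer 1
W2 = rng.standard_normal((HIDDEN, 2))  # shared bDNN weights, layer 2

# Hypothetical routing output: one binary mask per task. In the paper this
# comes from the rDNN; here the masks are fixed for illustration.
task_masks = {
    "power_control": (rng.random(HIDDEN) > 0.3).astype(float),
    "beamforming":   (rng.random(HIDDEN) > 0.3).astype(float),
}

def forward(x, task):
    """Forward pass where the task's binary mask zeroes out unused units."""
    h = np.maximum(x @ W1, 0.0) * task_masks[task]  # gated ReLU layer
    return h @ W2

x = rng.standard_normal(4)
y_a = forward(x, "power_control")  # shape (2,)
y_b = forward(x, "beamforming")
```

Tasks whose masks overlap share those hidden units (and their gradients during training), while units gated to zero for one task remain free for others — which is how a single set of weights can serve several optimization problems.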
Related papers
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
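A shared-backbone, multi-head layout of the kind this summary describes can be sketched as follows. The sizes, head count, and selection rule are illustrative assumptions, not details taken from the MEMTL paper.

```python
import numpy as np

# Hedged sketch of a shared backbone feeding an ensemble of prediction heads.
rng = np.random.default_rng(1)

W_backbone = rng.standard_normal((6, 16))                        # shared feature extractor
head_weights = [rng.standard_normal((16, 3)) for _ in range(3)]  # ensemble of heads

def predict_all(x):
    """Return one candidate decision per prediction head."""
    feat = np.tanh(x @ W_backbone)  # shared features, computed once
    return [feat @ W for W in head_weights]

def best_candidate(x, score):
    """Pick the candidate that scores best under a task-specific metric."""
    return min(predict_all(x), key=score)

x = rng.standard_normal(6)
candidates = predict_all(x)
choice = best_candidate(x, score=lambda y: float(np.sum(y * y)))  # e.g. minimum-energy pick
```

The backbone is evaluated once per input, so adding heads costs little extra inference time relative to running several independent networks.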
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Scalable Resource Management for Dynamic MEC: An Unsupervised Link-Output Graph Neural Network Approach [36.32772317151467]
Deep learning has been successfully adopted in mobile edge computing (MEC) to optimize task offloading and resource allocation.
The dynamics of edge networks raise two challenges in neural network (NN)-based optimization methods: low scalability and high training costs.
In this paper, a novel link-output GNN (LOGNN)-based resource management approach is proposed to flexibly optimize the resource allocation in MEC.
arXiv Detail & Related papers (2023-06-15T08:21:41Z) - Power Control with QoS Guarantees: A Differentiable Projection-based Unsupervised Learning Framework [14.518558523319518]
Deep neural networks (DNNs) are emerging as a potential solution to solve NP-hard wireless resource allocation problems.
We propose a novel unsupervised learning framework to solve the classical power control problem in a multi-user channel.
We show that the proposed solutions not only improve the data rate but also achieve zero constraint-violation probability, compared to existing approaches.
arXiv Detail & Related papers (2023-05-31T14:11:51Z) - Ensuring DNN Solution Feasibility for Optimization Problems with Convex Constraints and Its Application to DC Optimal Power Flow Problems [25.791128241015684]
Ensuring solution feasibility is a key challenge in developing Deep Neural Network (DNN) schemes for solving constrained optimization problems, due to prediction errors.
We propose a "preventive learning" framework to guarantee DNN solution feasibility for problems with convex constraints and general objective functions, without post-processing.
We apply the framework to develop DeepOPF+ for solving essential DC optimal power flow problems in grid operation.
arXiv Detail & Related papers (2021-12-15T13:05:59Z) - RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
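The proxy-model approach this summary refers to — fit a surrogate of the black-box objective to a static dataset, then optimize inputs against the surrogate — can be sketched minimally. The dataset, quadratic proxy, and step size below are illustrative assumptions, not RoMA's method; RoMA's contribution is precisely in making this loop robust to adversarially optimized inputs, which this sketch does not address.

```python
import numpy as np

# Naive proxy-based offline optimization (the baseline RoMA improves upon):
# fit a simple proxy to a static dataset, then run gradient ascent on it.
rng = np.random.default_rng(2)

# Static dataset of input-output queries from an unknown objective
# (here, a noisy concave quadratic peaked at x = 1).
X = rng.uniform(-2, 2, size=(64, 1))
y = -(X[:, 0] - 1.0) ** 2 + 0.1 * rng.standard_normal(64)

# Proxy: least-squares quadratic fit, y ~ a*x^2 + b*x + c.
A = np.column_stack([X[:, 0] ** 2, X[:, 0], np.ones(64)])
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]

# Gradient ascent on the proxy, starting from a dataset point.
x = float(X[0, 0])
for _ in range(200):
    x += 0.05 * (2 * a * x + b)  # derivative of the fitted proxy
```

The failure mode motivating RoMA is visible here: the ascent trusts the proxy everywhere, so any region where the fit overestimates the true objective will attract the search.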
arXiv Detail & Related papers (2021-10-27T05:37:12Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which uses dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - Evolving Multi-Resolution Pooling CNN for Monaural Singing Voice Separation [40.170868770930774]
Monaural Singing Voice Separation (MSVS) is a challenging task and has been studied for decades.
Deep neural networks (DNNs) are the current state-of-the-art methods for MSVS.
We introduce a Neural Architecture Search (NAS) method to the structure design of DNNs for MSVS.
arXiv Detail & Related papers (2020-08-03T12:09:42Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using conventional methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z) - BLK-REW: A Unified Block-based DNN Pruning Framework using Reweighted
Regularization Method [69.49386965992464]
We propose a new block-based pruning framework that comprises a general and flexible structured pruning dimension as well as a powerful and efficient reweighted regularization method.
Our framework is universal, which can be applied to both CNNs and RNNs, implying complete support for the two major kinds of intensive computation layers.
It is the first time that the weight pruning framework achieves universal coverage for both CNNs and RNNs with real-time mobile acceleration and no accuracy compromise.
arXiv Detail & Related papers (2020-01-23T03:30:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.