TSNet-SAC: Leveraging Transformers for Efficient Task Scheduling
- URL: http://arxiv.org/abs/2307.07445v1
- Date: Fri, 16 Jun 2023 04:25:59 GMT
- Title: TSNet-SAC: Leveraging Transformers for Efficient Task Scheduling
- Authors: Ke Deng, Zhiyuan He, Hao Zhang, Haohan Lin, Desheng Wang
- Abstract summary: In future 6G Mobile Edge Computing (MEC), autopilot systems require the capability of processing multimodal data with strong interdependencies.
Traditional algorithms are inadequate for real-time scheduling due to their requirement for multiple iterations to derive the optimal scheme.
We propose TSNet-SAC, a novel Transformer-based network that uses heuristic algorithms solely to guide the training of TSNet.
- Score: 6.873630624967785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In future 6G Mobile Edge Computing (MEC), autopilot systems require the
capability of processing multimodal data with strong interdependencies.
However, traditional heuristic algorithms are inadequate for real-time
scheduling due to their requirement for multiple iterations to derive the
optimal scheme. We propose TSNet-SAC, a novel Transformer-based network that
utilizes heuristic algorithms solely to guide the training of TSNet.
Additionally, a Sliding Augment Component (SAC) is introduced to enhance the
robustness and resolve algorithm defects. Furthermore, the Extender component
is designed to handle multi-scale training data and provide network
scalability, enabling TSNet to adapt to different access scenarios. Simulation
demonstrates that TSNet-SAC outperforms existing networks in accuracy and
robustness, achieving lower scheduling latency than heuristic algorithms.
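The paper's core idea is to run the heuristic scheduler only offline, as a teacher that labels training data, so that at inference time the learned network emits a schedule in a single forward pass. A minimal sketch of that label-generation loop follows; the shortest-job-first heuristic and the duration-only feature layout are illustrative stand-ins, not the paper's actual scheduler or features:

```python
import random

def sjf_schedule(durations):
    """Shortest-job-first heuristic: return task indices in execution order."""
    return sorted(range(len(durations)), key=lambda i: durations[i])

def make_training_set(num_samples, num_tasks, seed=0):
    """Generate (task-feature, heuristic-label) pairs for supervised training.

    The heuristic runs only here, offline; the trained network never calls
    it at inference time, which is where the latency advantage comes from.
    """
    rng = random.Random(seed)
    dataset = []
    for _ in range(num_samples):
        durations = [rng.uniform(1.0, 10.0) for _ in range(num_tasks)]
        dataset.append((durations, sjf_schedule(durations)))
    return dataset

pairs = make_training_set(num_samples=100, num_tasks=5)
features, label = pairs[0]  # one supervised example for the network
```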
Related papers
- Synesthesia of Machines (SoM)-Enhanced Sub-THz ISAC Transmission for Air-Ground Network [15.847713094328286]
Integrated sensing and communication (ISAC) within sub-THz frequencies is crucial for future air-ground networks. This paper introduces a multi-modal sensing fusion framework inspired by synesthesia of machine (SoM) to enhance sub-THz ISAC transmission.
arXiv Detail & Related papers (2025-06-15T12:30:14Z)
- Distilled Pooling Transformer Encoder for Efficient Realistic Image Dehazing [0.0]
This paper proposes a lightweight neural network designed for realistic image dehazing, utilizing a Distilled Pooling Transformer, named DPTE-Net.
Experimental results on various benchmark datasets have shown that the proposed DPTE-Net can achieve competitive dehazing performance when compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-12-18T14:16:23Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the demands of real-time visual inference by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- AutoRNet: Automatically Optimizing Heuristics for Robust Network Design via Large Language Models [3.833708891059351]
AutoRNet is a framework that integrates large language models with evolutionary algorithms to generate robust networks.
We introduce an adaptive fitness function to balance convergence and diversity while maintaining degree distributions.
AutoRNet is evaluated on sparse and dense scale-free networks.
arXiv Detail & Related papers (2024-10-23T08:18:38Z)
- TS-EoH: An Edge Server Task Scheduling Algorithm Based on Evolution of Heuristic [0.6827423171182154]
This paper introduces a novel task-scheduling approach based on EC theory and evolutionary algorithms.
Experimental results show that our task-scheduling algorithm outperforms existing and traditional reinforcement learning methods.
arXiv Detail & Related papers (2024-09-04T10:00:32Z)
- RACH Traffic Prediction in Massive Machine Type Communications [5.416701003120508]
This paper presents a machine learning-based framework tailored for forecasting bursty traffic in ALOHA networks.
We develop a new low-complexity online prediction algorithm that updates the states of the LSTM network by leveraging frequently collected data from the mMTC network.
We evaluate the performance of the proposed framework in a network with a single base station and thousands of devices organized into groups with distinct traffic-generating characteristics.
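The online-prediction idea described above is to carry the LSTM's hidden and cell states forward across freshly collected samples instead of re-running the whole sequence. A pure-Python sketch of a single scalar LSTM cell illustrates that state-carrying update; the weight values are arbitrary placeholders, not the paper's trained parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTMCell:
    """Single-unit LSTM cell with scalar input, for illustration only."""

    def __init__(self, wx, wh, b):
        # wx, wh, b: dicts keyed by gate name 'i', 'f', 'o', 'g'
        self.wx, self.wh, self.b = wx, wh, b
        self.h = 0.0  # hidden state, carried across incoming samples
        self.c = 0.0  # cell state, likewise persistent

    def step(self, x):
        """Consume one new observation and update the persistent states."""
        i = sigmoid(self.wx['i'] * x + self.wh['i'] * self.h + self.b['i'])
        f = sigmoid(self.wx['f'] * x + self.wh['f'] * self.h + self.b['f'])
        o = sigmoid(self.wx['o'] * x + self.wh['o'] * self.h + self.b['o'])
        g = math.tanh(self.wx['g'] * x + self.wh['g'] * self.h + self.b['g'])
        self.c = f * self.c + i * g
        self.h = o * math.tanh(self.c)
        return self.h

# Placeholder weights; a real deployment would load trained parameters.
wx = {g: 0.5 for g in 'ifog'}
wh = {g: 0.1 for g in 'ifog'}
b = {g: 0.0 for g in 'ifog'}
cell = TinyLSTMCell(wx, wh, b)
outputs = [cell.step(x) for x in [0.2, 0.8, 0.4]]  # state persists between calls
```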
arXiv Detail & Related papers (2024-05-08T17:28:07Z)
- Latency-aware Unified Dynamic Networks for Efficient Image Recognition [72.8951331472913]
LAUDNet is a framework to bridge the theoretical and practical efficiency gap in dynamic networks.
It integrates three primary dynamic paradigms-spatially adaptive computation, dynamic layer skipping, and dynamic channel skipping.
It can notably reduce the latency of models like ResNet by over 50% on platforms such as V100, 3090, and TX2 GPUs.
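Of the three dynamic paradigms listed above, layer skipping is the simplest to illustrate: a lightweight gate inspects the current activation and decides whether the next layer runs at all. This is a generic sketch of the pattern, not LAUDNet's actual gating mechanism:

```python
def dynamic_forward(x, layers, gates, threshold=0.5):
    """Run a stack of layers, skipping any whose gate score is below threshold.

    layers: list of callables transforming the activation.
    gates: list of callables returning a score in [0, 1] for the activation.
    Skipped layers contribute no compute, which is the source of the saving.
    """
    skipped = 0
    for layer, gate in zip(layers, gates):
        if gate(x) >= threshold:
            x = layer(x)
        else:
            skipped += 1
    return x, skipped

# Toy usage: three doubling layers, the middle gate votes to skip.
layers = [lambda v: v * 2, lambda v: v * 2, lambda v: v * 2]
gates = [lambda v: 1.0, lambda v: 0.0, lambda v: 1.0]
out, skipped = dynamic_forward(1.0, layers, gates)  # out == 4.0, skipped == 1
```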
arXiv Detail & Related papers (2023-08-30T10:57:41Z)
- A Generalization of Continuous Relaxation in Structured Pruning [0.3277163122167434]
Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks.
We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal.
The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations.
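The reason the pruned CNN avoids sparse matrix operations is the sub-network collapse step: pruned output channels are physically removed from a layer's weight matrix, and the matching input columns are dropped from the next layer, leaving smaller dense matrices. A minimal list-of-lists sketch of that collapse (the helper names are ours, not the paper's):

```python
def collapse(weights, keep):
    """Remove pruned output channels (rows) from a layer's weight matrix,
    producing a smaller dense matrix rather than a sparse-masked one."""
    return [weights[i] for i in keep]

def collapse_inputs(weights, keep):
    """Drop the matching input channels (columns) from the following layer."""
    return [[row[j] for j in keep] for row in weights]

# Toy usage: prune channel 1 of a 3-output-channel layer.
w1 = [[1, 2], [3, 4], [5, 6]]    # 3 output channels, 2 inputs each
w2 = [[1, 1, 1], [2, 2, 2]]      # next layer consumes those 3 channels
keep = [0, 2]                    # channel 1 was pruned away
small1 = collapse(w1, keep)          # now 2 output channels
small2 = collapse_inputs(w2, keep)   # now consumes 2 input channels
```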
arXiv Detail & Related papers (2023-08-28T14:19:13Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy, while reducing power consumption by more than 100$\times$ compared to the CNN-based reference platform.
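The energy saving of SNNs comes from event-driven computation: a neuron only emits work (a spike) when its membrane potential crosses a threshold. A sketch of the standard leaky integrate-and-fire update follows; the leak and threshold values are arbitrary, and this is textbook LIF dynamics, not the paper's specific neuron model:

```python
def lif_step(v, current, leak=0.9, threshold=1.0):
    """One step of a leaky integrate-and-fire neuron.

    The membrane potential leaks, integrates the input current, and emits a
    binary spike (resetting to 0) on crossing the threshold. Spikes are
    sparse events, which is what neuromorphic hardware exploits.
    """
    v = leak * v + current
    if v >= threshold:
        return 0.0, 1  # reset potential, spike emitted
    return v, 0        # sub-threshold: no spike, no downstream work

# Toy usage: a constant input current produces a regular spike train.
v, spikes = 0.0, []
for current in [0.6, 0.6, 0.6, 0.6]:
    v, s = lif_step(v, current)
    spikes.append(s)
```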
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution, taking advantage of off-the-shelf network designs to reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
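The soft-shrinkage idea described above can be sketched in a few lines: rather than hard-zeroing the smallest-magnitude weights, shrink them proportionally so a weight can recover in later iterations if it turns out to matter. This is a simplified single-step illustration under assumed hyperparameters, not the full ISS-P schedule:

```python
def soft_shrink_step(weights, prune_fraction=0.2, shrink=0.5):
    """One soft-shrinkage iteration over a flat weight list.

    Rank weights by magnitude, then multiply the bottom prune_fraction by a
    shrink factor instead of zeroing them, leaving the rest untouched.
    """
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    cutoff = int(len(weights) * prune_fraction)
    out = list(weights)
    for i in ranked[:cutoff]:
        out[i] = out[i] * shrink  # soft shrink, not a hard zero
    return out

# Toy usage: the two smallest-magnitude weights are halved, others survive.
w = [1.0, -0.1, 0.5, 0.05, -2.0]
shrunk = soft_shrink_step(w, prune_fraction=0.4)
```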
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Efficient Sparsely Activated Transformers [0.34410212782758054]
Transformer-based neural networks have achieved state-of-the-art task performance in a number of machine learning domains.
Recent work has explored the integration of dynamic behavior into these networks in the form of mixture-of-expert layers.
We introduce a novel system named PLANER that takes an existing Transformer-based network and a user-defined latency target.
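The mixture-of-expert layers mentioned above activate only a subset of the network per input, which is what makes the latency target tractable. A generic top-1 routing sketch shows the mechanism; the linear gate and toy experts are illustrative assumptions, not PLANER's actual design:

```python
import math

def top1_route(x, experts, gate_weights):
    """Top-1 mixture-of-experts routing.

    Score each expert with a linear gate over the input, softmax the scores,
    and run only the argmax expert, scaling its output by the gate
    probability. Per-input cost stays flat as the expert count grows.
    """
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    m = max(scores)
    z = [math.exp(s - m) for s in scores]
    probs = [v / sum(z) for v in z]
    best = probs.index(max(probs))
    return [probs[best] * y for y in experts[best](x)], best

# Toy usage: two experts, the gate prefers expert 0 for this input.
experts = [lambda v: [2 * vi for vi in v], lambda v: [vi + 1 for vi in v]]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]
out, chosen = top1_route([1.0, 0.0], experts, gate_weights)
```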
arXiv Detail & Related papers (2022-08-31T00:44:27Z)
- Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks: specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
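The early-exit mechanism can be sketched as a confidence-gated loop over backbone stages: each attached head produces a prediction and a confidence, and an easy input leaves at the first confident exit. The stage/head interface and threshold below are illustrative assumptions, not the paper's learned exit policy:

```python
def multi_exit_forward(x, stages, exit_heads, confidence_threshold=0.9):
    """Run backbone stages in order; after each, an exit head produces a
    (prediction, confidence) pair. Return at the first confident exit,
    skipping all remaining stages; otherwise fall through to the last."""
    for depth, (stage, head) in enumerate(zip(stages, exit_heads)):
        x = stage(x)
        prediction, confidence = head(x)
        if confidence >= confidence_threshold:
            return prediction, depth  # exited early
    return prediction, depth  # final exit: no head was confident earlier

# Toy usage: the second exit head is confident, so stage 3 never runs.
stages = [lambda v: v + 1, lambda v: v + 1, lambda v: v + 1]
heads = [lambda v: (v, 0.5), lambda v: (v, 0.95), lambda v: (v, 1.0)]
pred, depth = multi_exit_forward(0, stages, heads)
```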
arXiv Detail & Related papers (2021-06-07T11:37:03Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations, while the low-frequency part is assigned cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
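The frequency split described above amounts to transforming the input into the DCT domain and partitioning the coefficients into bands. A 1-D sketch follows; the naive DCT-II and the fixed 25% band boundary are simplifications for illustration, not the paper's 2-D pipeline or learned routing:

```python
import math

def dct2(signal):
    """Naive (unnormalized) DCT-II of a 1-D signal; enough for a sketch."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(signal))
            for k in range(n)]

def split_bands(coeffs, low_fraction=0.25):
    """Partition DCT coefficients into a low-frequency band (routed to cheap
    operations) and the high-frequency remainder (expensive operations)."""
    cut = max(1, int(len(coeffs) * low_fraction))
    return coeffs[:cut], coeffs[cut:]

# Toy usage: a constant signal has only a DC (low-frequency) component,
# so nearly all of its energy lands in the cheap branch.
coeffs = dct2([1.0, 1.0, 1.0, 1.0])
low, high = split_bands(coeffs)
```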
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.