Coordinates-based Resource Allocation Through Supervised Machine Learning
- URL: http://arxiv.org/abs/2005.06509v1
- Date: Wed, 13 May 2020 18:33:23 GMT
- Title: Coordinates-based Resource Allocation Through Supervised Machine Learning
- Authors: Sahar Imtiaz, Sebastian Schiessl, Georgios P. Koudouridis and James Gross
- Abstract summary: We propose a coordinates-based resource allocation scheme using supervised machine learning techniques.
The proposed scheme performs consistently well in realistic system simulations, requiring only 4 s of training time.
- Score: 14.014514995022182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Appropriate allocation of system resources is essential for meeting the
increased user-traffic demands in the next generation wireless technologies.
Traditionally, the system relies on channel state information (CSI) of the
users for optimizing the resource allocation, which becomes costly for
fast-varying channel conditions. Considering that future wireless technologies
will be based on dense network deployment, where the mobile terminals are in
line-of-sight of the transmitters, the position information of terminals
provides an alternative to estimate the channel condition. In this work, we
propose a coordinates-based resource allocation scheme using supervised machine
learning techniques, and investigate how efficiently this scheme performs in
comparison to the traditional approach under various propagation conditions. We
consider a simple system setup as a first step, where a single transmitter
serves a single mobile user. The performance results show that the
coordinates-based resource allocation scheme achieves a performance very close
to the CSI-based scheme, even when the available coordinates of terminals are
erroneous. The proposed scheme performs consistently well in realistic system
simulations, requiring only 4 s of training time, and the appropriate resource
allocation is predicted in less than 90 microseconds with a learnt model of
size less than 1 kB.
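The core idea of the abstract, mapping a terminal's position to a resource-allocation decision with a supervised model instead of relying on CSI, can be sketched with a minimal nearest-neighbour classifier. The coordinates, labels, and MCS names below are illustrative assumptions, not the paper's actual dataset or model.

```python
import math

def knn_predict(train, query, k=3):
    """Predict an allocation label for the query position by majority
    vote among the k nearest labelled coordinate samples."""
    dists = sorted((math.dist(pos, query), label) for pos, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy training set (hypothetical): positions near the transmitter at
# (0, 0) are labelled with a high-rate MCS, far positions with a
# robust low-rate MCS.
train = [
    ((1.0, 1.0), "MCS-high"), ((2.0, 0.5), "MCS-high"),
    ((1.5, 2.0), "MCS-high"),
    ((40.0, 35.0), "MCS-low"), ((38.0, 42.0), "MCS-low"),
    ((45.0, 40.0), "MCS-low"),
]

print(knn_predict(train, (2.0, 1.5)))    # near the transmitter -> MCS-high
print(knn_predict(train, (41.0, 39.0)))  # cell edge -> MCS-low
```

Once trained, such a model only needs a position estimate at prediction time, which is why erroneous coordinates degrade it gracefully rather than breaking it outright.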
Related papers
- Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]
A novel framework for decentralized computing and communication resource allocation in multi-user semantic communication (SC) systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through the application of Stackelberg hypergame theory.
Simulation results show that the proposed Stackelberg hypergame leads to efficient usage of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z)
- On the Effective Usage of Priors in RSS-based Localization [56.68864078417909]
We propose a Received Signal Strength (RSS) fingerprint and convolutional neural network-based algorithm, LocUNet.
In this paper, we study the localization problem in dense urban settings.
We show that LocUNet learns the underlying prior distribution of the receiver (Rx) position, or of the Rx and transmitter (Tx) association preferences, from the training data, and attribute its high performance to this ability.
arXiv Detail & Related papers (2022-11-28T00:31:02Z)
- Distributed Resource Allocation for URLLC in IIoT Scenarios: A Multi-Armed Bandit Approach [17.24490186427519]
This paper addresses the problem of enabling inter-machine Ultra-Reliable Low-Latency Communication (URLLC) in future 6G Industrial Internet of Things (IIoT) networks.
We study a distributed, user-centric scheme based on machine learning in which User Equipments autonomously select their uplink radio resources.
Using simulation, we demonstrate that a Multi-Armed Bandit (MAB) approach represents a desirable solution to allocate resources with URLLC in mind.
arXiv Detail & Related papers (2022-11-22T11:50:05Z)
- A Learning-Based Fast Uplink Grant for Massive IoT via Support Vector Machines and Long Short-Term Memory [8.864453148536057]
Fast uplink grant (FUG) allocation was introduced to reduce latency and increase reliability for massive machine-type communication (mMTC) applications in the smart Internet of Things.
We propose a novel FUG allocation scheme based on a support vector machine (SVM) scheduler.
An LSTM architecture is then used for traffic prediction, with correction techniques to overcome prediction errors.
arXiv Detail & Related papers (2021-08-02T11:33:02Z)
- LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers [104.01415343139901]
We propose a deep detector entitled LoRD-Net for recovering information symbols from one-bit measurements.
LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest.
We evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications.
arXiv Detail & Related papers (2021-02-05T04:26:05Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC server with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- Similarity-based prediction for channel mapping and user positioning [0.0]
In a wireless network, gathering information about mobile users based only on uplink channel measurements is an interesting challenge.
In this paper, a supervised machine learning approach addressing these tasks in a unified way is proposed.
It relies on a labeled database that can be acquired in a simple way by the base station while operating.
arXiv Detail & Related papers (2020-12-02T11:30:51Z)
- Deep Learning-based Resource Allocation For Device-to-Device Communication [66.74874646973593]
We propose a framework for the optimization of the resource allocation in multi-channel cellular systems with device-to-device (D2D) communication.
A deep learning (DL) framework is proposed, where the optimal resource allocation strategy for arbitrary channel conditions is approximated by deep neural network (DNN) models.
Our simulation results confirm that near-optimal performance can be attained with low computation time, which underlines the real-time capability of the proposed scheme.
arXiv Detail & Related papers (2020-11-25T14:19:23Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
- Fast-Fourier-Forecasting Resource Utilisation in Distributed Systems [10.219353459640137]
We present a communication-efficient data collection mechanism for distributed computing systems.
We also propose a deep learning architecture using complex Gated Recurrent Units to forecast resource utilisation.
Our approach resolves challenges encountered in resource provisioning frameworks and can be applied to other forecasting problems.
arXiv Detail & Related papers (2020-01-13T14:31:33Z)
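The distributed Multi-Armed Bandit idea summarized in the URLLC/IIoT entry above, where each User Equipment autonomously learns which uplink resource to use, can be sketched as a simple epsilon-greedy bandit. The channel success probabilities and parameters below are illustrative assumptions, not values from that paper.

```python
import random

def select_channel(counts, rewards, epsilon, rng):
    """Pick a channel: explore with probability epsilon, otherwise
    exploit the channel with the best empirical success rate."""
    if rng.random() < epsilon:
        return rng.randrange(len(counts))
    means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return means.index(max(means))

def run_mab(success_probs, rounds=5000, epsilon=0.1, seed=0):
    """Simulate one UE choosing among uplink channels as bandit arms;
    a transmission succeeds with each channel's (hypothetical)
    success probability."""
    rng = random.Random(seed)
    n = len(success_probs)
    counts, rewards = [0] * n, [0.0] * n
    for _ in range(rounds):
        arm = select_channel(counts, rewards, epsilon, rng)
        reward = 1.0 if rng.random() < success_probs[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
    return counts

counts = run_mab([0.3, 0.9, 0.5])  # channel 1 is the most reliable
print(counts.index(max(counts)))   # the UE converges on channel 1
```

The appeal for URLLC is that each UE needs no centralized scheduler or CSI exchange; it learns a reliable channel purely from its own success/failure feedback.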
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.