Increasing Energy Efficiency of Massive-MIMO Network via Base Stations
Switching using Reinforcement Learning and Radio Environment Maps
- URL: http://arxiv.org/abs/2103.11891v1
- Date: Mon, 8 Mar 2021 21:57:13 GMT
- Title: Increasing Energy Efficiency of Massive-MIMO Network via Base Stations
Switching using Reinforcement Learning and Radio Environment Maps
- Authors: Marcin Hoffmann, Pawel Kryszkiewicz, Adrian Kliks
- Abstract summary: M-MIMO transmission results in high energy consumption growing with the number of antennas.
This paper investigates EE improvement through switching on/off underutilized BSs.
- Score: 3.781421673607642
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Energy Efficiency (EE) is of high importance while considering Massive
Multiple-Input Multiple-Output (M-MIMO) networks where base stations (BSs) are
equipped with an antenna array composed of up to hundreds of elements. M-MIMO
transmission, although highly spectrally efficient, results in high energy
consumption growing with the number of antennas. This paper investigates EE
improvement through switching on/off underutilized BSs. It is proposed to use
the location-aware approach, where data about an optimal active BSs set is
stored in a Radio Environment Map (REM). For efficient acquisition, processing
and utilization of the REM data, reinforcement learning (RL) algorithms are
used. State-of-the-art exploration/exploitation methods including ε-greedy,
Upper Confidence Bound (UCB), and Gradient Bandit are evaluated. Then
analytical action filtering and an REM-based Exploration Algorithm (REM-EA)
are proposed to improve the RL convergence time. Algorithms are evaluated using
an advanced, system-level simulator of an M-MIMO Heterogeneous Network (HetNet)
utilizing an accurate 3D-ray-tracing radio channel model. The proposed RL-based
BSs switching algorithm is proven to provide 70% gains in EE over a
state-of-the-art algorithm using an analytical heuristic. Moreover, the
proposed action filtering and REM-EA can reduce RL convergence time in relation
to the best-performing state-of-the-art exploration method by 60% and 83%,
respectively.
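For intuition on the exploration/exploitation methods named in the abstract, the sketch below implements a generic UCB bandit over a toy set of candidate configurations. This is not the paper's REM-based algorithm; the reward values and the framing of "arms" as active-BS configurations are illustrative assumptions.

```python
import math
import random

def ucb_select(counts, values, t, c=2.0):
    """Pick the arm maximizing value + c*sqrt(ln t / n): the UCB1 rule."""
    for a, n in enumerate(counts):
        if n == 0:
            return a  # try every arm once before trusting the bound
    return max(
        range(len(counts)),
        key=lambda a: values[a] + c * math.sqrt(math.log(t) / counts[a]),
    )

def run_bandit(rewards, steps=5000, seed=0):
    """Simulate a stationary bandit; rewards[a] is arm a's mean reward."""
    rng = random.Random(seed)
    k = len(rewards)
    counts = [0] * k
    values = [0.0] * k  # running mean observed reward per arm
    for t in range(1, steps + 1):
        a = ucb_select(counts, values, t)
        r = rewards[a] + rng.gauss(0.0, 0.1)  # noisy reward observation
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
    return counts

# Example: three hypothetical active-BS configurations whose mean
# "energy-efficiency rewards" are 0.4, 0.7, and 0.5 (made-up numbers).
counts = run_bandit([0.4, 0.7, 0.5])
# Arm 1 (mean 0.7) should accumulate the most pulls as the bound tightens.
```

ε-greedy differs only in the selection rule (pick a uniformly random arm with probability ε, else the current best); the incremental-mean update is the same.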
Related papers
- Age and Power Minimization via Meta-Deep Reinforcement Learning in UAV Networks [42.14963369042011]
This study examines a power-limited internet of things (IoT) network supported by a flying unmanned aerial vehicle (UAV) that collects data.
Our aim is to optimize the UAV flight trajectory and scheduling policy to minimize a varying AoI and transmission power combination.
arXiv Detail & Related papers (2025-01-24T16:17:53Z)
- Traffic Learning and Proactive UAV Trajectory Planning for Data Uplink in Markovian IoT Models [6.49537221266081]
In IoT networks, the traditional resource management schemes rely on a message exchange between the devices and the base station.
We present a novel learning-based framework that estimates the traffic arrival of IoT devices based on Markovian events.
We propose a deep reinforcement learning approach to learn the optimal policy of each UAV.
arXiv Detail & Related papers (2024-01-24T21:57:55Z)
- Joint User Association, Interference Cancellation and Power Control for Multi-IRS Assisted UAV Communications [80.35959154762381]
Intelligent reflecting surface (IRS)-assisted unmanned aerial vehicle (UAV) communications are expected to alleviate the load of ground base stations in a cost-effective way.
Existing studies mainly focus on the deployment and resource allocation of a single IRS instead of multiple IRSs.
We propose a new optimization algorithm for joint IRS-user association, trajectory optimization of UAVs, successive interference cancellation (SIC) decoding order scheduling and power allocation.
arXiv Detail & Related papers (2023-12-08T01:57:10Z)
- Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves the system EE compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z)
- Beam Management Driven by Radio Environment Maps in O-RAN Architecture [2.0305676256390934]
M-MIMO is considered as one of the key technologies in 5G, and future 6G networks.
It is easier to implement an M-MIMO network exploiting a static set of beams, i.e., a Grid of Beams (GoB).
Beam Management (BM) can be enhanced by taking into account historical knowledge about the radio environment.
The proposed solution is compliant with the Open Radio Access Network (O-RAN) architecture.
arXiv Detail & Related papers (2023-03-21T11:09:31Z)
- MMTSA: Multimodal Temporal Segment Attention Network for Efficient Human Activity Recognition [33.94582546667864]
Multimodal sensors provide complementary information to develop accurate machine-learning methods for human activity recognition.
This paper proposes an efficient multimodal neural architecture for HAR using an RGB camera and inertial measurement units (IMUs).
Using three well-established public datasets, we evaluated MMTSA's effectiveness and efficiency in HAR.
arXiv Detail & Related papers (2022-10-14T08:05:16Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Path Design and Resource Management for NOMA enhanced Indoor Intelligent Robots [58.980293789967575]
A communication enabled indoor intelligent robots (IRs) service framework is proposed.
A Lego modeling method is proposed, which can deterministically describe the indoor layout and channel state.
The investigated radio map is invoked as a virtual environment to train the reinforcement learning agent.
arXiv Detail & Related papers (2020-11-23T21:45:01Z)
- Learning Centric Power Allocation for Edge Intelligence [84.16832516799289]
Edge intelligence has been proposed, which collects distributed data and performs machine learning at the edge.
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model.
Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
arXiv Detail & Related papers (2020-07-21T07:02:07Z)
- Millimeter Wave Communications with an Intelligent Reflector: Performance Optimization and Distributional Reinforcement Learning [119.97450366894718]
A novel framework is proposed to optimize the downlink multi-user communication of a millimeter wave base station.
A channel estimation approach is developed to measure the channel state information (CSI) in real-time.
A distributional reinforcement learning (DRL) approach is proposed to learn the optimal IR reflection and maximize the expectation of downlink capacity.
arXiv Detail & Related papers (2020-02-24T22:18:54Z)
- Reconfigurable Intelligent Surface Assisted Multiuser MISO Systems Exploiting Deep Reinforcement Learning [21.770491711632832]
The reconfigurable intelligent surface (RIS) has been speculated to be one of the key enabling technologies for future sixth-generation (6G) wireless communication systems.
In this paper, we investigate the joint design of the transmit beamforming matrix at the base station and the phase shift matrix at the RIS by leveraging recent advances in deep reinforcement learning (DRL).
The proposed algorithm is not only able to learn from the environment and gradually improve its behavior, but also achieves performance comparable to two state-of-the-art benchmarks.
arXiv Detail & Related papers (2020-02-24T04:28:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.