Adaptive Dynamic Programming for Energy-Efficient Base Station Cell Switching
- URL: http://arxiv.org/abs/2310.12999v2
- Date: Mon, 30 Oct 2023 16:13:26 GMT
- Title: Adaptive Dynamic Programming for Energy-Efficient Base Station Cell Switching
- Authors: Junliang Luo, Yi Tian Xu, Di Wu, Michael Jenkin, Xue Liu, Gregory Dudek
- Abstract summary: Energy saving in wireless networks is growing in importance due to increasing demand for evolving new-gen cellular networks.
We propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch on/off the cells of base stations to reduce network power consumption.
- Score: 19.520603265594108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Energy saving in wireless networks is growing in importance due to increasing
demand for evolving new-gen cellular networks, environmental and regulatory
concerns, and potential energy crises arising from geopolitical tensions. In
this work, we propose an approximate dynamic programming (ADP)-based method
coupled with online optimization to switch on/off the cells of base stations to
reduce network power consumption while maintaining adequate Quality of Service
(QoS) metrics. We use a multilayer perceptron (MLP) that, given each
state-action pair, predicts power consumption, approximating the value
function in ADP so that the action with the highest expected power saving is
selected. To maximize power savings without degrading QoS, we include another
MLP to predict QoS and a long short-term memory (LSTM) network to predict
handovers, both incorporated into an online optimization algorithm that
produces an adaptive QoS threshold for filtering cell-switching actions based
on the overall QoS history. The performance of the method is evaluated using a
practical network simulator across various real-world scenarios with dynamic
traffic patterns.
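
The abstract describes the decision loop only at a high level. Below is a minimal, hypothetical sketch (not the authors' implementation) of how an MLP power model and an adaptive QoS threshold could be combined to filter cell on/off actions. All names (PowerMLP, QoSMLP, select_action), network sizes, and the thresholding rule are illustrative assumptions, and the LSTM handover predictor used in the paper is omitted for brevity.

```python
# Hypothetical sketch: ADP-style cell-switching action selection with an
# adaptive QoS threshold. Names, shapes, and the thresholding rule are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class PowerMLP(nn.Module):
    """Predicts network power consumption for a (state, action) pair."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


class QoSMLP(nn.Module):
    """Predicts a scalar QoS metric for a (state, action) pair."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


def adaptive_qos_threshold(qos_history: torch.Tensor, slack: float = 0.05) -> float:
    """Illustrative rule: allow QoS to drop at most `slack` below the
    recent average of observed QoS values."""
    return float(qos_history.mean()) * (1.0 - slack)


def select_action(state, candidate_actions, power_mlp, qos_mlp, qos_history):
    """Pick the candidate on/off configuration (one 0/1 row per candidate)
    with the lowest predicted power among those whose predicted QoS clears
    the adaptive threshold."""
    threshold = adaptive_qos_threshold(qos_history)
    # Broadcast the 1-D state vector across all candidate actions.
    states = state.expand(candidate_actions.shape[0], -1)
    with torch.no_grad():
        power = power_mlp(states, candidate_actions)  # lower is better
        qos = qos_mlp(states, candidate_actions)      # higher is better
    feasible = qos >= threshold
    if not bool(feasible.any()):
        # Fall back to the all-on configuration (assumed to be index 0).
        return candidate_actions[0]
    power = power.masked_fill(~feasible, float("inf"))
    return candidate_actions[int(torch.argmin(power))]
```

In this sketch, the power MLP plays the role of the ADP value approximation over state-action pairs, while the QoS MLP and the history-based threshold stand in for the online QoS filtering step described in the abstract.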
Related papers
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Improved Q-learning based Multi-hop Routing for UAV-Assisted Communication [4.799822253865053]
This paper proposes a novel Improved Q-learning-based Multi-hop Routing (IQMR) algorithm for optimal UAV-assisted communication systems.
Using Q(lambda) learning for routing decisions, IQMR substantially enhances energy efficiency and network data throughput.
arXiv Detail & Related papers (2024-08-17T06:24:31Z)
- Predictive Handover Strategy in 6G and Beyond: A Deep and Transfer Learning Approach [11.44410301488549]
We propose a deep learning based algorithm for predicting the future serving cell.
Our framework complies with the O-RAN specifications and can be deployed in a Near-Real-Time RAN Intelligent Controller.
arXiv Detail & Related papers (2024-04-11T20:30:36Z)
- Lyapunov-Driven Deep Reinforcement Learning for Edge Inference Empowered by Reconfigurable Intelligent Surfaces [30.1512069754603]
We propose a novel algorithm for energy-efficient, low-latency, accurate inference at the wireless edge.
We consider a scenario where new data are continuously generated/collected by a set of devices and are handled through a dynamic queueing system.
arXiv Detail & Related papers (2023-05-18T12:46:42Z)
- Differentially Private Deep Q-Learning for Pattern Privacy Preservation in MEC Offloading [76.0572817182483]
Attackers may eavesdrop on offloading decisions to infer the edge server's (ES's) queue information and users' usage patterns.
We propose an offloading strategy that jointly minimizes the latency, the ES's energy consumption, and the task dropping rate, while preserving pattern privacy (PP).
We develop a Differential Privacy Deep Q-learning based Offloading (DP-DQO) algorithm to solve this problem while addressing the PP issue by injecting noise into the generated offloading decisions.
arXiv Detail & Related papers (2023-02-09T12:50:18Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Optimal Power Allocation for Rate Splitting Communications with Deep Reinforcement Learning [61.91604046990993]
This letter introduces a novel framework to optimize the power allocation for users in a Rate Splitting Multiple Access network.
In the network, messages intended for users are split into a single common part and respective private parts.
arXiv Detail & Related papers (2021-07-01T06:32:49Z)
- Distributed Deep Reinforcement Learning for Functional Split Control in Energy Harvesting Virtualized Small Cells [3.8779763612314624]
Mobile network operators (MNOs) are deploying dense infrastructures of small cells.
This increases the power consumption of mobile networks, thus impacting the environment.
In this paper, we consider a network of ambient small cells powered by energy harvesters and equipped with rechargeable batteries.
arXiv Detail & Related papers (2020-08-07T12:27:01Z)
- Accelerating Deep Reinforcement Learning With the Aid of Partial Model: Energy-Efficient Predictive Video Streaming [97.75330397207742]
Predictive power allocation is conceived for energy-efficient video streaming over mobile networks using deep reinforcement learning.
To handle the continuous state and action spaces, we resort to the deep deterministic policy gradient (DDPG) algorithm.
Our simulation results show that the proposed policies converge to the optimal policy that is derived based on perfect large-scale channel prediction.
arXiv Detail & Related papers (2020-03-21T17:36:53Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.