Distributed Deep Reinforcement Learning for Functional Split Control in
Energy Harvesting Virtualized Small Cells
- URL: http://arxiv.org/abs/2008.04105v1
- Date: Fri, 7 Aug 2020 12:27:01 GMT
- Title: Distributed Deep Reinforcement Learning for Functional Split Control in
Energy Harvesting Virtualized Small Cells
- Authors: Dagnachew Azene Temesgene, Marco Miozzo, Deniz Gündüz and Paolo
Dini
- Abstract summary: Mobile network operators (MNOs) are deploying dense infrastructures of small cells.
This increases the power consumption of mobile networks, thus impacting the environment.
In this paper, we consider a network of virtualized small cells powered by ambient energy harvesters and equipped with rechargeable batteries.
- Score: 3.8779763612314624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To meet the growing quest for enhanced network capacity, mobile network
operators (MNOs) are deploying dense infrastructures of small cells. This, in
turn, increases the power consumption of mobile networks, thus impacting the
environment. As a result, we have seen a recent trend of powering mobile
networks with harvested ambient energy to achieve both environmental and cost
benefits. In this paper, we consider a network of virtualized small cells
(vSCs) powered by energy harvesters and equipped with rechargeable batteries,
which can opportunistically offload baseband (BB) functions to a grid-connected
edge server depending on their energy availability. We formulate the
corresponding grid energy and traffic drop rate minimization problem, and
propose a distributed deep reinforcement learning (DDRL) solution. Coordination
among vSCs is enabled via the exchange of battery state information. The
evaluation of the network performance in terms of grid energy consumption and
traffic drop rate confirms that enabling coordination among the vSCs via
knowledge exchange achieves a performance close to the optimal. Numerical
results also confirm that the proposed DDRL solution provides higher network
performance, better adaptation to the changing environment, and higher cost
savings with respect to a tabular multi-agent reinforcement learning (MRL)
solution used as a benchmark.
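The abstract describes the approach only at a high level; as an illustrative aid, the sketch below shows one plausible shape of a per-vSC deep Q-learning agent whose state combines the locally observed battery level, harvested energy and traffic load with the battery states exchanged by neighbouring vSCs, and whose reward penalizes both grid energy consumption and dropped traffic. The network sizes, the number of functional splits (N_SPLITS), the reward weighting and all class and function names are assumptions for the sketch, not details taken from the paper.

# Minimal sketch (not the authors' code) of one agent in a DDRL setup:
# each vSC picks a functional split (how much baseband processing to
# offload to the grid-connected edge server) from its own energy state
# plus the battery levels shared by neighbouring vSCs.
import random
from collections import deque

import torch
import torch.nn as nn

N_SPLITS = 4        # assumed number of selectable functional splits
N_NEIGHBOURS = 3    # assumed number of vSCs sharing battery-state information

class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class VSCAgent:
    """One deep Q-learning agent per virtualized small cell (illustrative only)."""

    def __init__(self, state_dim, n_actions=N_SPLITS):
        self.q = QNet(state_dim, n_actions)
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)
        self.eps, self.gamma, self.n_actions = 0.1, 0.99, n_actions

    def act(self, state):
        # state = [own battery, harvested energy, traffic load] + neighbour battery levels
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.tensor(state, dtype=torch.float32)).argmax())

    def remember(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))

    def learn(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(self.buffer, batch_size))
        s = torch.tensor(s, dtype=torch.float32)
        a = torch.tensor(a).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        s2 = torch.tensor(s2, dtype=torch.float32)
        q_sa = self.q(s).gather(1, a).squeeze(1)
        target = r + self.gamma * self.q(s2).max(1).values.detach()
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

def reward(grid_energy, traffic_dropped, w=0.5):
    # Assumed reward shape: penalize grid energy draw and dropped traffic,
    # mirroring the joint minimization objective described in the abstract.
    return -(w * grid_energy + (1.0 - w) * traffic_dropped)

# Example: three locally observed quantities plus the neighbours' battery levels.
agent = VSCAgent(state_dim=3 + N_NEIGHBOURS)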
Related papers
- SkyCharge: Deploying Unmanned Aerial Vehicles for Dynamic Load
Optimization in Solar Small Cell 5G Networks [15.532817648696408]
We propose a novel user load transfer approach using airborne base stations mounted on drones for reliable and secure power redistribution.
Depending on the user density and the availability of an aerial BS, the energy requirement of a cell with an energy deficit is accommodated by migrating the aerial BS from a high-energy to a low-energy cell.
The proposed algorithm reduces power outages at BSs and maintains consistent throughput stability, thereby demonstrating its capability to boost the reliability and robustness of wireless communication systems.
arXiv Detail & Related papers (2023-11-21T19:17:39Z)
- Adaptive Dynamic Programming for Energy-Efficient Base Station Cell
Switching [19.520603265594108]
Energy saving in wireless networks is growing in importance due to the increasing demands of evolving, new-generation cellular networks.
We propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch on/off the cells of base stations to reduce network power consumption.
arXiv Detail & Related papers (2023-10-05T14:50:12Z)
- Reducing the Environmental Impact of Wireless Communication via
Probabilistic Machine Learning [2.0610589722626074]
Communication-related energy consumption is high and is expected to grow in future networks in spite of anticipated efficiency gains in 6G.
We present summaries of two problems, from both current and next generation network specifications, where probabilistic inference methods were used to great effect.
We are able to safely reduce the energy consumption of existing hardware on a live communications network by 11% whilst maintaining operator-specified performance envelopes.
arXiv Detail & Related papers (2023-09-19T09:48:40Z)
- Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Threshold-Based Data Exclusion Approach for Energy-Efficient Federated
Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
FEEL might significantly shorten the lifetime of energy-constrained participating devices due to the power consumed during model training rounds.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z)
- To Talk or to Work: Flexible Communication Compression for Energy
Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over a massive number of mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
- Communication Efficient Federated Learning with Energy Awareness over
Wireless Networks [51.645564534597625]
In federated learning (FL), the parameter server and the mobile devices share the training parameters over wireless links.
We adopt the idea of SignSGD in which only the signs of the gradients are exchanged.
Two optimization problems are formulated and solved to optimize the learning performance.
Considering that the data may be distributed across the mobile devices in a highly uneven fashion in FL, a sign-based algorithm is proposed.
arXiv Detail & Related papers (2020-04-15T21:25:13Z)
- Lightwave Power Transfer for Federated Learning-based Wireless Networks [34.434349833489954]
Federated Learning (FL) has been recently presented as a new technique for training shared machine learning models in a distributed manner.
However, implementing FL in wireless networks may significantly reduce the lifetime of energy-constrained mobile devices.
We propose a novel approach at the physical layer based on the application of lightwave power transfer in the FL-based wireless network.
arXiv Detail & Related papers (2020-04-11T16:27:17Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- Wireless Power Control via Counterfactual Optimization of Graph Neural
Networks [124.89036526192268]
We consider the problem of downlink power control in wireless networks, consisting of multiple transmitter-receiver pairs communicating over a single shared wireless medium.
To mitigate the interference among concurrent transmissions, we leverage the network topology to create a graph neural network architecture.
We then use an unsupervised primal-dual counterfactual optimization approach to learn optimal power allocation decisions.
arXiv Detail & Related papers (2020-02-17T07:54:39Z)
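The entry above outlines topology-aware power control with a graph neural network; as a rough, hypothetical illustration of that idea (not the authors' architecture), the sketch below runs a couple of rounds of neighbourhood aggregation over an interference graph and maps each node's embedding to a transmit power between 0 and p_max. All shapes, weights and function names are assumptions, and the unsupervised primal-dual counterfactual training used in the paper is not reproduced.

# Hypothetical sketch of GNN-style power control: nodes are transmitter-receiver
# pairs, edges encode interference, and each node's final embedding is mapped to
# a transmit power in (0, p_max). Weights are random, i.e. untrained.
import numpy as np

rng = np.random.default_rng(0)

def gnn_power_allocation(adj, feats, p_max=1.0, layers=2):
    """adj: (n, n) interference adjacency; feats: (n, d) node features."""
    n, d = feats.shape
    h = feats
    for _ in range(layers):
        w_self = rng.standard_normal((h.shape[1], d))
        w_nbr = rng.standard_normal((h.shape[1], d))
        # Aggregate neighbour embeddings along interference edges, then mix
        # with each node's own embedding and apply a ReLU non-linearity.
        h = np.maximum(h @ w_self + adj @ h @ w_nbr, 0.0)
    w_out = rng.standard_normal((d, 1))
    logits = (h @ w_out).squeeze(-1)
    return p_max / (1.0 + np.exp(-logits))   # sigmoid scaled to (0, p_max)

# Toy usage: 4 transmitter-receiver pairs with a ring-shaped interference pattern.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = rng.standard_normal((4, 8))        # e.g. channel-state features per pair
print(gnn_power_allocation(adj, feats))    # per-pair transmit powers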
This list is automatically generated from the titles and abstracts of the papers on this site.