Deep Reinforcement Learning for Community Battery Scheduling under
Uncertainties of Load, PV Generation, and Energy Prices
- URL: http://arxiv.org/abs/2312.03008v1
- Date: Mon, 4 Dec 2023 13:45:17 GMT
- Title: Deep Reinforcement Learning for Community Battery Scheduling under
Uncertainties of Load, PV Generation, and Energy Prices
- Authors: Jiarong Fan, Hao Wang
- Abstract summary: This paper presents a deep reinforcement learning (RL) strategy to schedule a community battery system in the presence of uncertainties.
We position the community battery to play a versatile role in integrating local PV energy, reducing peak load, and exploiting energy price fluctuations for arbitrage.
- Score: 5.694872363688119
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In response to the growing uptake of distributed energy resources (DERs),
community batteries have emerged as a promising solution to support renewable
energy integration, reduce peak load, and enhance grid reliability. This paper
presents a deep reinforcement learning (RL) strategy, centered around the soft
actor-critic (SAC) algorithm, to schedule a community battery system in the
presence of uncertainties, such as solar photovoltaic (PV) generation, local
demand, and real-time energy prices. We position the community battery to play
a versatile role in integrating local PV energy, reducing peak load, and
exploiting energy price fluctuations for arbitrage, thereby minimizing the
system cost. To improve exploration and convergence during RL training, we
utilize the noisy network technique. This paper conducts a comparative study of
different RL algorithms, including proximal policy optimization (PPO) and deep
deterministic policy gradient (DDPG) algorithms, to evaluate their
effectiveness in the community battery scheduling problem. The results
demonstrate the potential of RL in addressing community battery scheduling
challenges and show that the SAC algorithm achieves the best performance
compared to other RL and optimization benchmarks.
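To make the setting concrete, below is a minimal sketch (not the authors' code) of the two ingredients the abstract names: a battery-scheduling MDP whose reward is the negative system cost, and a factorised-Gaussian noisy linear layer of the kind the noisy network technique uses to improve exploration. The dynamics, constants, and class names (CommunityBatteryEnv, NoisyLinear) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative, not the authors' code): a toy community-battery
# MDP plus a factorised-Gaussian noisy linear layer (Fortunato et al., 2018).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class CommunityBatteryEnv:
    """Toy battery-scheduling MDP: state = (SoC, PV, load, price); assumed dynamics."""

    def __init__(self, capacity_kwh=100.0, max_power_kw=25.0, efficiency=0.95, dt_hours=1.0):
        self.capacity, self.max_power = capacity_kwh, max_power_kw
        self.eta, self.dt = efficiency, dt_hours
        self.soc = 0.5 * capacity_kwh  # start half charged

    def step(self, action, pv_kw, load_kw, price):
        """action in [-1, 1]: positive = charge, negative = discharge."""
        power = float(action) * self.max_power
        # Round-trip losses: charging stores less than drawn from the grid;
        # discharging drains the cell by more than it delivers.
        delta = power * self.dt * (self.eta if power > 0 else 1.0 / self.eta)
        self.soc = min(max(self.soc + delta, 0.0), self.capacity)  # clip to physical limits
        grid_kw = load_kw - pv_kw + power          # net import (+) or export (-)
        cost = price * grid_kw * self.dt           # system cost at the real-time price
        return self.soc, -cost                     # reward = negative energy cost


class NoisyLinear(nn.Module):
    """Linear layer with learnable factorised Gaussian noise, used for exploration."""

    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.in_f, self.out_f = in_features, out_features
        bound = 1.0 / math.sqrt(in_features)
        self.w_mu = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.w_sigma = nn.Parameter(torch.full((out_features, in_features), sigma0 * bound))
        self.b_mu = nn.Parameter(torch.empty(out_features).uniform_(-bound, bound))
        self.b_sigma = nn.Parameter(torch.full((out_features,), sigma0 * bound))

    @staticmethod
    def _scale(eps):
        return eps.sign() * eps.abs().sqrt()

    def forward(self, x):
        # Fresh factorised noise each forward pass perturbs weights and biases.
        eps_in = self._scale(torch.randn(self.in_f))
        eps_out = self._scale(torch.randn(self.out_f))
        weight = self.w_mu + self.w_sigma * torch.outer(eps_out, eps_in)
        bias = self.b_mu + self.b_sigma * eps_out
        return F.linear(x, weight, bias)


env = CommunityBatteryEnv()
policy_head = NoisyLinear(4, 2)  # 4 state features -> (mean, log_std) of one action
state = torch.tensor([env.soc / env.capacity, 5.0, 8.0, 0.30])
mean, log_std = policy_head(state)
soc, reward = env.step(action=-0.4, pv_kw=5.0, load_kw=8.0, price=0.30)
```

In a full SAC implementation, NoisyLinear layers would replace the standard linear layers of the actor and critic networks, with fresh noise sampled at each forward pass; PPO and DDPG baselines could be trained on the same environment for the comparative study.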
Related papers
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Control Policy Correction Framework for Reinforcement Learning-based Energy Arbitrage Strategies [4.950434218152639]
We propose a new RL-based control framework for batteries to obtain a safe energy arbitrage strategy in the imbalance settlement mechanism.
We use the Belgian imbalance price of 2023 to evaluate the performance of our proposed framework.
arXiv Detail & Related papers (2024-04-29T16:03:21Z)
- Multi-market Energy Optimization with Renewables via Reinforcement Learning [1.0878040851638]
This paper introduces a deep reinforcement learning framework for optimizing the operations of power plants pairing renewable energy with storage.
The framework handles complexities such as time coupling by storage devices, uncertainty in renewable generation and energy prices, and non-linear storage models.
It utilizes RL to incorporate complex storage models, overcoming restrictions of optimization-based methods that require convex and differentiable component models; a sketch of such a non-convex storage model appears after this list.
arXiv Detail & Related papers (2023-06-13T21:35:24Z)
- Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z)
- Combating Uncertainties in Wind and Distributed PV Energy Sources Using Integrated Reinforcement Learning and Time-Series Forecasting [2.774390661064003]
The unpredictability of renewable energy generation poses challenges for electricity providers and utility companies.
We propose a novel framework with two objectives: (i) combating the uncertainty of renewable energy in the smart grid by leveraging time-series forecasting with Long Short-Term Memory (LSTM) solutions, and (ii) establishing a distributed and dynamic decision-making framework with multi-agent reinforcement learning using the Deep Deterministic Policy Gradient (DDPG) algorithm.
arXiv Detail & Related papers (2023-02-27T19:12:50Z)
- Optimal Planning of Hybrid Energy Storage Systems using Curtailed Renewable Energy through Deep Reinforcement Learning [0.0]
We propose a sophisticated deep reinforcement learning (DRL) methodology with a policy-based algorithm to plan energy storage systems (ESS).
A quantitative performance comparison proved that the DRL agent outperforms the scenario-based optimization (SO) algorithm.
The corresponding results confirmed that the DRL agent learns much as a human expert would, suggesting reliable application of the proposed methodology.
arXiv Detail & Related papers (2022-12-12T02:24:50Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Optimizing a domestic battery and solar photovoltaic system with deep reinforcement learning [69.68068088508505]
Falling costs of batteries and solar PV systems have led to high uptake of home solar battery systems.
In this work, we use the deep deterministic policy gradient (DDPG) algorithm to optimise the charging and discharging behaviour of a battery within such a system.
arXiv Detail & Related papers (2021-09-10T10:59:14Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
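As flagged in the Multi-market Energy Optimization entry above, one advantage RL holds over optimization-based scheduling is that the storage model may be non-convex and non-differentiable. A minimal sketch of such a model follows; all constants and the function name are assumptions for illustration, not from any of the listed papers.

```python
# Illustrative only: a storage transition with non-convex, non-differentiable
# terms (power-dependent efficiency and a step-shaped cycling wear cost).
def storage_transition(soc_kwh, power_kw, capacity_kwh=100.0, dt_hours=1.0):
    # Piecewise efficiency: less efficient at high power (assumed numbers),
    # which makes the model non-convex in power_kw.
    eta = 0.95 if abs(power_kw) <= 10.0 else 0.88
    delta = power_kw * dt_hours * (eta if power_kw > 0 else 1.0 / eta)
    new_soc = min(max(soc_kwh + delta, 0.0), capacity_kwh)
    # Fixed wear cost whenever the battery cycles at all: a step function,
    # non-differentiable at zero, breaking gradient/convexity assumptions.
    wear_cost = 0.05 if abs(power_kw) > 1e-6 else 0.0
    return new_soc, wear_cost
```

An RL agent only ever queries such a transition as a black box inside the environment step, whereas a convex solver would need to relax or approximate both the piecewise efficiency and the step-shaped wear cost.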