Battery and Hydrogen Energy Storage Control in a Smart Energy Network
with Flexible Energy Demand using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2208.12779v1
- Date: Fri, 26 Aug 2022 16:47:48 GMT
- Title: Battery and Hydrogen Energy Storage Control in a Smart Energy Network
with Flexible Energy Demand using Deep Reinforcement Learning
- Authors: Cephas Samende, Zhong Fan and Jun Cao
- Abstract summary: We introduce a hybrid energy storage system composed of battery and hydrogen energy storage.
We propose a deep reinforcement learning-based control strategy to optimise the scheduling of the hybrid energy storage system and energy demand in real-time.
- Score: 2.5666730153464465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Smart energy networks provide an effective means to accommodate high
penetrations of variable renewable energy sources like solar and wind, which
are key for deep decarbonisation of energy production. However, given the
variability of the renewables as well as the energy demand, it is imperative to
develop effective control and energy storage schemes to manage the variable
energy generation and achieve desired system economics and environmental goals.
In this paper, we introduce a hybrid energy storage system composed of battery
and hydrogen energy storage to handle the uncertainties related to electricity
prices, renewable energy production and consumption. We aim to improve
renewable energy utilisation and minimise energy costs and carbon emissions
while ensuring energy reliability and stability within the network. To achieve
this, we propose a multi-agent deep deterministic policy gradient approach,
which is a deep reinforcement learning-based control strategy to optimise the
scheduling of the hybrid energy storage system and energy demand in real-time.
The proposed approach is model-free and does not require explicit knowledge and
rigorous mathematical models of the smart energy network environment.
Simulation results based on real-world data show that: (i) integration and
optimised operation of the hybrid energy storage system and energy demand
reduces carbon emissions by 78.69%, improves cost savings by 23.5% and
increases renewable energy utilisation by over 13.2% compared to other
baseline models, and (ii) the proposed algorithm outperforms state-of-the-art
self-learning algorithms such as the deep Q-network.
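The scheduling problem the abstract describes can be made concrete with a toy environment step. The sketch below is purely illustrative: the capacities, efficiencies, sign conventions and price are assumed values for illustration, not parameters or methods taken from the paper.

```python
# Illustrative sketch only: one time step of a hybrid battery/hydrogen
# storage environment of the kind the abstract describes. All parameter
# values below are hypothetical.

BATT_CAP_KWH = 100.0      # battery capacity (assumed)
H2_CAP_KWH = 500.0        # hydrogen store capacity, kWh-equivalent (assumed)
ETA_BATT = 0.95           # battery charging efficiency (assumed)
ETA_ELECTROLYSER = 0.70   # power-to-hydrogen efficiency (assumed)
ETA_FUEL_CELL = 0.50      # hydrogen-to-power efficiency (assumed)

def step(soc_batt, soc_h2, p_batt, p_h2, net_load_kw, price, dt_h=1.0):
    """Advance the two stores by one step.

    p_batt, p_h2 > 0 mean charging; < 0 mean discharging (kW).
    net_load_kw = demand minus renewable generation.
    Returns new states of charge, grid import (kW) and step energy cost.
    """
    # Battery update with charging losses, clipped to [0, capacity].
    if p_batt >= 0:
        soc_batt = min(BATT_CAP_KWH, soc_batt + ETA_BATT * p_batt * dt_h)
    else:
        soc_batt = max(0.0, soc_batt + p_batt * dt_h)
    # Hydrogen update: electrolyser when charging, fuel cell when discharging.
    if p_h2 >= 0:
        soc_h2 = min(H2_CAP_KWH, soc_h2 + ETA_ELECTROLYSER * p_h2 * dt_h)
    else:
        soc_h2 = max(0.0, soc_h2 + p_h2 / ETA_FUEL_CELL * dt_h)
    # Any remaining imbalance is met by the grid (negative = export).
    grid_kw = net_load_kw + p_batt + p_h2
    cost = price * grid_kw * dt_h
    return soc_batt, soc_h2, grid_kw, cost

# Example: discharge the battery at 10 kW against a 15 kW net load.
soc_b, soc_h, grid, cost = step(50.0, 200.0, -10.0, 0.0, 15.0, 0.20)
print(soc_b, grid, cost)  # 40.0 5.0 1.0
```

In the paper's setting, an agent would choose `p_batt` and `p_h2` each step to minimise accumulated cost; here they are fixed by hand only to show the state transition.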
Related papers
- Green Multi-Objective Scheduling -- A memetic NSGA-III for flexible production with real-time energy cost and emissions [0.0]
This study focuses on industries adjusting production to real-time energy markets, offering flexible consumption to the grid.
We present a novel memetic NSGA-III to minimize makespan, energy cost, and emissions, integrating real energy market data.
arXiv Detail & Related papers (2024-05-23T09:11:21Z)
- Decentralized Energy Marketplace via NFTs and AI-based Agents [4.149465156450793]
The paper introduces an advanced Decentralized Energy Marketplace (DEM) integrating blockchain technology and artificial intelligence.
The proposed framework uses Non-Fungible Tokens (NFTs) to represent unique energy profiles in a transparent and secure trading environment.
A notable innovation is the use of smart contracts, ensuring high efficiency and integrity in energy transactions.
arXiv Detail & Related papers (2023-11-17T09:15:43Z)
- Empowering Distributed Solutions in Renewable Energy Systems and Grid Optimization [3.8979646385036175]
Machine learning (ML) advancements play a crucial role in empowering renewable energy sources and improving grid management.
The incorporation of big data and ML into smart grids offers several advantages, including heightened energy efficiency.
However, challenges like handling large data volumes, ensuring cybersecurity, and obtaining specialized expertise must be addressed.
arXiv Detail & Related papers (2023-10-24T02:45:16Z)
- Predicting Short Term Energy Demand in Smart Grid: A Deep Learning Approach for Integrating Renewable Energy Sources in Line with SDGs 7, 9, and 13 [0.0]
We propose a deep learning model for predicting energy demand in a smart power grid.
We use long short-term memory networks to capture complex patterns and dependencies in energy demand data.
The proposed model can accurately predict energy demand with a mean absolute error of 1.4%.
arXiv Detail & Related papers (2023-04-08T12:30:59Z)
- Combating Uncertainties in Wind and Distributed PV Energy Sources Using Integrated Reinforcement Learning and Time-Series Forecasting [2.774390661064003]
The unpredictability of renewable energy generation poses challenges for electricity providers and utility companies.
We propose a novel framework with two objectives: (i) combating uncertainty of renewable energy in smart grid by leveraging time-series forecasting with Long-Short Term Memory (LSTM) solutions, and (ii) establishing distributed and dynamic decision-making framework with multi-agent reinforcement learning using Deep Deterministic Policy Gradient (DDPG) algorithm.
arXiv Detail & Related papers (2023-02-27T19:12:50Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning [68.37641996188133]
We introduce a framework for tracking real-time energy consumption and carbon emissions.
We create a leaderboard for energy efficient reinforcement learning algorithms.
We propose strategies for mitigation of carbon emissions and reduction of energy consumption.
arXiv Detail & Related papers (2020-01-31T05:12:59Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
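Several of the papers above, like the main paper's multi-agent deep deterministic policy gradient controller, build on DDPG-style actor-critic agents, whose target networks track the online networks via Polyak (soft) averaging. A minimal numpy sketch of that update follows; the layer shapes are arbitrary and the value of `tau` is an assumed hyperparameter, not one reported by any of the papers.

```python
import numpy as np

TAU = 0.01  # soft-update rate (assumed value)

def soft_update(target_params, online_params, tau=TAU):
    """theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [tau * w + (1.0 - tau) * t
            for t, w in zip(target_params, online_params)]

# Arbitrary "network" parameters: one weight matrix and one bias vector.
online = [np.ones((2, 2)), np.zeros(3)]
target = [np.zeros((2, 2)), np.ones(3)]

target = soft_update(target, online)
print(target[0][0, 0], target[1][0])  # 0.01 0.99
```

Each call nudges the target parameters a fraction `tau` toward the online ones, which stabilises the bootstrapped critic targets during training.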
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.