A storage expansion planning framework using reinforcement learning and
simulation-based optimization
- URL: http://arxiv.org/abs/2001.03507v3
- Date: Wed, 24 Mar 2021 18:04:14 GMT
- Title: A storage expansion planning framework using reinforcement learning and
simulation-based optimization
- Authors: S. Tsianikas, N. Yousefi, J. Zhou, M. Rodgers, D. W. Coit
- Abstract summary: Energy storage is crucial wherever distributed generation is abundant, such as in microgrids.
Determining which type of storage technology to invest in, along with the appropriate timing and capacity, is a critical research question.
We show that it is possible to derive better engineering solutions that would point to the types of energy storage units which could be at the core of future microgrid applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In light of the highly electrified future ahead of us, the role of energy
storage is crucial wherever distributed generation is abundant, such as in
microgrid settings. Given the variety of storage options that are becoming more
and more economical, determining which type of storage technology to invest in,
along with the appropriate timing and capacity becomes a critical research
question. It is inevitable that these problems will become increasingly
relevant in the future and will require strategic planning and holistic,
modern frameworks to be solved. Reinforcement Learning algorithms
have already proven to be successful in problems where sequential
decision-making is inherent. In the operations planning area, these algorithms
are already used but mostly in short-term problems with well-defined
constraints. In contrast, we expand and tailor these techniques to
long-term planning by utilizing model-free algorithms combined with
simulation-based models. A model and expansion plan have been developed to
optimally determine microgrid designs as they evolve to dynamically react to
changing conditions and to exploit energy storage capabilities. We show that it
is possible to derive better engineering solutions that would point to the
types of energy storage units which could be at the core of future microgrid
applications. Another key finding is that the optimal storage capacity
threshold for a system depends heavily on the price movements of the available
storage units. By utilizing the proposed approaches, it is possible to model
inherent problem uncertainties and optimize the entire pipeline of sequential
investment decision-making.
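The combination described above, a model-free algorithm querying a simulation-based model of the microgrid, can be illustrated with a minimal tabular Q-learning sketch. This is not the authors' implementation; the horizon, demand growth, price-decline rate, and penalty terms below are all illustrative assumptions. The state is (year, installed capacity), and the agent chooses each year whether to wait or add a fixed capacity increment.

```python
# Minimal tabular Q-learning sketch for sequential storage expansion.
# All numeric values (prices, demand, horizon) are hypothetical, chosen
# only to illustrate the decision structure, not taken from the paper.
import random
from collections import defaultdict

random.seed(0)

HORIZON = 10     # planning years
CAP_STEP = 100   # kWh added per expansion decision
ACTIONS = [0, 1] # 0 = wait, 1 = expand by CAP_STEP

def simulate_year(year, capacity):
    """Toy simulation model: returns (operating reward, current unit price).
    Storage price declines over time; unmet demand is penalized."""
    price_per_kwh = 300 * (0.92 ** year)   # declining storage price
    demand = 400 + 20 * year               # growing peak demand (kWh)
    unmet = max(0.0, demand - capacity)
    return -0.5 * unmet, price_per_kwh

def q_learning(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        capacity = 0
        for year in range(HORIZON):
            state = (year, capacity)
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            reward, price = simulate_year(year, capacity)
            if action == 1:
                reward -= price * CAP_STEP / 1000.0  # scaled investment cost
                capacity += CAP_STEP
            next_state = (year + 1, capacity)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
    return Q

Q = q_learning()
# Greedy policy for year 0, starting from zero installed capacity:
policy_year0 = max(ACTIONS, key=lambda a: Q[((0, 0), a)])
print("year-0 decision:", "expand" if policy_year0 else "wait")
```

Because the price trajectory enters the reward directly, changing the 0.92 decline factor shifts when expansion becomes attractive, which mirrors the paper's finding that the optimal capacity threshold depends heavily on storage price movements.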
Related papers
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z) - Bridging the Gap to Next Generation Power System Planning and Operation with Quantum Computation [0.0]
The integration of renewable energy generation, loads of a varying nature, the increasingly active role of the distribution system, and consumer participation in grid operation have changed the landscape of classical power grids.
Although sophisticated computation to process gigantic volumes of data into useful information is the paradigm of future grid operations, it brings with it the burden of computational complexity.
Advancements in quantum technologies hold promising solutions for dealing with the demanding computational complexity of power-system applications.
arXiv Detail & Related papers (2024-08-05T12:41:28Z) - Learning-assisted Stochastic Capacity Expansion Planning: A Bayesian Optimization Approach [3.124884279860061]
Large-scale capacity expansion problems (CEPs) are central to cost-effective decarbonization of regional energy systems.
Here, we propose a learning-assisted approximate solution method to tractably solve two-stage CEPs.
We show that our approach yields an estimated cost savings of up to 3.8% in comparison to series aggregation approaches.
arXiv Detail & Related papers (2024-01-19T01:40:58Z) - Multi-market Energy Optimization with Renewables via Reinforcement
Learning [1.0878040851638]
This paper introduces a deep reinforcement learning framework for optimizing the operations of power plants pairing renewable energy with storage.
The framework handles complexities such as time coupling by storage devices, uncertainty in renewable generation and energy prices, and non-linear storage models.
It utilizes RL to incorporate complex storage models, overcoming restrictions of optimization-based methods that require convex and differentiable component models.
arXiv Detail & Related papers (2023-06-13T21:35:24Z) - Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge
Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) is proposed to maximize the cache hit rates of devices in the MEC networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z) - Deep Reinforcement Learning for Constrained Field Development
Optimization in Subsurface Two-phase Flow [0.32622301272834514]
We present a deep reinforcement learning-based artificial intelligence agent that could provide optimized development plans.
The agent provides a mapping from a given state of the reservoir model, constraints, and economic condition to the optimal decision.
arXiv Detail & Related papers (2021-03-31T07:08:24Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Optimizing Wireless Systems Using Unsupervised and
Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.