A Learning-based Optimal Market Bidding Strategy for Price-Maker Energy Storage
- URL: http://arxiv.org/abs/2106.02396v1
- Date: Fri, 4 Jun 2021 10:22:58 GMT
- Title: A Learning-based Optimal Market Bidding Strategy for Price-Maker Energy Storage
- Authors: Mathilde D. Badoual, Scott J. Moura
- Abstract summary: We implement an online Supervised Actor-Critic (SAC) algorithm supervised with a model-based controller, Model Predictive Control (MPC).
The energy storage agent is trained with this algorithm to optimally bid while learning and adjusting to its impact on the market clearing prices.
Our contribution, thus, is an online and safe SAC algorithm that outperforms the current model-based state-of-the-art.
- Score: 3.0839245814393728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Load serving entities with storage units reach sizes and performances that
can significantly impact clearing prices in electricity markets. Nevertheless,
price endogeneity is rarely considered in storage bidding strategies and
modeling the electricity market is a challenging task. Meanwhile, model-free
reinforcement learning methods such as Actor-Critic are becoming increasingly
popular for designing energy system controllers. Yet their implementation frequently
requires lengthy, data-intensive, and unsafe trial-and-error training. To fill
these gaps, we implement an online Supervised Actor-Critic (SAC) algorithm,
supervised with a model-based controller, Model Predictive Control (MPC). The
energy storage agent is trained with this algorithm to optimally bid while
learning and adjusting to its impact on the market clearing prices. We compare
the supervised Actor-Critic algorithm with the MPC algorithm as a supervisor,
finding that the former reaps higher profits via learning. Our contribution,
thus, is an online and safe SAC algorithm that outperforms the current
model-based state-of-the-art.
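To make the abstract's method concrete, below is a minimal sketch of one plausible online Supervised Actor-Critic update in which a model-based supervisor (standing in for the MPC controller) both contributes to the executed bid and supervises the actor. The linear function approximators, the mixing weight k, the toy mpc_supervisor, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Supervised Actor-Critic (SAC) update guided by a model-based
# supervisor. Everything here is a hypothetical stand-in for the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, GAMMA, LR_ACTOR, LR_CRITIC = 4, 0.99, 1e-3, 1e-2

w_actor = rng.normal(scale=0.1, size=STATE_DIM)   # linear policy: bid = w . s
w_critic = rng.normal(scale=0.1, size=STATE_DIM)  # linear value: V(s) = w . s

def mpc_supervisor(state):
    """Placeholder for the MPC bid; a real supervisor would solve a
    finite-horizon problem over forecast market-clearing prices."""
    return 0.5 * float(state.sum())

def sac_step(state, next_state, reward, k=0.5, sigma=0.1):
    """One online update; k in [0, 1] mixes supervisor and actor
    (k=1: pure, safe MPC; k=0: pure RL)."""
    global w_actor, w_critic
    a_actor = float(w_actor @ state) + rng.normal(scale=sigma)  # exploratory bid
    a_mpc = mpc_supervisor(state)
    action = k * a_mpc + (1.0 - k) * a_actor  # composite bid sent to the market

    # Critic: TD(0) update of the state-value weights.
    td_error = reward + GAMMA * float(w_critic @ next_state) - float(w_critic @ state)
    w_critic += LR_CRITIC * td_error * state

    # Actor: an RL term (reinforce exploratory deviations with positive TD
    # error) plus a supervised term pulling the policy toward the MPC bid.
    rl_grad = td_error * (a_actor - float(w_actor @ state)) * state
    sup_grad = (a_mpc - float(w_actor @ state)) * state
    w_actor += LR_ACTOR * ((1.0 - k) * rl_grad + k * sup_grad)
    return action

# Hypothetical usage on random data:
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
bid = sac_step(s, s_next, reward=1.0, k=0.8)
```

Annealing k from 1 toward 0 over training would start the agent at the safe model-based baseline and let it gradually exploit its learned price impact, which is the sense in which the algorithm is both online and safe.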
Related papers
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks [1.9188272016043582]
We introduce a novel approach for a distributed and cooperative charging strategy using a Multi-Agent Reinforcement Learning (MARL) framework.
Our method is built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community.
Our results indicate that, despite higher policy variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency by reducing total variation by approximately 36% and charging cost by around 9.1% on average.
arXiv Detail & Related papers (2024-04-18T21:50:03Z)
- Deep Reinforcement Learning for Community Battery Scheduling under Uncertainties of Load, PV Generation, and Energy Prices [5.694872363688119]
This paper presents a deep reinforcement learning (RL) strategy to schedule a community battery system in the presence of uncertainties.
We position the community battery to play a versatile role in integrating local PV energy, reducing peak load, and exploiting energy price fluctuations for arbitrage.
arXiv Detail & Related papers (2023-12-04T13:45:17Z)
- Multi-market Energy Optimization with Renewables via Reinforcement Learning [1.0878040851638]
This paper introduces a deep reinforcement learning framework for optimizing the operations of power plants pairing renewable energy with storage.
The framework handles complexities such as time coupling by storage devices, uncertainty in renewable generation and energy prices, and non-linear storage models.
It utilizes RL to incorporate complex storage models, overcoming restrictions of optimization-based methods that require convex and differentiable component models (a toy non-convex storage model in this spirit is sketched after this list).
arXiv Detail & Related papers (2023-06-13T21:35:24Z)
- Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of the RL agent against attacks and to avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Demand Responsive Dynamic Pricing Framework for Prosumer Dominated Microgrids using Multiagent Reinforcement Learning [59.28219519916883]
This paper proposes a new multiagent Reinforcement Learning-based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer-dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
arXiv Detail & Related papers (2020-09-23T01:44:57Z)
- Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game (a minimal environment sketch in this spirit follows this list).
arXiv Detail & Related papers (2020-05-05T07:32:40Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
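For the multi-market energy optimization entry, here is a toy example of the kind of non-linear storage model that motivates RL over optimization-based methods (as referenced in that entry): the efficiency curve below is non-convex in the state of charge, which breaks LP/QP formulations but is trivial to wrap in an RL environment step. All constants and the function name are hypothetical.

```python
# Toy non-convex battery model: efficiency degrades near the SoC extremes.
import numpy as np

def battery_step(soc, power, dt=1.0, capacity=10.0):
    """Advance state of charge (MWh) given charge (+) / discharge (-) power (MW).
    The quadratic efficiency term makes the dynamics non-convex in soc, which
    a convex optimizer cannot model directly but a model-free agent ignores."""
    eff = 0.95 - 0.2 * (2.0 * soc / capacity - 1.0) ** 2  # non-convex in soc
    delta = power * dt * (eff if power > 0 else 1.0 / eff)
    return float(np.clip(soc + delta, 0.0, capacity))
```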
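For the demand-side scheduling entry, a minimal sketch of what formulating the smart grid as a Markov game can look like: each household observes only its private backlog plus a shared real-time price, and the price is what couples the agents' decisions. The class name, dynamics, and reward shaping are illustrative assumptions, not the paper's environment.

```python
# Hypothetical Markov game: n households schedule flexible load under a
# shared real-time price.
import numpy as np

class DemandSideMarkovGame:
    def __init__(self, n_households, price_fn, seed=0):
        self.n = n_households
        self.price_fn = price_fn                # total load (kW) -> price ($/kWh)
        self.rng = np.random.default_rng(seed)
        self.backlog = np.zeros(self.n)         # unserved appliance demand (kWh)

    def reset(self):
        self.backlog = self.rng.uniform(0.0, 1.0, self.n)
        return self._obs(self.price_fn(0.0))

    def step(self, actions):
        """actions[i] in [0, 1]: fraction of household i's backlog served now."""
        served = np.clip(np.asarray(actions), 0.0, 1.0) * self.backlog
        price = self.price_fn(served.sum())     # real-time pricing couples agents
        arrivals = self.rng.uniform(0.0, 0.3, self.n)
        self.backlog = np.maximum(self.backlog - served + arrivals, 0.0)
        rewards = -(price * served + 0.1 * self.backlog)  # cost + delay penalty
        return self._obs(price), rewards

    def _obs(self, price):
        # Each agent sees only its own backlog and the public price.
        return [np.array([b, price]) for b in self.backlog]
```

For instance, DemandSideMarkovGame(3, price_fn=lambda load: 0.1 + 0.05 * load) yields three agents whose costs rise when they all serve load simultaneously, which is exactly the coupling a Markov-game formulation captures.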
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.