Reinforcement learning for Energies of the future and carbon neutrality:
a Challenge Design
- URL: http://arxiv.org/abs/2207.10330v1
- Date: Thu, 21 Jul 2022 06:56:46 GMT
- Title: Reinforcement learning for Energies of the future and carbon neutrality:
a Challenge Design
- Authors: Gaëtan Serré (TAU, Inria, LISN), Eva Boguslawski (RTE, TAU, LISN,
Inria), Benjamin Donnot (RTE), Adrien Pavão (TAU, LISN, Inria), Isabelle
Guyon (TAU, LISN, Inria), Antoine Marot (RTE)
- Abstract summary: This challenge belongs to a series started in 2019 under the name "Learning to run a power network" (L2RPN).
We introduce new, more realistic scenarios proposed by RTE to reach carbon neutrality by 2050.
We provide a baseline using a state-of-the-art reinforcement learning algorithm to stimulate future participants.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current rapid changes in climate increase the urgency to change
energy production and consumption management, in order to reduce carbon dioxide
and other greenhouse gas emissions. In this context, the French electricity
transmission system operator RTE (Réseau de Transport d'Électricité) has
recently published the results of an extensive study outlining various
scenarios for tomorrow's French power management. We propose a challenge that
will test the viability of such a scenario. The goal is to control electricity
transportation in power networks while pursuing multiple objectives: balancing
production and consumption, minimizing energy losses, and keeping people and
equipment safe, in particular by avoiding catastrophic failures. While the
importance of the application provides a goal in itself, this challenge also
aims to push the state of the art in a branch of Artificial Intelligence (AI)
called Reinforcement Learning (RL), which offers new possibilities for tackling
control problems. In particular, various aspects of the combination of Deep
Learning and RL, called Deep Reinforcement Learning, remain to be harnessed in
this application domain. This challenge belongs to a series started in 2019
under the name "Learning to run a power network" (L2RPN). In this new edition,
we introduce new, more realistic scenarios proposed by RTE to reach carbon
neutrality by 2050: retiring fossil-fuel electricity production, increasing the
proportions of renewable and nuclear energy, and introducing batteries.
Furthermore, we provide a baseline using a state-of-the-art reinforcement
learning algorithm to stimulate future participants.
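The control problem described above follows the standard RL interaction loop: an agent observes the grid state, acts, and receives a reward reflecting the balancing objective. The sketch below illustrates that loop with a hypothetical toy environment (the names `ToyGridEnv` and `run_episode` are invented for illustration); the actual challenge uses RTE's Grid2Op framework, which exposes the same reset/step interface popularized by OpenAI Gym.

```python
import random

class ToyGridEnv:
    """Hypothetical stand-in for a power-network environment.

    Observations are a toy (production, consumption) pair in MW; the
    reward penalizes their imbalance, mirroring the challenge's
    production/consumption balancing objective.
    """

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return (100.0, 95.0)

    def step(self, action):
        self.t += 1
        production = 100.0 + action  # action adjusts dispatch
        consumption = 95.0 + random.uniform(-5, 5)
        reward = -abs(production - consumption)  # imbalance penalty
        done = self.t >= self.horizon
        return (production, consumption), reward, done, {}

def run_episode(env, policy):
    """Roll out one episode and return the cumulative reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

# A trivial proportional policy: push production toward consumption.
total = run_episode(ToyGridEnv(), lambda obs: obs[1] - obs[0])
print(f"episode return: {total:.2f}")
```

In the real challenge, the observation additionally covers line flows and topology, and actions include switching and redispatching; the loop structure, however, is the same.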
Related papers
- PowRL: A Reinforcement Learning Framework for Robust Management of Power Networks [2.9822184411723645]
This paper presents a reinforcement learning framework, PowRL, to mitigate the effects of unexpected network events.
PowRL is benchmarked on a variety of competition datasets hosted by the L2RPN (Learning to Run a Power Network) challenge.
arXiv Detail & Related papers (2022-12-05T16:22:12Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Power Grid Congestion Management via Topology Optimization with AlphaZero [0.27998963147546135]
We propose an AlphaZero-based grid topology optimization agent as a non-costly, carbon-free congestion management alternative.
Our approach ranked 1st in the WCCI 2022 Learning to Run a Power Network (L2RPN) competition.
arXiv Detail & Related papers (2022-11-10T14:39:28Z)
- Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori knowledge of the building, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of the RL agent against attacks and to avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Modelling the transition to a low-carbon energy supply [91.3755431537592]
A transition to a low-carbon electricity supply is crucial to limit the impacts of climate change.
Reducing carbon emissions could help prevent the world from reaching a tipping point, where runaway emissions are likely.
Runaway emissions could lead to extremes in weather conditions around the world.
arXiv Detail & Related papers (2021-09-25T12:37:05Z)
- Action Set Based Policy Optimization for Safe Power Grid Management [8.156111849078439]
Reinforcement learning (RL) has been employed to provide sequential decision-making in power grid management.
We propose a novel method for this problem, which builds on top of a search-based planning algorithm.
In NeurIPS 2020 Learning to Run Power Network (L2RPN) competition, our solution safely managed the power grid and ranked first in both tracks.
arXiv Detail & Related papers (2021-06-29T09:36:36Z)
- Learning to run a Power Network Challenge: a Retrospective Analysis [6.442347402316506]
We have designed an L2RPN challenge to encourage the development of reinforcement learning solutions to key problems in next-generation power networks.
The main contribution of this challenge is our proposed comprehensive Grid2Op framework and associated benchmark.
We present the benchmark suite and analyse the winning solutions of the challenge, observing a demonstration of super-human performance by the best agent.
arXiv Detail & Related papers (2021-03-02T09:52:24Z)
- CityLearn: Standardizing Research in Multi-Agent Reinforcement Learning for Demand Response and Urban Energy Management [0.0]
In the US, buildings account for about 70% of total electricity demand, and demand response has the potential to reduce electricity peaks by about 20%.
Reinforcement learning algorithms have gained increased interest in recent years.
CityLearn is an OpenAI Gym Environment which allows researchers to implement, share, replicate, and compare their implementations of RL for demand response.
arXiv Detail & Related papers (2020-12-18T20:41:53Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning [68.37641996188133]
We introduce a framework for tracking real-time energy consumption and carbon emissions.
We create a leaderboard for energy efficient reinforcement learning algorithms.
We propose strategies for mitigation of carbon emissions and reduction of energy consumption.
arXiv Detail & Related papers (2020-01-31T05:12:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.