Towards Efficient Multi-Objective Optimisation for Real-World Power Grid Topology Control
- URL: http://arxiv.org/abs/2502.00034v1
- Date: Fri, 24 Jan 2025 21:40:19 GMT
- Title: Towards Efficient Multi-Objective Optimisation for Real-World Power Grid Topology Control
- Authors: Yassine El Manyari, Anton R. Fuxjager, Stefan Zahlner, Joost Van Dijk, Alberto Castagna, Davide Barbieri, Jan Viebahn, Marcel Wasserer
- Abstract summary: We present a two-phase, efficient and scalable Multi-Objective Optimisation (MOO) method designed for grid topology control.
We validate our approach using historical data from TenneT, a European Transmission System Operator (TSO).
Based on current congestion costs and inefficiencies in grid operations, adoption of our approach by TSOs could save millions of euros annually.
- Score: 0.1806830971023738
- Abstract: Power grid operators face increasing difficulties in the control room as the increase in energy demand and the shift to renewable energy introduce new complexities in managing congestion and maintaining a stable supply. Effective grid topology control requires advanced tools capable of handling multi-objective trade-offs. While Reinforcement Learning (RL) offers a promising framework for tackling such challenges, existing Multi-Objective Reinforcement Learning (MORL) approaches fail to scale to the large state and action spaces inherent in real-world grid operations. Here we present a two-phase, efficient and scalable Multi-Objective Optimisation (MOO) method designed for grid topology control, combining an efficient RL learning phase with a rapid planning phase to generate day-ahead plans for unseen scenarios. We validate our approach using historical data from TenneT, a European Transmission System Operator (TSO), demonstrating minimal deployment time: day-ahead plans are generated within 4-7 minutes with strong performance. These results underline the potential of our scalable method to support real-world power grid management, offering a practical, computationally efficient, and time-effective tool for operational planning. Based on current congestion costs and inefficiencies in grid operations, adoption of our approach by TSOs could save millions of euros annually, providing a compelling economic incentive for its integration in the control room.
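To make the two-phase idea more concrete, below is a minimal, self-contained Python sketch of a learning-then-planning split with a scalarised multi-objective reward. Everything here (the ToyGridEnv, the objective names, the tabular learning rule, and the weight grid) is a hypothetical illustration of the general pattern, not the authors' implementation or API.

```python
# Illustrative sketch only: a generic two-phase structure (learning + planning)
# for multi-objective topology control. All names and the toy environment are
# hypothetical stand-ins, not the paper's actual method.
import random

OBJECTIVES = ["congestion", "switching_cost"]  # assumed objectives for illustration

def scalarise(rewards, weights):
    """Combine per-objective rewards into a single scalar via a weight vector."""
    return sum(weights[k] * rewards[k] for k in OBJECTIVES)

class ToyGridEnv:
    """Minimal stand-in for a grid environment: states are hours, actions are topology IDs."""
    def __init__(self, horizon=24, n_actions=5, seed=0):
        self.horizon, self.n_actions = horizon, n_actions
        self.rng = random.Random(seed)

    def step_rewards(self, hour, action):
        # Hypothetical per-objective rewards (higher is better).
        return {
            "congestion": -abs(self.rng.random() - action / self.n_actions),
            "switching_cost": -0.1 * (action != 0),
        }

def learn_phase(env, weight_vectors, episodes=200):
    """Phase 1 (sketch): learn a greedy action preference per (hour, weights) from samples."""
    policy = {}
    for w in weight_vectors:
        key = tuple(sorted(w.items()))
        table = [[0.0] * env.n_actions for _ in range(env.horizon)]
        for _ in range(episodes):
            for hour in range(env.horizon):
                a = random.randrange(env.n_actions)
                table[hour][a] += scalarise(env.step_rewards(hour, a), w)
        policy[key] = [max(range(env.n_actions), key=lambda a: row[a]) for row in table]
    return policy

def planning_phase(policy, weights, horizon=24):
    """Phase 2 (sketch): assemble a day-ahead plan by reading out the learned policy."""
    key = tuple(sorted(weights.items()))
    return [policy[key][hour] for hour in range(horizon)]

if __name__ == "__main__":
    env = ToyGridEnv()
    weight_grid = [{"congestion": 0.8, "switching_cost": 0.2},
                   {"congestion": 0.5, "switching_cost": 0.5}]
    learned = learn_phase(env, weight_grid)
    plan = planning_phase(learned, weight_grid[0])
    print("Day-ahead topology plan (one action per hour):", plan)
```

In this sketch the weight grid stands in for the multi-objective trade-off: each weight vector yields one policy in the learning phase, and the planning phase only reads the learned preferences out, which is why the planning step can be fast relative to learning.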
Related papers
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z) - Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in
Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - DClEVerNet: Deep Combinatorial Learning for Efficient EV Charging
Scheduling in Large-scale Networked Facilities [5.78463306498655]
Electric vehicles (EVs) might stress distribution networks significantly, degrading their performance and jeopardizing stability.
Modern power grids require coordinated or "smart" charging strategies capable of optimizing EV charging scheduling in a scalable and efficient fashion.
We formulate a time-coupled binary optimization problem that maximizes EV users' total welfare gain while accounting for the network's available power capacity and stations' occupancy limits.
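As a rough illustration of the kind of formulation this summary describes (the symbols below are assumptions for exposition, not the paper's exact model), a time-coupled binary charging schedule could be written as:

```latex
\max_{x_{i,t} \in \{0,1\}} \sum_{i} \sum_{t} w_{i,t}\, x_{i,t}
\quad \text{s.t.} \quad
\sum_{i} p_{i}\, x_{i,t} \le P_t^{\mathrm{cap}} \;\; \forall t,
\qquad
\sum_{i} x_{i,t} \le C_t \;\; \forall t,
```

where x_{i,t} indicates whether EV i charges in slot t, w_{i,t} is the associated welfare gain, p_i the charging power, P_t^{cap} the network's available capacity, and C_t the stations' occupancy limit.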
arXiv Detail & Related papers (2023-05-18T14:03:47Z) - Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A
Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z) - Real-time scheduling of renewable power systems through planning-based
reinforcement learning [13.65517429683729]
Renewable energy sources have posed significant challenges to traditional power scheduling.
New developments in reinforcement learning have demonstrated the potential to solve this problem.
We are the first to propose a systematic solution based on a state-of-the-art reinforcement learning algorithm and a real power grid environment.
arXiv Detail & Related papers (2023-03-09T12:19:20Z) - Reinforcement Learning Based Power Grid Day-Ahead Planning and
AI-Assisted Control [0.27998963147546135]
We introduce a congestion management approach consisting of a redispatching agent and a machine learning-based optimization agent.
Compared to a typical redispatching-only agent, it was able to keep a simulated grid in operation longer while at the same time reducing operational cost.
The aim of this paper is to bring this promising technology closer to the real world of power grid operation.
arXiv Detail & Related papers (2023-02-15T13:38:40Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Stabilizing Voltage in Power Distribution Networks via Multi-Agent
Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z) - Scheduling and Power Control for Wireless Multicast Systems via Deep
Reinforcement Learning [33.737301955006345]
Multicasting in wireless systems is a way to exploit the redundancy in user requests in a Content Centric Network.
Power control and optimal scheduling can significantly improve the wireless multicast network's performance under fading.
We show that power control policy can be learnt for reasonably large systems via this approach.
arXiv Detail & Related papers (2020-09-27T15:59:44Z) - NeurOpt: Neural network based optimization for building energy
management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)