Reinforcement Learning Increases Wind Farm Power Production by Enabling Closed-Loop Collaborative Control
- URL: http://arxiv.org/abs/2506.20554v1
- Date: Wed, 25 Jun 2025 15:53:12 GMT
- Title: Reinforcement Learning Increases Wind Farm Power Production by Enabling Closed-Loop Collaborative Control
- Authors: Andrew Mole, Max Weissenbacher, Georgios Rigas, Sylvain Laizet
- Abstract summary: Traditional wind farm control operates each turbine independently to maximize individual power output. Coordinated wake steering across the entire farm can substantially increase the combined wind farm energy production. This work presents the first reinforcement learning controller integrated directly with high-fidelity large-eddy simulation. The results establish dynamic flow-responsive control as a transformative approach to wind farm optimization.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Traditional wind farm control operates each turbine independently to maximize individual power output. However, coordinated wake steering across the entire farm can substantially increase the combined wind farm energy production. Although dynamic closed-loop control has proven effective in flow control applications, wind farm optimization has relied primarily on static, low-fidelity simulators that ignore critical turbulent flow dynamics. In this work, we present the first reinforcement learning (RL) controller integrated directly with high-fidelity large-eddy simulation (LES), enabling real-time response to atmospheric turbulence through collaborative, dynamic control strategies. Our RL controller achieves a 4.30% increase in wind farm power output compared to baseline operation, nearly doubling the 2.19% gain from static optimal yaw control obtained through Bayesian optimization. These results establish dynamic flow-responsive control as a transformative approach to wind farm optimization, with direct implications for accelerating renewable energy deployment to net-zero targets.
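The closed-loop setup described in the abstract amounts to an optimizer in the loop with a flow solver. The sketch below stands in for that loop with a toy analytic power surrogate in place of the LES and a simple accept-if-better search in place of the paper's RL agent; the power model, its coefficients, and the search rule are illustrative assumptions, not the paper's method.

```python
import random

# Toy surrogate for the flow solver: farm power as a function of the
# upstream turbine's yaw angle (degrees). Coefficients are invented.
def farm_power(yaw_deg):
    # Wake steering: a moderate upstream yaw deflects the wake off the
    # downstream rotor, so total power peaks away from zero yaw.
    upstream = 1.0 - 1e-4 * yaw_deg**2            # small yaw-misalignment loss
    downstream = 0.6 + 0.01 * yaw_deg - 2e-4 * yaw_deg**2
    return upstream + downstream

def greedy_yaw_search(steps=200, step_deg=1.0, seed=0):
    """Minimal closed-loop controller: perturb yaw, keep power-raising moves."""
    rng = random.Random(seed)
    yaw, best = 0.0, farm_power(0.0)
    for _ in range(steps):
        trial = yaw + rng.choice([-step_deg, step_deg])
        p = farm_power(trial)
        if p > best:                              # accept only improvements
            yaw, best = trial, p
    return yaw, best

yaw, power = greedy_yaw_search()
```

Even this crude loop finds a nonzero optimal yaw, which is the qualitative point of wake steering: the upstream turbine sacrifices a little of its own power to raise farm-wide output.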
Related papers
- How to craft a deep reinforcement learning policy for wind farm flow control [5.195101477698898]
Wake effects between turbines can significantly reduce overall energy production in wind farms. Existing machine learning approaches are limited to quasi-static wind conditions or small wind farms. This work presents a new deep reinforcement learning methodology to develop a wake steering policy.
arXiv Detail & Related papers (2025-06-06T16:07:05Z)
- WFCRL: A Multi-Agent Reinforcement Learning Benchmark for Wind Farm Control [0.9374652839580183]
We introduce WFCRL (Wind Farm Control with Reinforcement Learning), the first open suite of multi-agent reinforcement learning environments for the wind farm control problem. Each turbine is an agent and can learn to adjust its yaw, pitch, or torque to maximize the common objective. For each simulator, 10 wind layouts are provided, including 5 real wind farms.
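As a rough illustration of the benchmark's framing (each turbine an agent, one common objective), the toy environment below gives every agent a yaw action and pays all agents the same farm-wide power reward. The class, power model, and wake penalty are invented for illustration and do not reproduce the actual WFCRL API.

```python
import math

class ToyWindFarmEnv:
    """Generic multi-agent sketch (not the WFCRL API): each turbine is an
    agent whose yaw action feeds a shared total-power reward."""
    def __init__(self, n_turbines=3):
        self.n = n_turbines
        self.yaws = [0.0] * n_turbines

    def step(self, actions):
        # actions: per-agent yaw increments in degrees
        self.yaws = [y + a for y, a in zip(self.yaws, actions)]
        power = 0.0
        for i, y in enumerate(self.yaws):
            p = math.cos(math.radians(y)) ** 3        # yawed-rotor power loss
            if i > 0 and abs(self.yaws[i - 1]) < 10:  # upstream wake hits rotor
                p *= 0.6                              # crude wake deficit
            power += p
        return self.yaws, [power] * self.n            # common reward per agent

env = ToyWindFarmEnv()
obs, rewards = env.step([15.0, 15.0, 0.0])
```

The shared reward is what makes the problem cooperative: each agent's best response depends on its neighbours' yaw choices, not just its own.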
arXiv Detail & Related papers (2025-01-23T12:01:17Z)
- Deep Reinforcement Learning for Multi-Objective Optimization: Enhancing Wind Turbine Energy Generation while Mitigating Noise Emissions [0.4218593777811082]
We develop a torque-pitch control framework using deep reinforcement learning for wind turbines.
We employ double deep Q-learning, coupled to a blade element momentum solver, to enable precise control over wind turbine parameters.
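The key mechanic of double Q-learning, selecting the greedy next action with one value estimate and evaluating it with the other, fits in a few lines. The tabular update below is a generic illustration with made-up states and rewards; the paper's agent uses neural networks, with a blade element momentum solver as the environment.

```python
def double_q_update(qa, qb, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One double Q-learning step: select the greedy next action with Q_A,
    evaluate it with Q_B, reducing the maximization bias of plain Q-learning."""
    a_star = max(qa[s_next], key=qa[s_next].get)   # action selection: Q_A
    target = r + gamma * qb[s_next][a_star]        # action evaluation: Q_B
    qa[s][a] += alpha * (target - qa[s][a])

# Toy two-state example: actions 'inc'/'dec' stand in for pitch/torque
# commands; the solver's reward is replaced by a constant.
qa = {0: {'inc': 0.0, 'dec': 0.0}, 1: {'inc': 0.0, 'dec': 0.0}}
qb = {0: {'inc': 0.0, 'dec': 0.0}, 1: {'inc': 0.5, 'dec': 0.0}}
double_q_update(qa, qb, s=0, a='inc', r=1.0, s_next=1)
```

In a full agent, the roles of the two tables (or networks) are swapped or the second is a slowly updated target copy; here one update suffices to show the decoupling.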
arXiv Detail & Related papers (2024-07-18T09:21:51Z)
- Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WECs) must handle multiple simultaneous waves coming from different directions, known as spread waves.
These complex devices need controllers that balance multiple objectives: energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
arXiv Detail & Related papers (2024-04-17T02:04:10Z)
- Long-term Wind Power Forecasting with Hierarchical Spatial-Temporal Transformer [112.12271800369741]
Wind power is attracting increasing attention around the world due to its renewable and pollution-free nature, among other advantages.
Accurate wind power forecasting (WPF) can effectively reduce power fluctuations in power system operations.
Existing methods are mainly designed for short-term predictions and lack effective spatial-temporal feature augmentation.
arXiv Detail & Related papers (2023-05-30T04:03:15Z)
- Collective Large-scale Wind Farm Multivariate Power Output Control Based on Hierarchical Communication Multi-Agent Proximal Policy Optimization [5.062455071500403]
Wind power is becoming an increasingly important source of renewable energy worldwide.
Wind farm power control faces significant challenges due to the high system complexity inherent in these farms.
A novel communication-based multi-agent deep reinforcement learning approach to large-scale wind farm multivariate control is proposed to handle this challenge.
arXiv Detail & Related papers (2023-05-17T12:26:08Z)
- Learning to Exploit Elastic Actuators for Quadruped Locomotion [7.9585932082270014]
Spring-based actuators in legged locomotion provide energy efficiency and improved performance, but increase the difficulty of controller design.
We propose to learn model-free controllers directly on the real robot.
We evaluate the proposed approach on the DLR elastic quadruped bert.
arXiv Detail & Related papers (2022-09-15T09:43:17Z)
- Skip Training for Multi-Agent Reinforcement Learning Controller for Industrial Wave Energy Converters [94.84709449845352]
Recent Wave Energy Converters (WEC) are equipped with multiple legs and generators to maximize energy generation.
Traditional controllers have shown limitations in capturing complex wave patterns, and controllers must maximize energy capture efficiently.
This paper introduces a Multi-Agent Reinforcement Learning controller (MARL), which outperforms the traditionally used spring damper controller.
arXiv Detail & Related papers (2022-09-13T00:20:31Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Scalable Optimization for Wind Farm Control using Coordination Graphs [5.56699571220921]
A wind farm controller is required to match the farm's power production with a power demand imposed by the grid operator.
This is a non-trivial optimization problem, as complex dependencies exist between the wind turbines.
We propose a new learning method for wind farm control that leverages the sparse wind farm structure to factorize the optimization problem.
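The factorization idea can be sketched on a chain-shaped coordination graph: if total power decomposes into pairwise terms between neighbouring turbines, a dynamic program over the chain recovers the joint optimum without enumerating all 2^n joint actions. The pairwise payoff below is invented for illustration; the paper's graph structure and payoffs come from the actual wind farm.

```python
from itertools import product

def pair_payoff(a_up, a_down):
    # Illustrative pairwise term: yawing upstream (action 1) costs the
    # upstream turbine a little but lifts the waked downstream turbine.
    upstream = 1.0 - 0.1 * a_up
    downstream = (1.0 if a_up else 0.6) * (1.0 - 0.1 * a_down)
    return upstream + downstream

def chain_optimum(n, payoff):
    """DP over a chain coordination graph: best[a] is the best achievable
    total of all pairs downstream of a turbine taking action a."""
    best = {a: 0.0 for a in (0, 1)}        # no pairs beyond the last turbine
    for _ in range(n - 1):                 # fold in one pair at a time
        best = {a: max(payoff(a, b) + best[b] for b in (0, 1)) for a in (0, 1)}
    return max(best.values())

def brute_force(n, payoff):
    # Exponential reference: enumerate every joint action.
    return max(sum(payoff(acts[i], acts[i + 1]) for i in range(n - 1))
               for acts in product((0, 1), repeat=n))
```

The DP touches 2 x 2 action pairs per edge, so its cost grows linearly with farm size, while the brute-force check grows exponentially; this is exactly the sparsity the coordination-graph factorization exploits.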
arXiv Detail & Related papers (2021-01-19T20:12:30Z)
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.