How to craft a deep reinforcement learning policy for wind farm flow control
- URL: http://arxiv.org/abs/2506.06204v1
- Date: Fri, 06 Jun 2025 16:07:05 GMT
- Title: How to craft a deep reinforcement learning policy for wind farm flow control
- Authors: Elie Kadoche, Pascal Bianchi, Florence Carton, Philippe Ciblat, Damien Ernst,
- Abstract summary: Wake effects between turbines can significantly reduce overall energy production in wind farms. Existing machine learning approaches are limited to quasi-static wind conditions or small wind farms. This work presents a new deep reinforcement learning methodology to develop a wake steering policy.
- Score: 5.195101477698898
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Within wind farms, wake effects between turbines can significantly reduce overall energy production. Wind farm flow control encompasses methods designed to mitigate these effects through coordinated turbine control. Wake steering, for example, consists of intentionally misaligning certain turbines with the wind to optimize airflow and increase power output. However, designing a robust wake steering controller remains challenging, and existing machine learning approaches are limited to quasi-static wind conditions or small wind farms. This work presents a new deep reinforcement learning methodology to develop a wake steering policy that overcomes these limitations. Our approach introduces a novel architecture that combines graph attention networks and multi-head self-attention blocks, alongside a novel reward function and training strategy. The resulting model computes the yaw angles of each turbine, optimizing energy production in time-varying wind conditions. An empirical study conducted on a steady-state, low-fidelity simulation shows that our model requires approximately 10 times fewer training steps than a fully connected neural network and achieves more robust performance compared to a strong optimization baseline, increasing energy production by up to 14%. To the best of our knowledge, this is the first deep reinforcement learning-based wake steering controller to generalize effectively across any time-varying wind conditions in a low-fidelity, steady-state numerical simulation setting.
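The abstract names a policy architecture that combines graph attention over the turbine graph with multi-head self-attention, ending in per-turbine yaw angles. The sketch below is a minimal NumPy illustration of that combination, not the authors' implementation: the layer sizes, the fully connected turbine graph, the residual connection, and the tanh-bounded yaw head are all illustrative assumptions.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0.0, x, slope * x)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class GATLayer:
    """Single-head graph attention layer (Velickovic et al. style)."""
    def __init__(self, d_in, d_out, rng):
        self.W = rng.standard_normal((d_in, d_out)) * 0.1
        self.a_src = rng.standard_normal(d_out) * 0.1
        self.a_dst = rng.standard_normal(d_out) * 0.1

    def __call__(self, h, adj):
        z = h @ self.W                                   # (N, d_out)
        # e[i, j] = LeakyReLU(a_src . z_i + a_dst . z_j)
        logits = leaky_relu(np.add.outer(z @ self.a_src, z @ self.a_dst))
        logits = np.where(adj > 0, logits, -1e9)         # mask non-neighbours
        alpha = softmax(logits, axis=1)                  # row-wise attention
        return np.tanh(alpha @ z)

class MultiHeadSelfAttention:
    """Scaled dot-product self-attention over the set of turbines."""
    def __init__(self, d_model, n_heads, rng):
        assert d_model % n_heads == 0
        self.h, self.dk = n_heads, d_model // n_heads
        self.Wq, self.Wk, self.Wv, self.Wo = (
            rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))

    def __call__(self, x):
        n, d = x.shape
        def split(m):  # (N, d) -> (heads, N, d_k)
            return (x @ m).reshape(n, self.h, self.dk).transpose(1, 0, 2)
        q, k, v = split(self.Wq), split(self.Wk), split(self.Wv)
        att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(self.dk), axis=-1)
        out = (att @ v).transpose(1, 0, 2).reshape(n, d)
        return out @ self.Wo

class YawPolicy:
    """Maps per-turbine features to yaw angles in [-max_yaw, max_yaw] degrees."""
    def __init__(self, d_in, d_model=16, n_heads=4, max_yaw=30.0, seed=0):
        rng = np.random.default_rng(seed)
        self.gat = GATLayer(d_in, d_model, rng)
        self.mhsa = MultiHeadSelfAttention(d_model, n_heads, rng)
        self.w_out = rng.standard_normal(d_model) * 0.1
        self.max_yaw = max_yaw

    def __call__(self, features, adj):
        z = self.gat(features, adj)
        z = z + self.mhsa(z)                             # residual connection
        return self.max_yaw * np.tanh(z @ self.w_out)    # (N,) yaw angles

# Toy 3-turbine row: features = (x, y, wind speed, wind direction), all made up.
feats = np.array([[0.0, 0.0, 8.0, 270.0],
                  [500.0, 0.0, 8.0, 270.0],
                  [1000.0, 0.0, 8.0, 270.0]])
adj = np.ones((3, 3))                                    # fully connected graph
yaws = YawPolicy(d_in=4)(feats, adj)
```

Because both attention blocks operate on sets of turbines rather than a fixed-size input vector, the same weights apply to farms of any size, which is one plausible reason such an architecture can generalize better than a fully connected network.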
Related papers
- Reinforcement Learning Increases Wind Farm Power Production by Enabling Closed-Loop Collaborative Control [0.0]
Traditional wind farm control operates each turbine independently to maximize individual power output. Coordinated wake steering across the entire farm can substantially increase the combined wind farm energy production. This is the first reinforcement learning controller integrated directly with high-fidelity large-eddy simulation. Results establish dynamic flow-responsive control as a transformative approach to wind farm optimization.
arXiv Detail & Related papers (2025-06-25T15:53:12Z) - AI-Enhanced Automatic Design of Efficient Underwater Gliders [60.45821679800442]
Building an automated design framework is challenging due to the complexities of representing glider shapes and the high computational costs associated with modeling complex solid-fluid interactions. We introduce an AI-enhanced automated computational framework designed to overcome these limitations by enabling the creation of underwater robots with non-trivial hull shapes. Our approach involves an algorithm that co-optimizes both shape and control signals, utilizing a reduced-order geometry representation and a differentiable neural-network-based fluid surrogate model.
arXiv Detail & Related papers (2025-04-30T23:55:44Z) - Harvesting energy from turbulent winds with Reinforcement Learning [0.0]
Airborne Wind Energy (AWE) is an emerging technology designed to harness the power of high-altitude winds. AWE is based on flying devices that, tethered to a ground station and driven by the wind, convert its mechanical energy into electrical energy by means of a generator. Our aim is to explore the possibility of replacing these techniques with an approach based on Reinforcement Learning (RL).
arXiv Detail & Related papers (2024-12-18T15:40:40Z) - Deep Reinforcement Learning for Multi-Objective Optimization: Enhancing Wind Turbine Energy Generation while Mitigating Noise Emissions [0.4218593777811082]
We develop a torque-pitch control framework using deep reinforcement learning for wind turbines.
We employ a double deep Q-learning agent, coupled to a blade element momentum solver, to enable precise control over wind turbine parameters.
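Double deep Q-learning, used by this related paper (and by the turbine-control paper below), decouples action selection from action evaluation to reduce the overestimation bias of standard Q-learning. A minimal sketch of the target computation, with NumPy tables standing in for the two networks and all names and sizes being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 5, 3, 0.99

# The online and target Q-tables stand in for the two neural networks.
q_online = rng.standard_normal((n_states, n_actions))
q_target = rng.standard_normal((n_states, n_actions))

def double_q_target(reward, next_state, done):
    """TD target for double Q-learning."""
    # Select the greedy next action with the ONLINE network...
    a_star = int(np.argmax(q_online[next_state]))
    # ...but evaluate that action with the TARGET network.
    if done:
        return reward
    return reward + gamma * q_target[next_state, a_star]

td_target = double_q_target(reward=1.0, next_state=2, done=False)
```

The online table would then be nudged toward `td_target` for the visited state-action pair, while the target table is only refreshed periodically; prioritized experience replay, mentioned below, additionally samples transitions with large TD error more often.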
arXiv Detail & Related papers (2024-07-18T09:21:51Z) - Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WECs) must handle multiple simultaneous waves coming from different directions, called spread waves.
These complex devices need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
arXiv Detail & Related papers (2024-04-17T02:04:10Z) - Reinforcement learning to maximise wind turbine energy generation [0.8437187555622164]
We propose a reinforcement learning strategy to control wind turbine energy generation by actively changing the rotor speed, the rotor yaw angle and the blade pitch angle.
A double deep Q-learning agent with prioritized experience replay is coupled with a blade element momentum model and is trained to allow control for changing winds.
The agent is trained to decide the best control (speed, yaw, pitch) for simple steady winds and is subsequently challenged with real dynamic turbulent winds, showing good performance.
arXiv Detail & Related papers (2024-02-17T21:35:13Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z) - Optimizing Airborne Wind Energy with Reinforcement Learning [0.0]
Reinforcement Learning is a technique that learns to associate observations with profitable actions without requiring prior knowledge of the system.
We show that in a simulated environment Reinforcement Learning finds an efficient way to control a kite so that it can tow a vehicle for long distances.
arXiv Detail & Related papers (2022-03-27T10:28:16Z) - Measuring Wind Turbine Health Using Drifting Concepts [55.87342698167776]
We propose two new approaches for the analysis of wind turbine health.
The first method aims at evaluating the decrease or increase in relatively high and low power production.
The second method evaluates the overall drift of the extracted concepts.
arXiv Detail & Related papers (2021-12-09T14:04:55Z) - Scalable Optimization for Wind Farm Control using Coordination Graphs [5.56699571220921]
A wind farm controller is required to match the farm's power production with a power demand imposed by the grid operator.
This is a non-trivial optimization problem, as complex dependencies exist between the wind turbines.
We propose a new learning method for wind farm control that leverages the sparse wind farm structure to factorize the optimization problem.
arXiv Detail & Related papers (2020-01-22T00:51:03Z) - NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce this cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.