Reinforcement learning to maximise wind turbine energy generation
- URL: http://arxiv.org/abs/2402.11384v1
- Date: Sat, 17 Feb 2024 21:35:13 GMT
- Title: Reinforcement learning to maximise wind turbine energy generation
- Authors: Daniel Soler, Oscar Mariño, David Huergo, Martín de Frutos,
Esteban Ferrer
- Abstract summary: We propose a reinforcement learning strategy to control wind turbine energy generation by actively changing the rotor speed, the rotor yaw angle and the blade pitch angle.
A double deep Q-learning with a prioritized experience replay agent is coupled with a blade element momentum model and is trained to allow control for changing winds.
The agent is trained to decide the best control (speed, yaw, pitch) for simple steady winds and is subsequently challenged with real dynamic turbulent winds, showing good performance.
- Score: 0.8437187555622164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a reinforcement learning strategy to control wind turbine energy
generation by actively changing the rotor speed, the rotor yaw angle and the
blade pitch angle. A double deep Q-learning with a prioritized experience
replay agent is coupled with a blade element momentum model and is trained to
allow control for changing winds. The agent is trained to decide the best
control (speed, yaw, pitch) for simple steady winds and is subsequently
challenged with real dynamic turbulent winds, showing good performance. The
double deep Q-learning is compared with a classic value iteration
reinforcement learning control and both strategies outperform a classic PID
control in all environments. Furthermore, the reinforcement learning approach
is well suited to changing environments including turbulent/gusty winds,
showing great adaptability. Finally, we compare all control strategies with
real winds and compute the annual energy production. In this case, the double
deep Q-learning algorithm also outperforms classic methodologies.
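The abstract's core technique can be illustrated with a minimal sketch of the double Q-learning update it builds on. This is tabular rather than deep, uses a toy pitch-only action space, and replaces the blade element momentum model with a made-up power proxy; all state/action discretizations, the reward shape, and the hyperparameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's BEM model):
# states = discretized wind speeds, actions = pitch-angle settings.
N_STATES, N_ACTIONS = 5, 7
pitch_angles = np.linspace(-10.0, 20.0, N_ACTIONS)   # degrees
wind_speeds = np.linspace(5.0, 13.0, N_STATES)       # m/s

def reward(state, action):
    """Idealized power proxy: peaks at a wind-dependent optimal pitch."""
    opt = 2.0 + 0.5 * wind_speeds[state]             # made-up optimum
    return -((pitch_angles[action] - opt) ** 2)

# Double Q-learning keeps two value tables: one selects the greedy
# next action, the other evaluates it, reducing maximization bias.
Qa = np.zeros((N_STATES, N_ACTIONS))
Qb = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(5000):
    s = rng.integers(N_STATES)
    for _ in range(10):
        q = Qa[s] + Qb[s]
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q))
        r = reward(s, a)
        s_next = rng.integers(N_STATES)              # random wind change
        if rng.random() < 0.5:                       # update one table at random
            a_star = int(np.argmax(Qa[s_next]))
            Qa[s, a] += alpha * (r + gamma * Qb[s_next, a_star] - Qa[s, a])
        else:
            a_star = int(np.argmax(Qb[s_next]))
            Qb[s, a] += alpha * (r + gamma * Qa[s_next, a_star] - Qb[s, a])
        s = s_next

# After training, the greedy policy tracks the wind-dependent optimal pitch.
policy = np.argmax(Qa + Qb, axis=1)
print([round(pitch_angles[a], 1) for a in policy])
```

The paper's agent extends this pattern with deep networks, prioritized experience replay, and a three-dimensional (speed, yaw, pitch) action space.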
Related papers
- ControlNeXt: Powerful and Efficient Control for Image and Video Generation [59.62289489036722]
We propose ControlNeXt: a powerful and efficient method for controllable image and video generation.
We first design a more straightforward and efficient architecture, replacing heavy additional branches with minimal additional cost.
As for training, we reduce up to 90% of learnable parameters compared to the alternatives.
arXiv Detail & Related papers (2024-08-12T11:41:18Z)
- Deep Reinforcement Learning for Multi-Objective Optimization: Enhancing Wind Turbine Energy Generation while Mitigating Noise Emissions [0.4218593777811082]
We develop a torque-pitch control framework using deep reinforcement learning for wind turbines.
We employ a double deep Q-learning, coupled to a blade element momentum solver, to enable precise control over wind turbine parameters.
arXiv Detail & Related papers (2024-07-18T09:21:51Z)
- A Novel Correlation-optimized Deep Learning Method for Wind Speed Forecast [12.61580086941575]
The increasing installation rate of wind power poses great challenges to the global power system.
Deep learning is progressively being applied to wind speed prediction.
New cognition and memory units (CMU) are designed to reinforce the traditional deep learning framework.
arXiv Detail & Related papers (2023-06-03T02:47:46Z)
- Skip Training for Multi-Agent Reinforcement Learning Controller for Industrial Wave Energy Converters [94.84709449845352]
Recent Wave Energy Converters (WEC) are equipped with multiple legs and generators to maximize energy generation.
Traditional controllers have shown limitations in capturing complex wave patterns, and controllers must maximize energy capture efficiently.
This paper introduces a Multi-Agent Reinforcement Learning controller (MARL), which outperforms the traditionally used spring damper controller.
arXiv Detail & Related papers (2022-09-13T00:20:31Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Optimizing Airborne Wind Energy with Reinforcement Learning [0.0]
Reinforcement Learning is a technique that learns to associate observations with profitable actions without requiring prior knowledge of the system.
We show that in a simulated environment Reinforcement Learning finds an efficient way to control a kite so that it can tow a vehicle for long distances.
arXiv Detail & Related papers (2022-03-27T10:28:16Z)
- Measuring Wind Turbine Health Using Drifting Concepts [55.87342698167776]
We propose two new approaches for the analysis of wind turbine health.
The first method evaluates decreases or increases in relatively high and low power production.
The second method evaluates the overall drift of the extracted concepts.
arXiv Detail & Related papers (2021-12-09T14:04:55Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Meta-Learning-Based Robust Adaptive Flight Control Under Uncertain Wind Conditions [13.00214468719929]
Realtime model learning is challenging for complex dynamical systems, such as drones flying in variable wind conditions.
We propose an online composite adaptation method that treats outputs from a deep neural network as a set of basis functions.
We validate our approach by flying a drone in an open air wind tunnel under varying wind conditions and along challenging trajectories.
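The idea behind this entry's online composite adaptation, freezing a learned representation and adapting only linear coefficients on top of it, can be sketched without the deep network. Here a fixed analytic basis stands in for the pretrained features, and the gradient adaptation law, learning rate, and disturbance model are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed "learned" basis (stands in for frozen DNN outputs).
def basis(x):
    return np.array([1.0, x, np.sin(x), np.cos(x)])

# Unknown wind-disturbance coefficients the adapter must track online.
true_coeffs = np.array([0.5, -0.2, 1.0, 0.3])

a_hat = np.zeros(4)        # adapted linear coefficients
lr = 0.05                  # adaptation gain
for _ in range(20000):
    x = rng.uniform(-3, 3)                    # sampled flight condition
    phi = basis(x)
    err = true_coeffs @ phi - a_hat @ phi     # residual force error
    a_hat += lr * err * phi                   # gradient adaptation law
```

Because only the low-dimensional coefficients change online, this kind of adaptation can run in real time while the expensive representation learning happens offline.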
arXiv Detail & Related papers (2021-03-02T18:43:59Z)
- Scalable Optimization for Wind Farm Control using Coordination Graphs [5.56699571220921]
A wind farm controller is required to match the farm's power production with a power demand imposed by the grid operator.
This is a non-trivial optimization problem, as complex dependencies exist between the wind turbines.
We propose a new learning method for wind farm control that leverages the sparse wind farm structure to factorize the optimization problem.
arXiv Detail & Related papers (2021-01-19T20:12:30Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller that utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.