HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
- URL: http://arxiv.org/abs/2512.17534v1
- Date: Fri, 19 Dec 2025 12:58:06 GMT
- Title: HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
- Authors: Christian Lagemann, Sajeda Mokbel, Miro Gondrum, Mario Rüttgers, Jared Callaham, Ludger Paehler, Samuel Ahnert, Nicholas Zolman, Kai Lagemann, Nikolaus Adams, Matthias Meinke, Wolfgang Schröder, Jean-Christophe Loiseau, Esther Lagemann, Steven L. Brunton
- Abstract summary: HydroGym is a solver-independent RL platform for flow control research. Our platform includes 42 validated environments spanning from canonical laminar flows to complex three-dimensional turbulent scenarios.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling and controlling fluid flows is critical for several fields of science and engineering, including transportation, energy, and medicine. Effective flow control can lead to, e.g., lift increase, drag reduction, mixing enhancement, and noise reduction. However, controlling a fluid faces several significant challenges, including high-dimensional, nonlinear, and multiscale interactions in space and time. Reinforcement learning (RL) has recently shown great success in complex domains, such as robotics and protein folding, but its application to flow control is hindered by a lack of standardized benchmark platforms and the computational demands of fluid simulations. To address these challenges, we introduce HydroGym, a solver-independent RL platform for flow control research. HydroGym integrates sophisticated flow control benchmarks, scalable runtime infrastructure, and state-of-the-art RL algorithms. Our platform includes 42 validated environments spanning from canonical laminar flows to complex three-dimensional turbulent scenarios, validated over a wide range of Reynolds numbers. We provide non-differentiable solvers for traditional RL and differentiable solvers that dramatically improve sample efficiency through gradient-enhanced optimization. Comprehensive evaluation reveals that RL agents consistently discover robust control principles across configurations, such as boundary layer manipulation, acoustic feedback disruption, and wake reorganization. Transfer learning studies demonstrate that controllers learned at one Reynolds number or geometry adapt efficiently to new conditions, requiring approximately 50% fewer training episodes. The HydroGym platform is highly extensible and scalable, providing a framework for researchers in fluid dynamics, machine learning, and control to add environments, surrogate models, and control algorithms to advance science and technology.
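The paper's environments are not reproduced here, but the reset/step control loop such RL platforms expose can be sketched with a toy surrogate. Everything below is an illustrative assumption, not HydroGym's actual API: the class name, dynamics, and controllers are hypothetical, with a lightly damped oscillator standing in for a CFD-resolved wake.

```python
class CylinderWakeEnv:
    """Toy stand-in for a flow-control environment (illustrative, not HydroGym's API).

    Real environments wrap CFD solvers; here the 'wake' is a lightly damped
    oscillator whose lift-like oscillation the controller must suppress, which
    is enough to show the reset/step interface such platforms expose.
    """

    def __init__(self, dt=0.05, horizon=200):
        self.dt, self.horizon = dt, horizon

    def reset(self):
        self.t, self.x, self.v = 0, 1.0, 0.0
        return (self.x, self.v)

    def step(self, action):
        a = max(-1.0, min(1.0, action))          # bounded jet actuator
        self.v += self.dt * (-self.x - 0.1 * self.v + a)
        self.x += self.dt * self.v
        self.t += 1
        reward = -self.x ** 2                    # penalize oscillation energy
        return (self.x, self.v), reward, self.t >= self.horizon


def rollout(policy):
    env, total, done = CylinderWakeEnv(), 0.0, False
    obs = env.reset()
    while not done:
        obs, r, done = env.step(policy(obs))
        total += r
    return total


pd_feedback = lambda obs: -2.0 * obs[0] - 1.0 * obs[1]   # hand-tuned PD feedback
no_control = lambda obs: 0.0                             # uncontrolled baseline
```

In an RL setting, a learned policy replaces the hand-tuned feedback law while the environment interface stays the same; here the PD controller simply accumulates less oscillation-energy penalty than the uncontrolled rollout.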
Related papers
- Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control
Reinforcement learning (RL) has shown promising results in active flow control (AFC). Current AFC benchmarks rely on external computational fluid dynamics (CFD) solvers, are not fully differentiable, and provide limited 3D and multi-agent support. We introduce FluidGym, the first standalone, fully differentiable benchmark suite for RL in AFC.
arXiv Detail & Related papers (2026-01-21T14:13:44Z)
- Physics-informed Neural-operator Predictive Control for Drag Reduction in Turbulent Flows
We propose an efficient deep reinforcement learning framework for modeling and control of turbulent flows. The framework uses model-based RL for predictive control (PC), where the policy and the observer models for turbulence control are learned jointly. We find that PINO-PC achieves a drag reduction of 39.0% under a bulk-velocity Reynolds number of 15,000, outperforming previous fluid control methods by more than 32%.
arXiv Detail & Related papers (2025-10-03T00:18:26Z)
- PICT -- A Differentiable, GPU-Accelerated Multi-Block PISO Solver for Simulation-Coupled Learning Tasks in Fluid Dynamics
We present our fluid simulator PICT, a differentiable pressure-implicit solver implemented in PyTorch with graphics-processing-unit (GPU) support. We first verify the accuracy of both the forward simulation and our derived gradients in various established benchmarks. We show that the gradients provided by our solver can be used to learn complicated turbulence models in 2D and 3D.
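The kind of gradient a differentiable solver provides can be illustrated with a minimal numpy sketch (an illustrative toy, not PICT's code or API): a forward-mode sensitivity is carried alongside an explicit 1-D diffusion step with a scalar-controlled source, and the resulting analytic gradient of a loss with respect to the control is checked against finite differences.

```python
import numpy as np


def step(u, c, dt, forcing):
    """One explicit diffusion step on a periodic 1-D grid with a scalar-controlled source."""
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
    return u + dt * lap + dt * c * forcing


def loss_and_grad(c, n=32, steps=50, dt=0.1):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.sin(x)            # initial field
    f = np.cos(x)            # fixed actuator footprint
    s = np.zeros(n)          # forward-mode sensitivity du/dc
    for _ in range(steps):
        # the sensitivity obeys the same linear dynamics, forced by df/dc = f
        s = step(s, 1.0, dt, f)
        u = step(u, c, dt, f)
    loss = 0.5 * float(u @ u)      # penalize remaining field energy
    grad = float(u @ s)            # chain rule: dL/dc = u_T . (du_T/dc)
    return loss, grad


# check the propagated gradient against central finite differences
c0, eps = 0.3, 1e-6
_, g = loss_and_grad(c0)
lp, _ = loss_and_grad(c0 + eps)
lm, _ = loss_and_grad(c0 - eps)
fd = (lp - lm) / (2.0 * eps)
```

In a real differentiable solver the same idea is automated by reverse-mode autodiff over thousands of state variables, which is what makes gradient-enhanced policy optimization sample-efficient.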
arXiv Detail & Related papers (2025-05-22T17:55:10Z)
- AI-Enhanced Automatic Design of Efficient Underwater Gliders
Building an automated design framework is challenging due to the complexities of representing glider shapes and the high computational costs associated with modeling complex solid-fluid interactions. We introduce an AI-enhanced automated computational framework designed to overcome these limitations by enabling the creation of underwater robots with non-trivial hull shapes. Our approach involves an algorithm that co-optimizes both shape and control signals, utilizing a reduced-order geometry representation and a differentiable neural-network-based fluid surrogate model.
arXiv Detail & Related papers (2025-04-30T23:55:44Z)
- Multi-fidelity Reinforcement Learning Control for Complex Dynamical Systems
We propose a multi-fidelity reinforcement learning framework for controlling instabilities in complex systems. The effectiveness of the proposed framework is demonstrated on two complex dynamical systems from physics.
arXiv Detail & Related papers (2025-04-08T00:50:15Z)
- Invariant Control Strategies for Active Flow Control using Graph Neural Networks
We introduce graph neural networks (GNNs) as a promising architecture for Reinforcement Learning (RL)-based flow control. GNNs process unstructured, three-dimensional flow data, preserving spatial relationships without the constraints of a Cartesian grid. We show that GNN-based control policies achieve comparable performance to existing methods while benefiting from improved generalization properties.
arXiv Detail & Related papers (2025-03-28T09:33:40Z)
- SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning
SINDy-RL is a framework for combining SINDy and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and flow control problems.
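The sparse regression at SINDy's core, sequentially thresholded least squares (STLSQ), is compact enough to sketch in numpy. This is an illustrative reimplementation on synthetic, noiseless data from a damped oscillator, not the SINDy-RL code; the library, threshold, and data below are assumptions for the demo.

```python
import numpy as np


def stlsq(theta, dxdt, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: fit, zero small coefficients, refit."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):          # refit surviving terms per state
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi


# synthetic data from a damped oscillator:  x' = y,  y' = -x - 0.1 y
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))        # sampled states
dX = np.column_stack([X[:, 1], -X[:, 0] - 0.1 * X[:, 1]])

# candidate library of monomials: [1, x, y, x^2, x*y, y^2]
x, y = X[:, 0], X[:, 1]
theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

xi = stlsq(theta, dX)    # sparse coefficient matrix: one column per state derivative
```

On this clean data the regression recovers exactly the three active terms (y for x'; -x and -0.1y for y') and zeros out the rest, which is what makes the learned dynamics model interpretable.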
arXiv Detail & Related papers (2024-03-14T05:17:39Z)
- Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning
"Box o Flows" is an experimental control system for systematically evaluating RL algorithms in dynamic real-world scenarios.
We show how state-of-the-art model-free RL algorithms can synthesize a variety of complex behaviors via simple reward specifications.
We believe that the insights gained from this preliminary study, and the availability of systems like the Box o Flows, pave the way for the systematic development of RL algorithms.
arXiv Detail & Related papers (2024-02-08T23:35:03Z)
- How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning
We present a DRL-based real-time feedback strategy to control the hydrodynamic force on fluidic pinball.
By adequately designing reward functions and encoding historical observations, the DRL-based control was shown to make reasonable and valid control decisions.
One of these results was analyzed by a machine learning model that enabled us to shed light on the basis of decision-making and physical mechanisms of the force tracking process.
arXiv Detail & Related papers (2023-04-23T03:39:50Z)
- FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation
We introduce FluidLab, a simulation environment with a diverse set of manipulation tasks involving complex fluid dynamics.
At the heart of our platform is a fully differentiable physics simulator, providing GPU-accelerated simulations and gradient calculations.
We propose several domain-specific optimization schemes coupled with differentiable physics.
arXiv Detail & Related papers (2023-03-04T07:24:22Z)
- Deep Reinforcement Learning for Computational Fluid Dynamics on HPC Systems
Reinforcement learning (RL) is highly suitable for devising control strategies in the context of dynamical systems.
Recent research results indicate that RL-augmented computational fluid dynamics (CFD) solvers can exceed the current state of the art.
We present Relexi as a scalable RL framework that bridges the gap between machine learning and modern CFD solvers on HPC systems.
arXiv Detail & Related papers (2022-05-13T08:21:18Z)
- Comparative analysis of machine learning methods for active flow control
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.