Physics-informed Neural-operator Predictive Control for Drag Reduction in Turbulent Flows
- URL: http://arxiv.org/abs/2510.03360v1
- Date: Fri, 03 Oct 2025 00:18:26 GMT
- Title: Physics-informed Neural-operator Predictive Control for Drag Reduction in Turbulent Flows
- Authors: Zelin Zhao, Zongyi Li, Kimia Hassibi, Kamyar Azizzadenesheli, Junchi Yan, H. Jane Bae, Di Zhou, Anima Anandkumar
- Abstract summary: We propose an efficient deep reinforcement learning framework for modeling and control of turbulent flows. It is model-based RL for predictive control (PC), where both the policy and the observer models for turbulence control are learned jointly. We find that PINO-PC achieves a drag reduction of 39.0% under a bulk-velocity Reynolds number of 15,000, outperforming previous fluid control methods by more than 32%.
- Score: 109.99020160824553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Assessing turbulence control effects for wall friction numerically is a significant challenge since it requires expensive simulations of turbulent fluid dynamics. We instead propose an efficient deep reinforcement learning (RL) framework for modeling and control of turbulent flows. It is model-based RL for predictive control (PC), where both the policy and the observer models for turbulence control are learned jointly using Physics Informed Neural Operators (PINO), which are discretization invariant and can capture fine scales in turbulent flows accurately. Our PINO-PC outperforms prior model-free reinforcement learning methods in various challenging scenarios where the flows are of high Reynolds numbers and unseen, i.e., not provided during model training. We find that PINO-PC achieves a drag reduction of 39.0% under a bulk-velocity Reynolds number of 15,000, outperforming previous fluid control methods by more than 32%.
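The model-based predictive-control loop sketched in the abstract can be illustrated numerically. The snippet below is a minimal sketch only: it uses a linear placeholder where the paper uses a learned PINO surrogate, and a random-shooting planner with a state-energy proxy for drag; all names (`surrogate_step`, `drag_cost`, `plan`) and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 8, 2, 5, 64

# Placeholder linear dynamics standing in for the learned PINO observer model.
A = 0.9 * np.eye(STATE_DIM)
B = 0.1 * rng.standard_normal((STATE_DIM, ACTION_DIM))

def surrogate_step(x, u):
    """One step of the learned flow surrogate (here: a linear placeholder)."""
    return A @ x + B @ u

def drag_cost(x):
    """Toy proxy for wall friction: penalize state energy."""
    return float(x @ x)

def plan(x0):
    """Random-shooting predictive control: roll candidate action sequences
    through the surrogate, keep the first action of the cheapest rollout."""
    best_u, best_cost = None, np.inf
    for _ in range(N_CANDIDATES):
        us = rng.uniform(-1.0, 1.0, size=(HORIZON, ACTION_DIM))
        x, cost = x0.copy(), 0.0
        for u in us:
            x = surrogate_step(x, u)
            cost += drag_cost(x)
        if cost < best_cost:
            best_cost, best_u = cost, us[0]  # receding horizon: apply first action only
    return best_u, best_cost

x0 = rng.standard_normal(STATE_DIM)
u0, cost = plan(x0)
```

In the paper's setting the surrogate is a neural operator trained jointly with the policy, and the planner optimizes the control rather than sampling it at random; the receding-horizon structure is the same.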
Related papers
- HydroGym: A Reinforcement Learning Platform for Fluid Dynamics [2.7789211666404228]
HydroGym is a solver-independent RL platform for flow control research. Our platform includes 42 validated environments spanning from canonical laminar flows to complex three-dimensional turbulent scenarios.
arXiv Detail & Related papers (2025-12-19T12:58:06Z)
- Active Control of Turbulent Airfoil Flows Using Adjoint-based Deep Learning [0.0]
We train active neural-network flow controllers to optimize lift-to-drag ratios in turbulent airfoil flows at Reynolds number $5\times 10^4$ and Mach number 0.4. The trained flow controllers significantly improve the lift-to-drag ratios and reduce flow separation for both two- and three-dimensional air flows.
arXiv Detail & Related papers (2025-10-08T14:59:29Z)
- Model-Based Reinforcement Learning for Control of Strongly-Disturbed Unsteady Aerodynamic Flows [0.0]
We propose a model-based reinforcement learning (MBRL) approach by incorporating a novel reduced-order model as a surrogate for the full environment. The accuracy and robustness of the model are demonstrated in the scenario of a pitching airfoil within a highly disturbed environment. An application to a vertical-axis wind turbine in a disturbance-free environment is discussed in the Appendix.
arXiv Detail & Related papers (2024-08-26T23:21:44Z)
- SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning [5.036739921794781]
SINDy-RL is a framework that combines sparse identification of nonlinear dynamics (SINDy) and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and flow control problems.
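The SINDy step referenced here is sparse regression of measured derivatives onto a library of candidate functions. Below is a minimal sketch of sequentially thresholded least squares on the toy system x' = -2x; the library, threshold, and system are illustrative assumptions, not the SINDy-RL implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 200)
x = 3.0 * np.exp(-2.0 * t)      # trajectory of the true dynamics x' = -2x
dx = np.gradient(x, t)          # numerical derivative estimates

# Candidate-function library Theta(x) = [1, x, x^2]
Theta = np.column_stack([np.ones_like(x), x, x**2])

def stlsq(Theta, dx, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: fit, zero small coefficients,
    refit on the surviving columns, repeat."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

xi = stlsq(Theta, dx)           # expect a sparse vector with weight on the x term
```

The recovered coefficient vector keeps only the x term, with a value close to the true -2, which is what makes the learned dynamics model interpretable.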
arXiv Detail & Related papers (2024-03-14T05:17:39Z)
- End-to-End Reinforcement Learning of Koopman Models for Economic Nonlinear Model Predictive Control [45.84205238554709]
We present a method for reinforcement learning of Koopman surrogate models for optimal performance as part of (e)NMPC.
We show that the end-to-end trained models outperform those trained using system identification in (e)NMPC.
arXiv Detail & Related papers (2023-08-03T10:21:53Z)
- Towards Long-Term predictions of Turbulence using Neural Operators [68.8204255655161]
This work aims to develop reduced-order/surrogate models for turbulent flow simulations using machine learning.
Different model structures are analyzed, with U-NET structures performing better than the standard FNO in accuracy and stability.
arXiv Detail & Related papers (2023-07-25T14:09:53Z)
- How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning [3.1635451288803638]
We present a DRL-based real-time feedback strategy to control the hydrodynamic force on fluidic pinball.
By adequately designing reward functions and encoding historical observations, the DRL-based control was shown to make reasonable and valid control decisions.
One of these results was analyzed by a machine learning model that enabled us to shed light on the basis of decision-making and physical mechanisms of the force tracking process.
arXiv Detail & Related papers (2023-04-23T03:39:50Z)
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z)
- Improving and generalizing flow-based generative models with minibatch optimal transport [90.01613198337833]
We introduce the generalized conditional flow matching (CFM) technique for continuous normalizing flows (CNFs).
CFM features a stable regression objective like that used to train the flow in diffusion models but enjoys the efficient inference of deterministic flow models.
A variant of our objective is optimal transport CFM (OT-CFM), which creates simpler flows that are more stable to train and lead to faster inference.
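The CFM regression objective mentioned above can be computed in a few lines: sample noise/data pairs and a time, form the interpolated point on a straight-line path, and regress a velocity field onto the conditional target. The snippet is a toy numerical sketch under assumed Gaussian data and a deliberately trivial closed-form "model" standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 1024, 2

x0 = rng.standard_normal((N, D))         # source (noise) samples
x1 = rng.standard_normal((N, D)) + 4.0   # "data" samples (shifted Gaussian)
t = rng.uniform(0.0, 1.0, size=(N, 1))   # per-sample times in [0, 1]

xt = (1.0 - t) * x0 + t * x1             # straight-line conditional path
target = x1 - x0                         # conditional velocity u_t(x_t | x0, x1)

def v_model(t, x):
    """Toy velocity field guessing the mean shift; a network replaces this."""
    return np.full_like(x, 4.0)

# CFM loss: mean squared error between the model velocity and the target.
loss = np.mean(np.sum((v_model(t, xt) - target) ** 2, axis=1))
```

OT-CFM modifies this recipe by drawing (x0, x1) from a minibatch optimal-transport coupling instead of independently, which straightens the learned flow; the regression itself is unchanged.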
arXiv Detail & Related papers (2023-02-01T14:47:17Z)
- Turbulence control in plane Couette flow using low-dimensional neural ODE-based models and deep reinforcement learning [0.0]
"DManD-RL" (data-driven manifold dynamics-RL) generates a data-driven low-dimensional model of our system.
We train an RL control agent, yielding a 440-fold speedup over training on a numerical simulation.
The agent learns a policy that laminarizes 84% of unseen DNS test trajectories within 900 time units.
arXiv Detail & Related papers (2023-01-28T05:47:10Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Interpretable Stochastic Model Predictive Control using Distributional Reinforced Estimation for Quadrotor Tracking Systems [0.8411385346896411]
We present a novel trajectory tracker for autonomous quadrotor navigation in dynamic and complex environments.
The proposed framework integrates a distributional Reinforcement Learning estimator for unknown aerodynamic effects into a Model Predictive Controller.
We demonstrate that our system improves cumulative tracking errors by at least 66% under unknown and diverse aerodynamic forces.
arXiv Detail & Related papers (2022-05-14T23:27:38Z)
- A Numerical Proof of Shell Model Turbulence Closure [41.94295877935867]
We present a closure, based on deep recurrent neural networks, that quantitatively reproduces, within statistical errors, Eulerian and Lagrangian structure functions and the intermittent statistics of the energy cascade.
Our results encourage the development of similar approaches for 3D Navier-Stokes turbulence.
arXiv Detail & Related papers (2022-02-18T16:31:57Z)
- Learning to Reweight Imaginary Transitions for Model-Based Reinforcement Learning [58.66067369294337]
When the model is inaccurate or biased, imaginary trajectories may be deleterious for training the action-value and policy functions.
We adaptively reweight the imaginary transitions, so as to reduce the negative effects of poorly generated trajectories.
Our method outperforms state-of-the-art model-based and model-free RL algorithms on multiple tasks.
arXiv Detail & Related papers (2021-04-09T03:13:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.