Dynamic Feature-based Deep Reinforcement Learning for Flow Control of Circular Cylinder with Sparse Surface Pressure Sensing
- URL: http://arxiv.org/abs/2307.01995v3
- Date: Sat, 1 Jun 2024 09:26:53 GMT
- Title: Dynamic Feature-based Deep Reinforcement Learning for Flow Control of Circular Cylinder with Sparse Surface Pressure Sensing
- Authors: Qiulei Wang, Lei Yan, Gang Hu, Wenli Chen, Jean Rabault, Bernd R. Noack
- Abstract summary: This study proposes a self-learning algorithm for closed-loop cylinder wake control targeting lower drag and lower lift fluctuations.
The resulting dynamic feature-based DRL (DF-DRL) automatically learns a feedback control in the plant without a dynamic model.
- Score: 6.330823385793404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes a self-learning algorithm for closed-loop cylinder wake control targeting lower drag and lower lift fluctuations with the additional challenge of sparse sensor information, taking deep reinforcement learning (DRL) as the starting point. DRL performance is significantly improved by lifting the sensor signals to dynamic features (DF), which predict future flow states. The resulting dynamic feature-based DRL (DF-DRL) automatically learns a feedback control in the plant without a dynamic model. Results show that the drag coefficient of the DF-DRL model is 25% lower than that of the vanilla model based on direct sensor feedback. More importantly, using only one surface pressure sensor, DF-DRL achieves a state-of-the-art drag coefficient reduction of about 8% at Re = 100 and significantly mitigates lift coefficient fluctuations. Hence, DF-DRL allows the deployment of sparse flow sensing without degrading the control performance. The method is also robust at higher Reynolds numbers, reducing the drag coefficient by 32.2% at Re = 500 and 46.55% at Re = 1000, indicating its broad applicability. Since surface pressure is more straightforward to measure in realistic scenarios than flow velocity, this study provides a valuable reference for experimentally designing active flow control of a circular cylinder based on wall pressure signals, an essential step toward developing intelligent control in realistic multi-input multi-output (MIMO) systems.
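As a concrete illustration of the DF idea, the sketch below lifts a single surface-pressure signal to a dynamic feature by time-delay embedding, i.e., stacking the recent sensor history into the DRL observation. The class name, delay depth, and scaling are illustrative assumptions, not the authors' implementation.
```python
import numpy as np
from collections import deque

class DynamicFeatureLift:
    """Lift an instantaneous sensor reading to a dynamic feature vector
    by time-delay embedding (stacking the last `n_delays` samples).

    Illustrative only: the DF-DRL paper's exact feature construction
    may differ; the sensor count and delay depth are assumptions.
    """

    def __init__(self, n_sensors=1, n_delays=32):
        self.buffer = deque(maxlen=n_delays)
        self.n_sensors = n_sensors
        self.n_delays = n_delays

    def reset(self, first_reading):
        # Pre-fill the history so the feature has a fixed size from step 0.
        self.buffer.clear()
        for _ in range(self.n_delays):
            self.buffer.append(np.asarray(first_reading, dtype=float))

    def __call__(self, reading):
        # Append the newest surface-pressure sample and return the
        # flattened history as the DRL observation.
        self.buffer.append(np.asarray(reading, dtype=float))
        return np.concatenate(list(self.buffer))

# Usage: one pressure tap, 32-step history -> 32-dim observation.
lift = DynamicFeatureLift(n_sensors=1, n_delays=32)
lift.reset(first_reading=[0.0])
obs = lift(np.array([-1.2]))   # feed to the policy instead of the raw sample
```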
Related papers
- Towards Active Flow Control Strategies Through Deep Reinforcement Learning [0.0]
This paper presents a deep reinforcement learning framework for active flow control (AFC) to reduce drag in aerodynamic bodies.
Tested on a 3D cylinder at Re = 100, the DRL approach achieved a 9.32% drag reduction and a 78.4% decrease in lift oscillations.
arXiv Detail & Related papers (2024-11-08T12:49:24Z)
- ImDy: Human Inverse Dynamics from Imitated Observations [47.994797555884325]
Inverse dynamics (ID) aims to reproduce the driven torques from human kinematic observations.
Conventional optimization-based ID requires expensive laboratory setups, restricting its availability.
We propose to exploit recent progress in human motion imitation algorithms to learn human inverse dynamics in a data-driven manner.
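To make the input/output contract of data-driven inverse dynamics concrete, here is a minimal sketch that regresses joint torques from kinematic features with ordinary least squares. It is a hypothetical stand-in: ImDy itself uses learned motion imitators and neural networks, and all names and sizes below are assumptions.
```python
import numpy as np

# Regress joint torques tau from kinematic features (q, qdot, qddot)
# collected from imitation rollouts. A linear least-squares fit on random
# placeholder data is shown only to make the data flow concrete.
rng = np.random.default_rng(0)
n_samples, n_joints = 1000, 3
q, qdot, qddot = (rng.standard_normal((n_samples, n_joints)) for _ in range(3))
X = np.hstack([q, qdot, qddot])                   # kinematic observations
tau = rng.standard_normal((n_samples, n_joints))  # torques from the imitator

W, *_ = np.linalg.lstsq(X, tau, rcond=None)       # fit tau ~= X @ W
tau_pred = X @ W                                  # predicted driven torques
```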
arXiv Detail & Related papers (2024-10-23T07:06:08Z)
- Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WECs) must handle multiple simultaneous waves coming from different directions, called spread waves.
These complex devices need controllers that balance multiple objectives: energy-capture efficiency, reduced structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks to model the sequential nature of the system dynamics.
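One plausible reading of "function approximations that model the sequential nature of the dynamics" is a recurrent encoder over recent observation windows; a minimal PyTorch sketch follows. The layer sizes and overall layout are assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

class SequentialPolicy(nn.Module):
    """Hypothetical WEC policy: an LSTM encodes a window of recent
    wave/force observations, then a linear head emits actuator commands."""

    def __init__(self, obs_dim=16, hidden=64, act_dim=4):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) window of recent observations.
        out, _ = self.encoder(obs_seq)
        return torch.tanh(self.head(out[:, -1]))  # bounded commands

policy = SequentialPolicy()
actions = policy(torch.randn(8, 20, 16))  # batch of 8 length-20 windows
```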
arXiv Detail & Related papers (2024-04-17T02:04:10Z)
- Compressing Deep Reinforcement Learning Networks with a Dynamic Structured Pruning Method for Autonomous Driving [63.155562267383864]
Deep reinforcement learning (DRL) has shown remarkable success in complex autonomous driving scenarios.
However, DRL models inevitably incur high memory consumption and computation, which hinders their wide deployment in resource-limited autonomous driving devices.
We introduce a novel dynamic structured pruning approach that gradually removes a DRL model's unimportant neurons during the training stage.
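A minimal sketch of the core idea, scoring whole neurons by weight magnitude and keeping only the most important ones on a gradually tightening schedule, is shown below; the paper's actual importance metric and schedule are richer, and everything here is illustrative.
```python
import numpy as np

def prune_neurons(W_in, W_out, keep_frac):
    """Structured pruning sketch: score each hidden neuron by the L2 norm
    of its incoming and outgoing weights and drop the least important
    ones. Only illustrates the 'remove whole neurons' idea."""
    scores = np.linalg.norm(W_in, axis=1) + np.linalg.norm(W_out, axis=0)
    k = max(1, int(keep_frac * len(scores)))
    keep = np.argsort(scores)[-k:]          # indices of surviving neurons
    return W_in[keep], W_out[:, keep], keep

# Gradually tighten sparsity across training (compounding keep fractions).
W_in, W_out = np.random.randn(128, 32), np.random.randn(8, 128)
for keep_frac in np.linspace(0.9, 0.5, 5):
    W_in, W_out, kept = prune_neurons(W_in, W_out, keep_frac)
```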
arXiv Detail & Related papers (2024-02-07T09:00:30Z)
- Active Control of Flow over Rotating Cylinder by Multiple Jets using Deep Reinforcement Learning [0.0]
In this paper, rotation is added to the cylinder alongside a deep reinforcement learning (DRL) algorithm controlling multiple jets.
It is found that combining rotation and DRL is promising, since it suppresses vortex shedding, stabilizes the Kármán vortex street, and reduces the drag coefficient by up to 49.75%.
arXiv Detail & Related papers (2023-07-22T14:15:29Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
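The value-from-visitation idea can be sketched as follows: if a generative model can sample future states visited from a state s under a controller, the value is a Monte Carlo average of rewards over those samples. Here the diffusion model is replaced by a stub sampler and the reward is a toy; none of this is the paper's implementation.
```python
import numpy as np

def sample_future_states(s, n_samples=256):
    # Stand-in for a trained diffusion model over multi-step dynamics.
    return s + np.random.randn(n_samples, s.shape[-1])

def reward(states):
    return -np.linalg.norm(states, axis=-1)  # toy reward: stay near origin

def diffused_value(s, n_samples=256):
    # V(s) ~= E_{s' ~ visitation measure of s}[ r(s') ]
    future = sample_future_states(s, n_samples)
    return reward(future).mean()

v = diffused_value(np.zeros(4))
```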
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning [3.1635451288803638]
We present a DRL-based real-time feedback strategy to control the hydrodynamic force on the fluidic pinball.
By adequately designing reward functions and encoding historical observations, the DRL-based control was shown to make reasonable and valid control decisions.
One of these results was further analyzed with a machine learning model, shedding light on the decision-making basis and the physical mechanisms of the force-tracking process.
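The two ingredients the summary highlights, reward design and history encoding, might look like the following sketch for a force-tracking task; the weights, history depth, and function names are assumptions rather than the paper's choices.
```python
import numpy as np

def tracking_reward(c_l, c_l_target, action, w_err=1.0, w_act=0.1):
    # Penalize squared lift-force tracking error plus an actuation cost.
    return -w_err * (c_l - c_l_target) ** 2 - w_act * float(np.sum(action**2))

def encode_history(force_history, depth=16):
    # Give the controller the last `depth` force samples, not just the
    # current one, so the unsteady wake dynamics become observable.
    padded = np.zeros(depth)
    h = np.asarray(force_history, dtype=float)[-depth:]
    if h.size:
        padded[-h.size:] = h
    return padded

r = tracking_reward(c_l=0.3, c_l_target=0.0, action=np.array([0.1, -0.2]))
obs = encode_history([0.1, 0.2, 0.3])
```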
arXiv Detail & Related papers (2023-04-23T03:39:50Z)
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z) - Turbulence control in plane Couette flow using low-dimensional neural
ODE-based models and deep reinforcement learning [0.0]
"DManD-RL" (data-driven manifold dynamics-RL) generates a data-driven low-dimensional model of our system.
We train an RL control agent, yielding a 440-fold speedup over training on a numerical simulation.
The agent learns a policy that laminarizes 84% of unseen DNS test trajectories within 900 time units.
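The training pattern can be sketched as: learn a cheap low-dimensional surrogate of the flow, then run the RL loop on the surrogate instead of the full simulation. In the stub below, a linear latent map stands in for the paper's neural-ODE manifold model; all details are illustrative.
```python
import numpy as np

class SurrogateEnv:
    """Stub low-dimensional environment standing in for a learned
    manifold-dynamics model; stepping it is far cheaper than DNS."""

    def __init__(self, dim=8):
        self.A = 0.95 * np.eye(dim)        # stub latent dynamics
        self.B = 0.1 * np.ones((dim, 1))
        self.z = np.random.randn(dim)

    def step(self, action):
        self.z = self.A @ self.z + (self.B * action).ravel()
        reward = -np.linalg.norm(self.z)   # drive latent state toward laminar
        return self.z.copy(), reward

env = SurrogateEnv()
for _ in range(1000):                      # many cheap steps per second
    z, r = env.step(action=np.random.uniform(-1, 1))
```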
arXiv Detail & Related papers (2023-01-28T05:47:10Z)
- High-bandwidth nonlinear control for soft actuators with recursive network models [1.4174475093445231]
We present a high-bandwidth, lightweight, and nonlinear output tracking technique for soft actuators using the Newton-Raphson method.
This technique allows for reduced model sizes and increased control loop frequencies when compared with conventional RNN models.
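A minimal sketch of Newton-Raphson output tracking: update the actuator input by inverting a local estimate of the output's sensitivity to the input. Here a toy static nonlinearity and a finite-difference sensitivity stand in for the paper's recurrent network model.
```python
import numpy as np

def actuator_output(u):
    return np.tanh(u)                    # toy stand-in plant: y = tanh(u)

def sensitivity(u, eps=1e-4):
    # Finite-difference estimate of dy/du (the paper differentiates an RNN).
    return (actuator_output(u + eps) - actuator_output(u - eps)) / (2 * eps)

u, y_ref = 0.0, 0.6
for _ in range(20):
    y = actuator_output(u)
    u += (y_ref - y) / sensitivity(u)    # Newton-Raphson step toward y_ref
# u now satisfies tanh(u) ~= 0.6
```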
arXiv Detail & Related papers (2021-01-04T18:12:41Z)
- DUT-LFSaliency: Versatile Dataset and Light Field-to-RGB Saliency Detection [104.50425501764806]
We introduce a large-scale dataset to enable versatile applications for light field saliency detection.
We present an asymmetrical two-stream model consisting of the Focal stream and RGB stream.
Experiments demonstrate that our Focal stream achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-30T11:53:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.