Active flow control for three-dimensional cylinders through deep
reinforcement learning
- URL: http://arxiv.org/abs/2309.02462v1
- Date: Mon, 4 Sep 2023 13:30:29 GMT
- Title: Active flow control for three-dimensional cylinders through deep
reinforcement learning
- Authors: Pol Suárez, Francisco Alcántara-Ávila, Arnau Miró, Jean
Rabault, Bernat Font, Oriol Lehmkuhl and R. Vinuesa
- Abstract summary: This paper presents for the first time successful results of active flow control with multiple zero-net-mass-flux synthetic jets.
The jets are placed on a three-dimensional cylinder along its span with the aim of reducing the drag coefficient.
The method is based on a deep-reinforcement-learning framework that couples a computational-fluid-dynamics solver with an agent.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents for the first time successful results of active flow
control with multiple independently controlled zero-net-mass-flux synthetic
jets. The jets are placed on a three-dimensional cylinder along its span with
the aim of reducing the drag coefficient. The method is based on a
deep-reinforcement-learning framework that couples a
computational-fluid-dynamics solver with an agent using the
proximal-policy-optimization algorithm. We implement a multi-agent
reinforcement-learning framework which offers numerous advantages: it exploits
local invariants, makes the control adaptable to different geometries,
facilitates transfer learning and cross-application of agents and results in
significant training speedup. In this contribution we report significant drag
reduction after applying the DRL-based control in three different
configurations of the problem.
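As a rough, stdlib-only sketch of the multi-agent idea (class names, observation shapes, and the linear policy are illustrative assumptions, not the authors' code), each spanwise jet acts as a local pseudo-environment while a single shared policy collects experience from all of them:

```python
import random

class SharedPolicy:
    """One policy shared by every jet agent (parameter sharing)."""
    def __init__(self, n_obs):
        self.w = [0.0] * n_obs  # linear policy weights (untrained)

    def act(self, obs):
        # Deterministic linear actuation for the sketch; the paper's
        # agent is a PPO policy that samples stochastic jet velocities.
        return sum(wi * oi for wi, oi in zip(self.w, obs))

def collect_experience(policy, local_observations):
    """Pool one transition per jet into a single buffer so that one
    shared agent learns from the whole span at once."""
    buffer = []
    for obs in local_observations:  # one local observation per jet
        buffer.append((obs, policy.act(obs)))
    return buffer

policy = SharedPolicy(n_obs=3)
# Hypothetical local pressure probes around each of 4 spanwise jets.
obs_per_jet = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(4)]
buffer = collect_experience(policy, obs_per_jet)
print(len(buffer))  # → 4: one pooled transition per jet per time step
```

Because every jet feeds the same buffer, one training step sees experience from the entire span, which is where the reported training speedup and the transferability across geometries come from.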
Related papers
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and energy consumption.
In this work, we aim to bridge the performance gap between coarsely discretized and continuous control by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that adaptive control resolution, combined with value decomposition, yields simple critic-only algorithms with surprisingly strong performance on continuous control tasks.
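The coarse-to-fine growth of a discrete action space can be illustrated with a small helper (the interval, the levels, and the function name are assumptions made for this sketch):

```python
def grow_action_space(low, high, level):
    """Discretize [low, high] into 2**level + 1 evenly spaced actions.
    Training starts coarse (level 1: bang-bang plus the midpoint) and
    grows toward fine control resolution as learning progresses."""
    n = 2 ** level + 1
    step = (high - low) / (n - 1)
    return [low + i * step for i in range(n)]

coarse = grow_action_space(-1.0, 1.0, 1)  # → [-1.0, 0.0, 1.0]
fine = grow_action_space(-1.0, 1.0, 3)    # 9 actions
print(coarse)
print(len(fine))
```

Each coarser action set is a subset of every finer one, so value estimates learned at low resolution remain meaningful after the space grows.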
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
- Improving a Proportional Integral Controller with Reinforcement Learning on a Throttle Valve Benchmark [2.8322124733515666]
This paper presents a learning-based control strategy for non-linear throttle valves with asymmetric behavior.
We exploit the recent advances in Reinforcement Learning with Guides to improve the closed-loop behavior by learning from the additional interactions with the valve.
In all the experimental test cases, the resulting agent has a better sample efficiency than traditional RL agents and outperforms the PI controller.
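One common way to combine a fixed PI controller with a learned policy is residual control; whether the paper's "RL with Guides" takes exactly this form is an assumption, but the sketch shows why a PI baseline makes learning sample-efficient:

```python
class PIController:
    """Plain proportional-integral controller."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def act(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def guided_action(pi, residual_policy, error, dt):
    """The PI controller acts as a guide; the RL agent only has to
    learn a residual correction on top of its output."""
    return pi.act(error, dt) + residual_policy(error)

# Hypothetical residual policy: before training it outputs zero, so the
# closed loop starts exactly at the PI baseline and can only improve.
untrained = lambda error: 0.0
pi = PIController(kp=2.0, ki=0.5)
u = guided_action(pi, untrained, error=0.1, dt=0.01)
print(round(u, 4))  # → 0.2005, the pure PI action
```

Starting from the tuned baseline rather than from scratch is what gives the guided agent its sample-efficiency edge over traditional RL agents.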
arXiv Detail & Related papers (2024-02-21T09:40:26Z)
- How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning [3.1635451288803638]
We present a DRL-based real-time feedback strategy to control the hydrodynamic force on fluidic pinball.
By adequately designing reward functions and encoding historical observations, the DRL-based control was shown to make reasonable and valid control decisions.
One of these results was analyzed by a machine learning model that enabled us to shed light on the basis of decision-making and physical mechanisms of the force tracking process.
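The two ingredients named above, encoding historical observations and designing the reward, can be sketched as follows (the sensor count, window length, and penalty weight are illustrative assumptions):

```python
from collections import deque

class HistoryEncoder:
    """Stack the last k sensor snapshots so the partially observed wake
    dynamics look approximately Markovian to the agent."""
    def __init__(self, k, n_sensors):
        self.buf = deque([[0.0] * n_sensors for _ in range(k)], maxlen=k)

    def push(self, snapshot):
        self.buf.append(list(snapshot))
        return [v for snap in self.buf for v in snap]  # flat observation

def tracking_reward(force, target, action=0.0, penalty=0.1):
    """Reward accurate force tracking while penalizing actuation effort."""
    return -abs(force - target) - penalty * abs(action)

enc = HistoryEncoder(k=3, n_sensors=2)
obs = enc.push([0.5, -0.2])  # hypothetical lift/drag sensor pair
print(len(obs))              # → 6: three stacked snapshots of two sensors
```

The stacked window gives the policy enough temporal context to anticipate the wake, while the penalty term keeps the tracking controller from chattering.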
arXiv Detail & Related papers (2023-04-23T03:39:50Z)
- Lyapunov Function Consistent Adaptive Network Signal Control with Back Pressure and Reinforcement Learning [9.797994846439527]
This study introduces a unified framework based on Lyapunov control theory, defining a specific Lyapunov function for each control approach considered.
Building on insights from Lyapunov theory, this study designs a reward function for the Reinforcement Learning (RL)-based network signal control.
The proposed algorithm is compared with several traditional and RL-based methods under pure passenger-car flow and heterogeneous traffic flow including freight.
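A minimal illustration of a Lyapunov-consistent reward (the quadratic queue-length function is a standard choice, assumed here rather than taken from the paper): the agent is rewarded for negative Lyapunov drift, the same quantity that back-pressure control greedily minimizes.

```python
def lyapunov(queues):
    """Quadratic Lyapunov function of link queue lengths: V = 0.5 * sum(q^2)."""
    return 0.5 * sum(q * q for q in queues)

def drift_reward(queues_before, queues_after):
    """Reward the RL signal controller for negative Lyapunov drift,
    i.e. for pushing the queueing network toward stability."""
    return lyapunov(queues_before) - lyapunov(queues_after)

before = [4.0, 2.0, 3.0]  # vehicles queued per approach (hypothetical)
after = [3.0, 2.0, 2.0]   # queues after one signal phase
print(drift_reward(before, after))  # → 6.0: the network became more stable
```

Tying the reward to the drift of a Lyapunov function is what lets the learned policy inherit the stability guarantees associated with back-pressure control.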
arXiv Detail & Related papers (2022-10-06T00:22:02Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation that discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Comparative analysis of machine learning methods for active flow control [60.53767050487434]
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z)
- Efficient Differentiable Simulation of Articulated Bodies [89.64118042429287]
We present a method for efficient differentiable simulation of articulated bodies.
This enables integration of articulated body dynamics into deep learning frameworks.
We show that reinforcement learning with articulated systems can be accelerated using gradients provided by our method.
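Why differentiable simulation helps: gradients of an objective with respect to control inputs can flow through the dynamics instead of being estimated from many rollouts. A toy point-mass stand-in (not the authors' articulated-body solver) shows the analytic gradient matching a finite-difference check:

```python
def step(x, v, force, dt=0.01, mass=1.0):
    """One explicit-Euler step of a point mass; every operation is smooth,
    so gradients can flow through the dynamics."""
    return x + v * dt, v + (force / mass) * dt

def rollout(force, steps=100):
    """Final position after applying a constant control force."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        x, v = step(x, v, force)
    return x

def rollout_grad(steps=100, dt=0.01, mass=1.0):
    """Analytic d(final x)/d(force): each step adds (force/m)*dt to v and
    the *previous* v*dt to x, so x_N = force * dt^2 * N(N-1)/2 / m."""
    return dt * dt * steps * (steps - 1) / 2.0 / mass

fd = (rollout(1.0 + 1e-6) - rollout(1.0 - 1e-6)) / 2e-6  # finite difference
print(abs(fd - rollout_grad()) < 1e-4)  # → True: the gradients agree
```

With such gradients available, an RL or trajectory-optimization loop gets exact first-order information per simulation instead of relying on high-variance sampled estimates.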
arXiv Detail & Related papers (2021-09-16T04:48:13Z)
- Multi-Agent Reinforcement Learning in NOMA-aided UAV Networks for Cellular Offloading [59.32570888309133]
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T20:22:05Z)
- Single-step deep reinforcement learning for open-loop control of laminar and turbulent flows [0.0]
This research gauges the ability of deep reinforcement learning (DRL) techniques to assist the optimization and control of fluid mechanical systems.
It relies on a novel, "degenerate" version of the proximal-policy-optimization (PPO) algorithm that updates the neural network optimizing the system only once per learning episode.
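The single-update-per-episode, open-loop structure can be mimicked with a stdlib-only sketch; the quadratic stand-in objective and the hill-climbing update are illustrative assumptions, not the paper's degenerate PPO:

```python
import random

def evaluate(control_params):
    """Hypothetical stand-in for one CFD episode: the reward of applying
    a fixed open-loop control, maximal at params = [0.3, -0.1]."""
    target = [0.3, -0.1]
    return -sum((p - t) ** 2 for p, t in zip(control_params, target))

def open_loop_search(episodes=300, sigma=0.05, seed=0):
    """One evaluation and at most one parameter update per episode,
    mirroring the single-step structure: no closed-loop state, just
    open-loop control parameters refined between episodes."""
    rng = random.Random(seed)
    params = [0.0, 0.0]
    best = evaluate(params)
    for _ in range(episodes):
        candidate = [p + rng.gauss(0.0, sigma) for p in params]
        reward = evaluate(candidate)  # one "CFD run" per episode
        if reward > best:             # single update per episode
            params, best = candidate, reward
    return params, best

params, best = open_loop_search()
print(best > evaluate([0.0, 0.0]))  # the open-loop control improved
```

The key structural point is that the policy never reacts to the flow state within an episode; it only proposes a better open-loop control for the next one.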
arXiv Detail & Related papers (2020-06-04T16:11:26Z) - Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it on correlated actions, and combine these critic estimated action values to control the variance of gradient estimation.
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques.
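The variance-control idea can be seen in a toy policy-gradient estimate: subtracting critic-estimated action values acts as a control variate (the paper combines per-action critic estimates more carefully to keep the estimator well-behaved; all numbers below are made up):

```python
import statistics

def policy_gradient_samples(rewards, critic_values, logp_grads):
    """Per-sample gradient estimates with and without the critic
    acting as a control variate on the observed rewards."""
    plain = [r * g for r, g in zip(rewards, logp_grads)]
    baselined = [(r - q) * g
                 for r, q, g in zip(rewards, critic_values, logp_grads)]
    return plain, baselined

rewards = [1.0, 0.2, 0.9, 0.1, 1.1]
critic = [0.9, 0.3, 0.8, 0.2, 1.0]   # critic's action-value estimates
grads = [1.0, -1.0, 1.0, -1.0, 1.0]  # toy d(log pi)/d(theta) values
plain, baselined = policy_gradient_samples(rewards, critic, grads)
# The baselined estimator has far smaller spread across samples:
print(statistics.pvariance(plain) > statistics.pvariance(baselined))  # → True
```

The better the critic's action-value estimates track the true rewards, the more variance is removed, which is what lets the resulting on-policy algorithm outperform methods that rely on cruder variance-control techniques.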
arXiv Detail & Related papers (2020-02-10T04:23:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.