How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2304.11526v1
- Date: Sun, 23 Apr 2023 03:39:50 GMT
- Title: How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning
- Authors: Haodong Feng, Yue Wang, Hui Xiang, Zhiyang Jin, Dixia Fan
- Abstract summary: We present a DRL-based real-time feedback strategy to control the hydrodynamic force on fluidic pinball.
By adequately designing reward functions and encoding historical observations, the DRL-based control was shown to make reasonable and valid control decisions.
One of these results was analyzed by a machine learning model, shedding light on the decision-making basis and the physical mechanisms of the force-tracking process.
- Score: 3.1635451288803638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (DRL) for the fluidic pinball, three individually rotating cylinders in a uniform flow arranged in an equilateral triangular configuration, can learn efficient flow control strategies thanks to self-learning and data-driven state estimation for complex fluid dynamics problems. In this work, we present a DRL-based real-time feedback strategy that controls the hydrodynamic force on the fluidic pinball, i.e., force extremum and tracking, through the cylinders' rotation. By adequately designing reward functions and encoding historical observations, and after thousands of automatic learning iterations, the DRL-based control was shown to make reasonable and valid control decisions in a nonparametric control parameter space, comparable to and even better than the optimal policy found through lengthy brute-force searching. Subsequently, one of these results was analyzed by a machine learning model, shedding light on the decision-making basis and the physical mechanisms of the force-tracking process. The findings from this work enable control of the hydrodynamic force in the operation of the fluidic pinball system and potentially pave the way for exploring efficient active flow control strategies in other complex fluid dynamics problems.
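As a minimal sketch of the two ingredients the abstract highlights, a force-tracking reward and history-encoded observations, the following Python fragment shows one plausible form they could take. This is not the authors' implementation; the names `tracking_reward` and `HistoryEncoder` and all parameters are hypothetical illustrations.

```python
from collections import deque
import numpy as np

def tracking_reward(measured_force, target_force, scale=1.0):
    """Hypothetical tracking reward: zero when the measured hydrodynamic
    force matches the target, increasingly negative as the error grows."""
    return -scale * abs(measured_force - target_force)

class HistoryEncoder:
    """Stacks the last k sensor observations into one state vector,
    giving the agent short-term memory of the flow's recent evolution."""
    def __init__(self, k, obs_dim):
        self.buffer = deque([np.zeros(obs_dim)] * k, maxlen=k)

    def encode(self, obs):
        self.buffer.append(np.asarray(obs, dtype=float))
        return np.concatenate(self.buffer)  # shape: (k * obs_dim,)
```

At each control step, the encoded state would feed the DRL policy, which outputs the three cylinders' rotation rates, and the reward drives the force toward its target or extremum.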
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- Asynchronous Parallel Reinforcement Learning for Optimizing Propulsive Performance in Fin Ray Control [3.889677386753812]
Fish fin rays constitute a sophisticated control system for ray-finned fish, facilitating versatile locomotion.
Despite extensive research on the kinematics and hydrodynamics of fish locomotion, the intricate control strategies in fin-ray actuation remain largely unexplored.
This study introduces a cutting-edge off-policy DRL algorithm, interacting with a fluid-structure interaction (FSI) environment to acquire intricate fin-ray control strategies tailored for various propulsive performance objectives.
arXiv Detail & Related papers (2024-01-21T00:06:17Z)
- Active flow control for three-dimensional cylinders through deep reinforcement learning [0.0]
This paper presents for the first time successful results of active flow control with multiple zero-net-mass-flux synthetic jets.
The jets are placed on a three-dimensional cylinder along its span with the aim of reducing the drag coefficient.
The method is based on a deep-reinforcement-learning framework that couples a computational-fluid-dynamics solver with an agent.
arXiv Detail & Related papers (2023-09-04T13:30:29Z)
- RL + Model-based Control: Using On-demand Optimal Control to Learn Versatile Legged Locomotion [16.800984476447624]
This paper presents a control framework that combines model-based optimal control and reinforcement learning.
We validate the robustness and controllability of the framework through a series of experiments.
Our framework effortlessly supports the training of control policies for robots with diverse dimensions.
arXiv Detail & Related papers (2023-05-29T01:33:55Z)
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z)
- Turbulence control in plane Couette flow using low-dimensional neural ODE-based models and deep reinforcement learning [0.0]
"DManD-RL" (data-driven manifold dynamics-RL) generates a data-driven low-dimensional model of our system.
We train an RL control agent, yielding a 440-fold speedup over training on a numerical simulation.
The agent learns a policy that laminarizes 84% of unseen DNS test trajectories within 900 time units.
arXiv Detail & Related papers (2023-01-28T05:47:10Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z)
- Comparative analysis of machine learning methods for active flow control [60.53767050487434]
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z)
- Control of a fly-mimicking flyer in complex flow using deep reinforcement learning [0.12891210250935145]
An integrated framework of computational fluid-structural dynamics (CFD-CSD) and deep reinforcement learning (deep-RL) is developed for control of a fly-scale flexible-winged flyer in complex flow.
To obtain accurate data, the CFD-CSD is adopted for precisely predicting the dynamics.
To gain ample data, a novel data reproduction method is devised, where the obtained data are replicated for various situations.
arXiv Detail & Related papers (2021-11-04T04:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.