Comparative analysis of machine learning methods for active flow control
- URL: http://arxiv.org/abs/2202.11664v2
- Date: Fri, 25 Feb 2022 08:38:56 GMT
- Title: Comparative analysis of machine learning methods for active flow control
- Authors: Fabio Pino, Lorenzo Schena, Jean Rabault, Alexander Kuhnle and Miguel
A. Mendez
- Abstract summary: Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
- Score: 60.53767050487434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning frameworks such as Genetic Programming (GP) and
Reinforcement Learning (RL) are gaining popularity in flow control. This work
presents a comparative analysis of the two, benchmarking some of their most
representative algorithms against global optimization techniques such as
Bayesian Optimization (BO) and Lipschitz global optimization (LIPO). First, we
review the general framework of the flow control problem, linking optimal
control theory with model-free machine learning methods. Then, we test the
control algorithms on three test cases. These are (1) the stabilization of a
nonlinear dynamical system featuring frequency cross-talk, (2) the wave
cancellation from a Burgers' flow and (3) the drag reduction in a cylinder wake
flow. Although the control of these problems has been tackled in the recent
literature with one method or the other, we present a comprehensive comparison
to illustrate their differences in exploration versus exploitation and their
balance between 'model capacity' in the control law definition versus 'required
complexity'. We believe that such a comparison opens the path towards
hybridization of the various methods, and we offer some perspective on their
future development in the literature of flow control problems.
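As a rough illustration of the model-free setting the abstract describes (a parametric control law tuned by a black-box optimizer such as BO or LIPO), the sketch below stabilizes a toy scalar nonlinear system with a linear feedback gain found by plain random search. The system, cost, and optimizer here are illustrative stand-ins, not the paper's benchmarks.

```python
import numpy as np

def episode_cost(k, x0=1.5, dt=0.01, steps=500):
    """Simulate x' = x - x^3 + u with feedback u = -k*x (forward Euler)
    and return the accumulated quadratic cost over the episode."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += (x**2 + 0.1 * u**2) * dt
        x += (x - x**3 + u) * dt
    return cost

# Model-free search: treat episode_cost as a black box over the single gain k
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 10.0, size=64)
best_k = min(candidates, key=episode_cost)
print(best_k, episode_cost(best_k))
```

In the paper's setting, the scalar gain would be replaced by the parameters of a richer control law, and the random search by BO, LIPO, GP, or RL.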
Related papers
- Sublinear Regret for An Actor-Critic Algorithm in Continuous-Time Linear-Quadratic Reinforcement Learning [10.404992912881601]
We study reinforcement learning for a class of continuous-time linear-quadratic (LQ) control problems for diffusions where the volatility of the state processes depends on both the state and control variables.
We apply a model-free approach that relies neither on knowledge of model parameters nor on their estimations, and devise an actor-critic algorithm to learn the optimal policy parameter directly.
arXiv Detail & Related papers (2024-07-24T12:26:21Z) - Stochastic Optimal Control Matching [53.156277491861985]
Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control.
The control is learned via a least squares problem by trying to fit a matching vector field.
Experimentally, our algorithm achieves lower error than all the existing IDO techniques for optimal control.
arXiv Detail & Related papers (2023-12-04T16:49:43Z) - Active flow control for three-dimensional cylinders through deep
reinforcement learning [0.0]
This paper presents for the first time successful results of active flow control with multiple zero-net-mass-flux synthetic jets.
The jets are placed on a three-dimensional cylinder along its span with the aim of reducing the drag coefficient.
The method is based on a deep-reinforcement-learning framework that couples a computational-fluid-dynamics solver with an agent.
arXiv Detail & Related papers (2023-09-04T13:30:29Z) - Towards a Theoretical Foundation of Policy Optimization for Learning
Control Policies [26.04704565406123]
Gradient-based methods have been widely used for system design and optimization in diverse application domains.
Recently, there has been a renewed interest in studying theoretical properties of these methods in the context of control and reinforcement learning.
This article surveys some of the recent developments on policy optimization, a gradient-based iterative approach for feedback control synthesis.
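The survey's subject, gradient-based policy optimization for feedback control synthesis, can be sketched on the simplest possible instance: a scalar discrete-time linear-quadratic problem, where gradient descent tunes a static feedback gain using a finite-difference gradient estimate. This is a hedged toy under assumed dynamics, not any of the survey's algorithms.

```python
import numpy as np

def lqr_cost(K, a=0.9, b=0.5, q=1.0, r=0.1, x0=1.0, horizon=200):
    """Finite-horizon quadratic cost of the static feedback u = -K x
    on the scalar linear system x+ = a x + b u."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -K * x
        cost += q * x**2 + r * u**2
        x = a * x + b * u
    return cost

# Policy optimization: gradient descent on the gain K with a
# central finite-difference estimate of d(cost)/dK
K, lr, eps = 0.0, 0.05, 1e-4
for _ in range(200):
    grad = (lqr_cost(K + eps) - lqr_cost(K - eps)) / (2 * eps)
    K -= lr * grad
print(K, lqr_cost(K))
```

For this scalar case the cost is smooth in K and the iteration settles near the LQR-optimal gain; the survey's interest is precisely when and why such direct policy search converges in more general control problems.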
arXiv Detail & Related papers (2022-10-10T16:13:34Z) - On the Convergence of Distributed Stochastic Bilevel Optimization
Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting and are therefore incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient-tracking communication mechanism and two different gradient estimators.
arXiv Detail & Related papers (2022-06-30T05:29:52Z) - Deep reinforcement learning for optimal well control in subsurface
systems with uncertain geology [0.0]
A general control policy framework based on deep reinforcement learning (DRL) is introduced for closed-loop decision making in subsurface flow settings.
The DRL-based methodology is shown to result in an NPV increase of 15% (for the 2D cases) and 33% (for the 3D cases) relative to robust optimization over prior models.
arXiv Detail & Related papers (2022-03-24T22:50:47Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - Single-step deep reinforcement learning for open-loop control of laminar
and turbulent flows [0.0]
This research gauges the ability of deep reinforcement learning (DRL) techniques to assist the optimization and control of fluid mechanical systems.
It combines a novel, "degenerate" version of the proximal policy optimization (PPO) algorithm, which trains a neural network to optimize the system only once per learning episode.
arXiv Detail & Related papers (2020-06-04T16:11:26Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
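The entry above builds on Q-learning; as background, here is a minimal tabular Q-learning sketch on a hypothetical four-state chain MDP. The environment and hyperparameters are invented for illustration and have no connection to the paper's MPC setting.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..3, actions 0 (left) / 1 (right),
# reward 1 for reaching state 3, which is terminal.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(s, a):
    s_next = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == GOAL else 0.0
    return s_next, reward, s_next == GOAL

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(300):                 # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # standard Q-learning update toward the bootstrapped target
        target = r + (0.0 if done else gamma * np.max(Q[s_next]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

greedy = [int(np.argmax(Q[s])) for s in range(GOAL)]
print(greedy)   # learned greedy policy in the non-terminal states
```

After training, the greedy policy moves right in every non-terminal state; the paper's contribution is to combine this kind of value learning with information theoretic MPC so that biased dynamics models can still be leveraged.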
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.