Comparative analysis of machine learning methods for active flow control
- URL: http://arxiv.org/abs/2202.11664v2
- Date: Fri, 25 Feb 2022 08:38:56 GMT
- Title: Comparative analysis of machine learning methods for active flow control
- Authors: Fabio Pino, Lorenzo Schena, Jean Rabault, Alexander Kuhnle and Miguel A. Mendez
- Abstract summary: Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
- Score: 60.53767050487434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning frameworks such as Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control. This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques such as Bayesian Optimization (BO) and Lipschitz global optimization (LIPO). First, we review the general framework of the flow control problem, linking optimal control theory with model-free machine learning methods. Then, we test the control algorithms on three test cases. These are (1) the stabilization of a nonlinear dynamical system featuring frequency cross-talk, (2) the wave cancellation from a Burgers' flow and (3) the drag reduction in a cylinder wake flow. Although the control of these problems has been tackled in the recent literature with one method or the other, we present a comprehensive comparison to illustrate their differences in exploration versus exploitation and their balance between 'model capacity' in the control law definition versus 'required complexity'. We believe that such a comparison opens the path towards hybridization of the various methods, and we offer some perspective on their future development in the literature of flow control problems.
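To make the shared framing concrete, here is a minimal sketch of flow control cast as black-box optimization of a parameterized control law, which is how all four families of methods compared in the paper operate. The toy damped Duffing oscillator, the gain bounds, and the use of SciPy's differential evolution as a stand-in global optimizer for BO/LIPO are all assumptions for illustration, not the paper's actual test cases or solvers.

```python
from scipy.optimize import differential_evolution

def rollout(theta, dt=0.01, T=20.0):
    """Cost of the feedback law u = -theta[0]*x - theta[1]*v on a toy
    damped Duffing oscillator (illustrative, not one of the paper's
    three test cases)."""
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        u = -theta[0] * x - theta[1] * v
        acc = -0.5 * v - x - x**3 + u        # toy nonlinear dynamics
        x, v = x + dt * v, v + dt * acc      # explicit Euler step
        cost += dt * (x**2 + 0.1 * u**2)     # quadratic state/actuation cost
    return cost

# Model-free global search over the two controller gains; differential
# evolution is a stand-in for the BO/LIPO optimizers in the paper.
result = differential_evolution(rollout, bounds=[(0.0, 5.0), (0.0, 5.0)],
                                seed=0, maxiter=30)
print("best gains:", result.x, "cost:", result.fun)
```

The episodic cost-evaluation interface is the common ground: GP, RL, BO and LIPO differ only in how they search over control laws given such evaluations.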
Related papers
- Training Free Guided Flow Matching with Optimal Control [6.729886762762167]
We present OC-Flow, a training-free framework for guided flow matching using optimal control.
We show that OC-Flow achieves superior performance in experiments on text-guided image manipulation, conditional molecule generation, and all-atom peptide design.
arXiv Detail & Related papers (2024-10-23T17:53:11Z)
- Comparison of Model Predictive Control and Proximal Policy Optimization for a 1-DOF Helicopter System [0.7499722271664147]
This study conducts a comparative analysis of Model Predictive Control (MPC) and Proximal Policy Optimization (PPO), a Deep Reinforcement Learning (DRL) algorithm, applied to a Quanser Aero 2 system.
PPO excels in rise time and adaptability, making it a promising approach for applications requiring rapid response.
arXiv Detail & Related papers (2024-08-28T08:35:34Z)
- Sublinear Regret for a Class of Continuous-Time Linear-Quadratic Reinforcement Learning Problems [10.404992912881601]
We study reinforcement learning for a class of continuous-time linear-quadratic (LQ) control problems for diffusions.
We apply a model-free approach that relies neither on knowledge of model parameters nor on their estimations, and devise an actor-critic algorithm to learn the optimal policy parameter directly. (A toy discrete-time sketch follows this entry.)
arXiv Detail & Related papers (2024-07-24T12:26:21Z)
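As a rough illustration of the idea in the entry above, the sketch below runs a REINFORCE-style actor (Gaussian policy u ~ N(-k*x, sigma^2)) with a running-average baseline as a crude critic on a scalar discrete-time LQ problem. The discrete-time setting, all constants, and the gain clipping are assumptions; the paper's algorithm is formulated for continuous-time diffusions.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, q, r = 0.9, 0.5, 1.0, 0.1   # scalar LQ model (illustrative values)
k, sigma, lr = 0.0, 0.3, 1e-3     # stochastic policy u ~ N(-k*x, sigma^2)
baseline = 0.0                     # running-average return: a crude critic

for episode in range(2000):
    x, score, ret = 1.0, 0.0, 0.0
    for _ in range(30):
        u = -k * x + sigma * rng.standard_normal()
        ret += q * x**2 + r * u**2             # accumulate quadratic cost
        score += -(u + k * x) * x / sigma**2   # d/dk log pi(u|x)
        x = a * x + b * u
    baseline += 0.05 * (ret - baseline)
    k -= lr * score * (ret - baseline)         # descend the expected cost
    k = min(max(k, 0.0), 3.0)                  # keep the gain stabilizing
print("learned gain k:", round(k, 3))
```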
- Stochastic Optimal Control Matching [53.156277491861985]
Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control.
The control is learned via a least-squares problem that fits a matching vector field. (A minimal sketch of such a fit follows this entry.)
Experimentally, our algorithm achieves lower error than all the existing IDO techniques for optimal control.
arXiv Detail & Related papers (2023-12-04T16:49:43Z)
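To illustrate the least-squares step mentioned in the SOCM entry above: a linear-in-features control field is fit to sampled target vectors by ordinary least squares. The polynomial basis and the synthetic targets are assumptions; in SOCM the matching field comes from the iterative diffusion optimization procedure, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(x):
    """Polynomial basis for a linear-in-parameters control field
    u_theta(x) = phi(x) @ theta; the basis choice is an assumption."""
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

# Hypothetical training data: sampled states and the target (matching)
# vector field evaluated at them. In SOCM these targets come from the
# iterative diffusion optimization procedure, which is not shown here.
x = rng.uniform(-2.0, 2.0, size=200)
target = -1.5 * x + 0.3 * x**2 + 0.05 * rng.standard_normal(200)

# The least-squares fit of the control field to the matching field.
theta, *_ = np.linalg.lstsq(features(x), target, rcond=None)
print("fitted coefficients:", theta)
```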
- Towards a Theoretical Foundation of Policy Optimization for Learning Control Policies [26.04704565406123]
Gradient-based methods have been widely used for system design and optimization in diverse application domains.
Recently, there has been a renewed interest in studying theoretical properties of these methods in the context of control and reinforcement learning.
This article surveys some of the recent developments on policy optimization, a gradient-based iterative approach for feedback control synthesis.
arXiv Detail & Related papers (2022-10-10T16:13:34Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting, making them incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators. (The gradient-tracking step is sketched after this entry.)
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
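The sketch below shows only the gradient-tracking communication mechanism named in the entry above, on a single-level consensus problem over a four-node ring; the bilevel structure and stochastic gradients of the paper are omitted, and the topology, mixing weights, and step size are assumptions.

```python
import numpy as np

# Each node i minimizes f_i(x) = 0.5*(x - c_i)^2; the network-wide
# optimum is the mean of the c_i. Values and topology are assumptions.
c = np.array([1.0, 3.0, -2.0, 4.0])
grad = lambda x: x - c                 # per-node gradients, elementwise

# Doubly stochastic mixing matrix for a ring of 4 nodes.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

alpha = 0.1
x = np.zeros(4)
y = grad(x)                            # trackers start at local gradients
for _ in range(200):
    x_new = W @ x - alpha * y          # consensus step plus tracked descent
    y = W @ y + grad(x_new) - grad(x)  # track the network-average gradient
    x = x_new
print("node estimates:", x, "true optimum:", c.mean())
```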
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Single-step deep reinforcement learning for open-loop control of laminar and turbulent flows [0.0]
This research gauges the ability of deep reinforcement learning (DRL) techniques to assist the optimization and control of fluid mechanical systems.
It combines a novel, "degenerate" version of the proximal policy optimization (PPO) algorithm that trains a neural network by optimizing the system only once per learning episode. (A stateless policy-gradient sketch follows this entry.)
arXiv Detail & Related papers (2020-06-04T16:11:26Z)
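A stateless policy-gradient sketch in the spirit of the entry above: because the open-loop policy does not depend on the state, a Gaussian distribution over a single actuation parameter is updated once per generation from a batch of one-episode evaluations. This uses a plain score-function update rather than PPO's clipped surrogate, and the objective, batch size, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def episode_cost(u):
    """One environment run per candidate: a hypothetical scalar objective
    standing in for a single CFD evaluation."""
    return (u - 1.3) ** 2 + 0.01 * rng.standard_normal()

mu, log_sigma, lr = 0.0, np.log(0.5), 0.05   # stateless Gaussian policy
for generation in range(100):
    sigma = np.exp(log_sigma)
    u = mu + sigma * rng.standard_normal(16)          # open-loop candidates
    cost = np.array([episode_cost(ui) for ui in u])
    adv = (cost.mean() - cost) / (cost.std() + 1e-8)  # low cost => positive
    # one score-function update per generation (no state, no clipping)
    mu += lr * np.mean(adv * (u - mu) / sigma**2)
    log_sigma += lr * np.mean(adv * (((u - mu) / sigma) ** 2 - 1.0))
print("learned open-loop action:", round(mu, 3))
```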
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences arising from its use.