Deep Reinforcement Learning for Online Control of Stochastic Partial
Differential Equations
- URL: http://arxiv.org/abs/2110.11265v2
- Date: Sat, 23 Oct 2021 23:02:20 GMT
- Title: Deep Reinforcement Learning for Online Control of Stochastic Partial
Differential Equations
- Authors: Erfan Pirmorad, Faraz Khoshbakhtian, Farnam Mansouri, Amir-massoud
Farahmand
- Abstract summary: We formulate the problem of controlling partial differential equations as a reinforcement learning problem.
We present a learning-based, distributed control approach for online control of a system of SPDEs with high dimensional state-action space.
- Score: 10.746602033809943
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many areas, such as the physical sciences, life sciences, and finance,
control approaches are used to achieve a desired goal in complex dynamical
systems governed by differential equations. In this work, we formulate the
problem of controlling stochastic partial differential equations (SPDEs) as a
reinforcement learning problem. We present a learning-based, distributed
control approach for online control of a system of SPDEs with a high-dimensional
state-action space using the deep deterministic policy gradient (DDPG) method. We tested
the performance of our method on the problem of controlling the stochastic
Burgers' equation, describing a turbulent fluid flow in an infinitely large
domain.
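As a rough illustration of the control setting (not the authors' implementation), the controlled stochastic Burgers' dynamics can be discretized with finite differences on a periodic grid; the grid size, time step, viscosity, and noise scale below are illustrative assumptions:

```python
import numpy as np

def burgers_step(u, a, dt=1e-3, dx=1/128, nu=0.01, sigma=0.1, rng=None):
    """One explicit finite-difference step of the controlled stochastic
    Burgers' equation  u_t = -u u_x + nu u_xx + a + sigma dW/dt
    on a periodic 1D grid, where `a` is the control forcing on the grid."""
    rng = rng if rng is not None else np.random.default_rng(0)
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)          # central difference
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2    # discrete Laplacian
    noise = sigma * np.sqrt(dt) * rng.standard_normal(u.shape)  # space-time white noise
    return u + dt * (-u * u_x + nu * u_xx + a) + noise
```

An RL agent such as DDPG would then treat the grid values as the state, the forcing `a` as the (high-dimensional) action, and, e.g., a negative tracking error as the reward.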
Related papers
- A Generative Approach to Control Complex Physical Systems [16.733151963652244]
We introduce Diffusion Physical systems Control (DiffPhyCon), a new class of methods for the physical systems control problem.
DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives.
We test our method on 1D Burgers' equation control and 2D jellyfish movement control in a fluid environment.
arXiv Detail & Related papers (2024-07-09T01:56:23Z)
- GRAPE optimization for open quantum systems with time-dependent decoherence rates driven by coherent and incoherent controls [77.34726150561087]
The GRadient Ascent Pulse Engineering (GRAPE) method is widely used for optimization in quantum control.
We adopt the GRAPE method for optimizing objective functionals for open quantum systems driven by both coherent and incoherent controls.
The efficiency of the algorithm is demonstrated through numerical simulations for the state-to-state transition problem.
arXiv Detail & Related papers (2023-07-17T13:37:18Z)
- Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
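A toy sketch of the conservation argument (not the paper's antisymmetric continuous-convolution layers): if every pairwise interaction uses an odd, hence antisymmetric, kernel, the per-particle contributions cancel in the sum and total momentum is preserved exactly:

```python
import numpy as np

def antisymmetric_update(x, v, dt=0.01):
    """Pairwise velocity update built from an antisymmetric kernel:
    f(x_i, x_j) = -f(x_j, x_i), so the total momentum change sums to zero."""
    diff = x[:, None, :] - x[None, :, :]  # pairwise displacements, shape (N, N, D)
    f = np.tanh(diff)                     # odd kernel => antisymmetric interaction
    dv = f.sum(axis=1) * dt               # net interaction acting on each particle
    return v + dv
```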
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
- Deep Reinforcement Learning for Adaptive Mesh Refinement [0.9281671380673306]
We train policy networks for AMR strategy directly from numerical simulation.
The training process does not require an exact solution or a high-fidelity ground truth to the partial differential equation.
We show that the deep reinforcement learning policies are competitive with widely used AMR heuristics, generalize well across problem classes, and strike a favorable balance between accuracy and cost.
arXiv Detail & Related papers (2022-09-25T23:45:34Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time point and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- Multisymplectic Formulation of Deep Learning Using Mean-Field Type Control and Nonlinear Stability of Training Algorithm [0.0]
We formulate training of deep neural networks as a hydrodynamics system with a multisymplectic structure.
For that, the deep neural network is modelled using a differential equation and, thereby, mean-field type control is used to train it.
The numerical scheme yields an approximate solution that is also an exact solution of a hydrodynamics system with a multisymplectic structure.
arXiv Detail & Related papers (2022-07-07T23:14:12Z)
- Physics-constrained Unsupervised Learning of Partial Differential Equations using Meshes [1.066048003460524]
Graph neural networks show promise in accurately representing irregularly meshed objects and learning their dynamics.
In this work, we represent meshes naturally as graphs, process these using Graph Networks, and formulate our physics-based loss to provide an unsupervised learning framework for partial differential equations (PDEs).
Our framework will enable the application of PDE solvers in interactive settings, such as model-based control of soft-body deformations.
arXiv Detail & Related papers (2022-03-30T19:22:56Z)
- Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
arXiv Detail & Related papers (2021-10-24T08:57:46Z)
- DySMHO: Data-Driven Discovery of Governing Equations for Dynamical Systems via Moving Horizon Optimization [77.34726150561087]
We introduce Discovery of Dynamical Systems via Moving Horizon Optimization (DySMHO), a scalable machine learning framework.
DySMHO sequentially learns the underlying governing equations from a large dictionary of basis functions.
Canonical nonlinear dynamical system examples are used to demonstrate that DySMHO can accurately recover the governing laws.
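The dictionary-regression idea behind such methods can be sketched with plain sequentially thresholded least squares (a simplification; DySMHO itself solves moving-horizon optimization problems over its dictionary, and the cubic basis here is an illustrative assumption):

```python
import numpy as np

def fit_dictionary(x, dxdt, threshold=0.1, iters=10):
    """Recover sparse coefficients of dx/dt over a polynomial basis library
    [1, x, x^2, x^3] via sequentially thresholded least squares."""
    theta = np.column_stack([np.ones(len(x)), x, x**2, x**3])  # basis dictionary
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]           # initial dense fit
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0                                        # prune weak terms
        big = ~small
        if big.any():                                          # refit surviving terms
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi  # coefficients for [1, x, x^2, x^3]
```

For example, samples of dx/dt = 2x - 0.5x^3 yield coefficients close to [0, 2, 0, -0.5], recovering the governing law.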
arXiv Detail & Related papers (2021-07-30T20:35:03Z)
- Learning to Control PDEs with Differentiable Physics [102.36050646250871]
We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.
We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs.
arXiv Detail & Related papers (2020-01-21T11:58:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.