Deep Reinforcement Learning for Adaptive Mesh Refinement
- URL: http://arxiv.org/abs/2209.12351v1
- Date: Sun, 25 Sep 2022 23:45:34 GMT
- Title: Deep Reinforcement Learning for Adaptive Mesh Refinement
- Authors: Corbin Foucart, Aaron Charous, Pierre F.J. Lermusiaux
- Abstract summary: We train policy networks for AMR strategy directly from numerical simulation.
The training process does not require an exact solution or a high-fidelity ground truth to the partial differential equation.
We show that the deep reinforcement learning policies are competitive with common AMR heuristics, generalize well across problem classes, and strike a favorable balance between accuracy and cost.
- Score: 0.9281671380673306
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Finite element discretizations of problems in computational physics often
rely on adaptive mesh refinement (AMR) to preferentially resolve regions
containing important features during simulation. However, these spatial
refinement strategies are often heuristic and rely on domain-specific knowledge
or trial-and-error. We treat the process of adaptive mesh refinement as a
local, sequential decision-making problem under incomplete information,
formulating AMR as a partially observable Markov decision process. Using a deep
reinforcement learning approach, we train policy networks for AMR strategy
directly from numerical simulation. The training process does not require an
exact solution or a high-fidelity ground truth to the partial differential
equation at hand, nor does it require a pre-computed training dataset. The
local nature of our reinforcement learning formulation allows the policy
network to be trained inexpensively on much smaller problems than those on
which they are deployed. The methodology is not specific to any particular
partial differential equation, problem dimension, or numerical discretization,
and can flexibly incorporate diverse problem physics. To that end, we apply the
approach to a diverse set of partial differential equations, using a variety of
high-order discontinuous Galerkin and hybridizable discontinuous Galerkin
finite element discretizations. We show that the resultant deep reinforcement
learning policies are competitive with common AMR heuristics, generalize well
across problem classes, and strike a favorable balance between accuracy and
cost such that they often lead to a higher accuracy per problem degree of
freedom.
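The local, per-element formulation is the architectural idea that lets a policy trained on small problems be deployed on much larger ones: a single small policy network is evaluated independently on each element's local observation. Below is a minimal PyTorch sketch of that idea; the class name, observation features, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a local, per-element AMR policy (assumptions throughout;
# the paper's actual observations, network, and training loop are not reproduced here).
import torch
import torch.nn as nn
from torch.distributions import Categorical

ACTIONS = ("coarsen", "do_nothing", "refine")

class RefinePolicy(nn.Module):
    """Maps a local, per-element observation to a distribution over AMR actions."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, len(ACTIONS)),
        )

    def forward(self, obs: torch.Tensor) -> Categorical:
        return Categorical(logits=self.net(obs))

# Hypothetical local observation: e.g. scaled solution values or error indicators on an
# element and its neighbors; the true feature set is problem- and discretization-dependent.
obs_dim = 8
policy = RefinePolicy(obs_dim)

# The same network is applied independently to every element of a (toy) mesh.
n_elements = 200
local_obs = torch.randn(n_elements, obs_dim)   # stand-in for per-element features
dist = policy(local_obs)
actions = dist.sample()                        # one coarsen/keep/refine decision per element
log_probs = dist.log_prob(actions)             # what a policy-gradient update would use
```

Because each decision sees only local information, the formulation is naturally a partially observable one: no single refinement decision has access to the global solution state.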
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
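For context, a controlled SDE of the kind estimated in this line of work has the generic form below; the notation is assumed here and is not taken from the paper.

```latex
% Generic controlled SDE: drift b and (possibly non-uniform) diffusion \sigma depend on
% the state X_t and the control u_t; both are estimated from observed trajectories.
\[
  dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t,
  \qquad X_t \in \mathbb{R}^d,\ u_t \in \mathbb{R}^m.
\]
```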
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- Adaptive Swarm Mesh Refinement using Deep Reinforcement Learning with Local Rewards [12.455977048107671]
Adaptive Mesh Refinement (AMR) improves the Finite Element Method (FEM) by selectively refining mesh regions that require higher resolution.
We formulate AMR as a system of collaborating, homogeneous agents that iteratively split into multiple new agents.
Our approach, Adaptive Swarm Mesh Refinement (ASMR), offers efficient, stable optimization and generates highly adaptive meshes at user-defined resolution during inference.
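A rough sketch of the "agents that split" idea described above, in plain Python; the Agent class, the quad-style four-way split, and the random stand-in policy are assumptions for illustration, not the ASMR algorithm itself.

```python
# Toy illustration only: each mesh element is an agent; an agent that chooses to split
# is replaced by several child agents at the next refinement level.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    element_id: tuple   # path of split indices identifying the element
    depth: int = 0      # refinement level

def split(agent: Agent, n_children: int = 4) -> list:
    # a 2D quad-style split producing 4 children is assumed here
    return [Agent(agent.element_id + (k,), agent.depth + 1) for k in range(n_children)]

def refine_round(agents: list, decide) -> list:
    """One AMR iteration: every agent decides locally whether to split."""
    out = []
    for a in agents:
        out.extend(split(a) if decide(a) else [a])
    return out

# Random stand-in for a learned local policy, capped at refinement depth 2.
agents = [Agent((i,)) for i in range(4)]
for _ in range(3):
    agents = refine_round(agents, decide=lambda a: a.depth < 2 and random.random() < 0.5)
print(len(agents), "agents after 3 refinement rounds")
```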
arXiv Detail & Related papers (2024-06-12T17:26:54Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
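The summary does not spell out the proposed regularizer; purely as a generic placeholder, a regularized sampling objective of this kind has the shape below, and the vanishing-gradient and bad-stationary-point issues mentioned above arise when optimizing such objectives by gradient ascent.

```latex
% Generic regularized sampling objective; the entropy term H is only a placeholder and
% is not the specific regularization process introduced in the paper.
\[
  \max_{\theta}\; \mathbb{E}_{x \sim p_\theta}\big[f(x)\big] + \lambda\, H(p_\theta),
  \qquad H(p_\theta) = -\sum_{x} p_\theta(x)\log p_\theta(x).
\]
```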
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement [17.72127385405445]
We present a novel formulation of adaptive mesh refinement (AMR) as a fully-cooperative Markov game.
We design a novel deep multi-agent reinforcement learning algorithm called Value Decomposition Graph Network (VDGN).
We show that VDGN policies significantly outperform error threshold-based policies in global error and cost metrics.
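The factorization behind value-decomposition methods of this kind can be written as below; the notation is assumed, following the standard VDN-style decomposition, and the exact VDGN architecture is described in the paper.

```latex
% Team value decomposed into per-element (per-agent) utilities, each computed from the node
% embedding G(s)_i that a graph network produces over the mesh (notation assumed).
\[
  Q_{\mathrm{tot}}(s, \mathbf{a}) \;\approx\; \sum_{i \in \mathrm{elements}} Q_i\big(G(s)_i,\, a_i\big).
\]
```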
arXiv Detail & Related papers (2022-11-02T00:41:32Z)
- Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing conservation of linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
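The mechanism can be checked numerically: if every pairwise interaction is antisymmetric, per-particle updates sum to zero and total linear momentum cannot drift. The snippet below demonstrates that cancellation with an arbitrary stand-in kernel, not the paper's continuous convolution layers.

```python
# Antisymmetry => the summed per-particle update vanishes exactly (toy check).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 3))               # toy particle states

def g(xi, xj):
    return np.tanh(xi - 2.0 * xj)          # arbitrary pairwise function (assumption)

def antisym(xi, xj):
    return g(xi, xj) - g(xj, xi)           # antisymmetric by construction

updates = np.zeros_like(x)
for i in range(len(x)):
    for j in range(len(x)):
        if i != j:
            updates[i] += antisym(x[i], x[j])

# Pairwise terms cancel, so the total update (and hence momentum change) is ~0.
print(np.abs(updates.sum(axis=0)).max())
```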
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
- Learning High-Dimensional McKean-Vlasov Forward-Backward Stochastic Differential Equations with General Distribution Dependence [6.253771639590562]
We propose a novel deep learning method for computing MV-FBSDEs with a general form of mean-field interactions.
We use deep neural networks to solve standard BSDEs and approximate coefficient functions in order to solve high-dimensional MV-FBSDEs.
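For reference, a McKean-Vlasov FBSDE with mean-field interaction through the law of the state has the generic form below; the notation is assumed, and the paper treats a general form of this distribution dependence.

```latex
% Generic McKean-Vlasov FBSDE; \mathcal{L}(X_t) denotes the law of X_t (notation assumed).
\begin{align*}
  dX_t &= b\big(t, X_t, \mathcal{L}(X_t), Y_t\big)\,dt + \sigma\big(t, X_t, \mathcal{L}(X_t)\big)\,dW_t,\\
  dY_t &= -f\big(t, X_t, \mathcal{L}(X_t), Y_t, Z_t\big)\,dt + Z_t\,dW_t,
  \qquad Y_T = g\big(X_T, \mathcal{L}(X_T)\big).
\end{align*}
```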
arXiv Detail & Related papers (2022-04-25T18:59:33Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) whose problem data are heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the setting of fully decentralized computation.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
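For orientation, the classical single-node extra-gradient step for a VI with operator F and feasible set W is shown below; the paper's contribution is the decentralized, local-update variant and its rates, which this display does not capture.

```latex
% One extra-gradient iteration with operator F, feasible set \mathcal{W}, step size \gamma
% (standard form, notation assumed).
\[
  w_{k+1/2} = \operatorname{proj}_{\mathcal{W}}\big(w_k - \gamma F(w_k)\big), \qquad
  w_{k+1}   = \operatorname{proj}_{\mathcal{W}}\big(w_k - \gamma F(w_{k+1/2})\big).
\]
```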
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
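A quick way to see the mesh-size independence claimed above: when all trainable parameters live in a single network shared across elements, the parameter count does not change with the number of elements. This is an illustrative sketch, not the paper's actual architecture.

```python
# Parameter count of a shared per-element policy is fixed regardless of mesh size.
import torch.nn as nn

shared_policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3))
n_params = sum(p.numel() for p in shared_policy.parameters())
for n_elements in (100, 10_000, 1_000_000):
    # the same shared_policy handles any number of elements; n_params never changes
    print(f"{n_elements:>9} elements -> {n_params} policy parameters")
```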
arXiv Detail & Related papers (2021-03-01T22:55:48Z)
- A hybrid MGA-MSGD ANN training approach for approximate solution of linear elliptic PDEs [0.0]
We introduce a hybrid "Modified Genetic-Multilevel Gradient Descent" (MGA-MSGD) training algorithm.
It considerably improves the accuracy and efficiency of solving 3D mechanical problems described in strong form by PDEs via ANNs.
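As a toy illustration of alternating global (genetic) and local (gradient) search, the loop below minimizes a quadratic in plain NumPy; it is not the MGA-MSGD algorithm, and every choice (population size, mutation scale, step size) is an assumption.

```python
# Toy hybrid "genetic + gradient descent" loop on a quadratic loss (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
dim, pop_size = 10, 20
target = rng.normal(size=dim)

loss = lambda w: 0.5 * np.sum((w - target) ** 2)
grad = lambda w: w - target

population = [rng.normal(size=dim) for _ in range(pop_size)]
for generation in range(30):
    # --- genetic step: keep the fitter half, refill with mutated copies ---
    population.sort(key=loss)
    survivors = population[: pop_size // 2]
    children = [p + 0.1 * rng.normal(size=dim) for p in survivors]
    population = survivors + children
    # --- gradient step: one plain gradient-descent update per individual ---
    population = [p - 0.1 * grad(p) for p in population]

print("best loss:", loss(min(population, key=loss)))
```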
arXiv Detail & Related papers (2020-12-18T10:59:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.