Reinforcement Learning for Adaptive Mesh Refinement
- URL: http://arxiv.org/abs/2103.01342v1
- Date: Mon, 1 Mar 2021 22:55:48 GMT
- Title: Reinforcement Learning for Adaptive Mesh Refinement
- Authors: Jiachen Yang, Tarik Dzanic, Brenden Petersen, Jun Kudo, Ketan Mittal,
Vladimir Tomov, Jean-Sylvain Camier, Tuo Zhao, Hongyuan Zha, Tzanio Kolev,
Robert Anderson, Daniel Faissol
- Abstract summary: We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
- Score: 63.7867809197671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale finite element simulations of complex physical systems governed
by partial differential equations crucially depend on adaptive mesh refinement
(AMR) to allocate computational budget to regions where higher resolution is
required. Existing scalable AMR methods make heuristic refinement decisions
based on instantaneous error estimation and thus do not aim for long-term
optimality over an entire simulation. We propose a novel formulation of AMR as
a Markov decision process and apply deep reinforcement learning (RL) to train
refinement policies directly from simulation. AMR poses a new problem for RL in
that both the state dimension and available action set change at every step,
which we solve by proposing new policy architectures with differing generality
and inductive bias. The model sizes of these policy architectures are
independent of the mesh size and hence scale to arbitrarily large and complex
simulations. We demonstrate in comprehensive experiments on static function
estimation and the advection of different fields that RL policies can be
competitive with a widely-used error estimator and generalize to larger, more
complex, and unseen test problems.
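As a loose illustration of the mesh-size-independence property claimed in the abstract (this is a toy sketch, not the paper's actual policy architecture; the function and feature names below are hypothetical), a policy can score every mesh element with one shared set of weights, so the parameter count stays fixed no matter how many elements the mesh contains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-element policy: a single weight vector shared across all
# elements, so the number of parameters does not depend on the mesh size.
FEATURE_DIM = 4  # local features per element (e.g. error indicator, element size)
weights = rng.normal(size=FEATURE_DIM)

def refine_probabilities(element_features):
    """Score each element independently with the shared weights.

    element_features: (num_elements, FEATURE_DIM) array of local features.
    Returns a per-element refinement probability; works for any num_elements,
    which is how the action set can change size at every step.
    """
    logits = element_features @ weights
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid

# The same fixed-size policy applies to meshes of different sizes:
coarse = rng.normal(size=(10, FEATURE_DIM))    # 10-element mesh
fine = rng.normal(size=(1000, FEATURE_DIM))    # 1000-element mesh
assert refine_probabilities(coarse).shape == (10,)
assert refine_probabilities(fine).shape == (1000,)
```

Because each element is scored by the same shared function, a policy trained on small meshes can, in principle, be evaluated on arbitrarily large ones, which matches the scaling argument made in the abstract.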
Related papers
- A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations [2.7755345520127936]
We propose a domain-decomposition-based deep learning (DL) framework, named CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs).
The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers.
arXiv Detail & Related papers (2024-08-26T17:50:47Z) - Reinforcement learning for anisotropic p-adaptation and error estimation in high-order solvers [0.37109226820205005]
We present a novel approach to automate and optimize anisotropic p-adaptation in high-order h/p solvers using Reinforcement Learning (RL).
We develop an offline training approach, decoupled from the main solver, which incurs minimal extra cost when performing simulations.
We derive an inexpensive RL-based error estimation approach that enables the quantification of local discretization errors.
arXiv Detail & Related papers (2024-07-26T17:55:23Z) - Adaptive Swarm Mesh Refinement using Deep Reinforcement Learning with Local Rewards [12.455977048107671]
Adaptive Mesh Refinement (AMR) improves the efficiency of the Finite Element Method (FEM).
We formulate AMR as a system of collaborating, homogeneous agents that iteratively split into multiple new agents.
Our approach, Adaptive Swarm Mesh Refinement (ASMR), offers efficient, stable optimization and generates highly adaptive meshes at user-defined resolution during inference.
arXiv Detail & Related papers (2024-06-12T17:26:54Z) - Two-Stage ML-Guided Decision Rules for Sequential Decision Making under Uncertainty [55.06411438416805]
Sequential Decision Making under Uncertainty (SDMU) is ubiquitous in many domains such as energy, finance, and supply chains.
Some SDMU are naturally modeled as Multistage Problems (MSPs) but the resulting optimizations are notoriously challenging from a computational standpoint.
This paper introduces a novel approach Two-Stage General Decision Rules (TS-GDR) to generalize the policy space beyond linear functions.
The effectiveness of TS-GDR is demonstrated through an instantiation using deep recurrent neural networks, named Two-Stage Deep Decision Rules (TS-DDR).
arXiv Detail & Related papers (2024-05-23T18:19:47Z) - Distributionally Robust Model-based Reinforcement Learning with Large State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
arXiv Detail & Related papers (2023-09-05T13:42:11Z) - GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [101.5329678997916]
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, bilinear class, low witness rank problems, PO-bilinear class, and generalized regular PSR.
arXiv Detail & Related papers (2022-11-03T16:42:40Z) - Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement [17.72127385405445]
We present a novel formulation of adaptive mesh refinement (AMR) as a fully-cooperative Markov game.
We design a novel deep multi-agent reinforcement learning algorithm called Value Decomposition Graph Network (VDGN).
We show that VDGN policies significantly outperform error threshold-based policies in global error and cost metrics.
arXiv Detail & Related papers (2022-11-02T00:41:32Z) - A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning [132.45959478064736]
We propose a general framework that unifies model-based and model-free reinforcement learning.
We propose a novel estimation function with decomposable structural properties for optimization-based exploration.
Under our framework, we propose a new sample-efficient algorithm, OPtimization-based ExploRation with Approximation (OPERA).
arXiv Detail & Related papers (2022-09-30T17:59:16Z) - Deep Reinforcement Learning for Adaptive Mesh Refinement [0.9281671380673306]
We train policy networks for AMR strategy directly from numerical simulation.
The training process does not require an exact solution or a high-fidelity ground-truth solution of the partial differential equation.
We show that the deep reinforcement learning policies are competitive with common AMR heuristics, generalize well across problem classes, and strike a favorable balance between accuracy and cost.
arXiv Detail & Related papers (2022-09-25T23:45:34Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new algorithm for a policy gradient in TMDPs by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.