A reinforcement learning strategy to automate and accelerate h/p-multigrid solvers
- URL: http://arxiv.org/abs/2407.15872v1
- Date: Thu, 18 Jul 2024 21:26:28 GMT
- Title: A reinforcement learning strategy to automate and accelerate h/p-multigrid solvers
- Authors: David Huergo, Laura Alonso, Saumitra Joshi, Adrian Juanicoteca, Gonzalo Rubio, Esteban Ferrer
- Abstract summary: Multigrid methods are very efficient but require fine-tuning of numerical parameters, such as the number of smoothing sweeps per level.
The paper uses a proximal policy optimization algorithm to automatically tune these multigrid parameters.
Our findings reveal that the proposed reinforcement learning h/p-multigrid approach significantly accelerates and improves the robustness of steady-state simulations.
- Score: 0.37109226820205005
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We explore a reinforcement learning strategy to automate and accelerate h/p-multigrid methods in high-order solvers. Multigrid methods are very efficient but require fine-tuning of numerical parameters, such as the number of smoothing sweeps per level and the correction fraction (i.e., proportion of the corrected solution that is transferred from a coarser grid to a finer grid). The objective of this paper is to use a proximal policy optimization algorithm to automatically tune the multigrid parameters and, by doing so, improve stability and efficiency of the h/p-multigrid strategy. Our findings reveal that the proposed reinforcement learning h/p-multigrid approach significantly accelerates and improves the robustness of steady-state simulations for one dimensional advection-diffusion and nonlinear Burgers' equations, when discretized using high-order h/p methods, on uniform and nonuniform grids.
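The sketch below illustrates the general idea in the abstract under stated assumptions: it is not the authors' implementation, and it replaces their high-order h/p-multigrid solver with a toy two-level (h-) multigrid cycle for a 1D Poisson-type problem. It assumes a Gymnasium-style environment and the PPO implementation from stable-baselines3; names such as `MultigridTuningEnv` are illustrative only. The agent's action chooses the two parameters named in the abstract: the number of smoothing sweeps and the correction fraction applied to the coarse-grid correction.

```python
# Hedged sketch: RL tuning of multigrid parameters (sweeps, correction fraction).
import numpy as np
import gymnasium as gym
from gymnasium import spaces


def laplacian_1d(n):
    """Second-order finite-difference 1D Laplacian on n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2


def damped_jacobi(A, u, f, sweeps, omega=2.0 / 3.0):
    """Apply `sweeps` damped-Jacobi smoothing steps to A u = f."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u


class MultigridTuningEnv(gym.Env):
    """Each step runs one two-grid cycle with the agent-chosen sweeps and correction fraction."""

    def __init__(self, n=64):
        super().__init__()
        self.n, self.A, self.f = n, laplacian_1d(n), np.ones(n)
        self.Ac = laplacian_1d(n // 2)  # coarse-grid operator
        # action[0]: smoothing sweeps in [1, 8]; action[1]: correction fraction in [0, 1]
        self.action_space = spaces.Box(np.array([1.0, 0.0]), np.array([8.0, 1.0]), dtype=np.float64)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float64)

    def _obs(self):
        res = np.linalg.norm(self.f - self.A @ self.u)
        return np.array([np.log10(res + 1e-30), float(self.cycles)])

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.u, self.cycles = np.zeros(self.n), 0
        return self._obs(), {}

    def step(self, action):
        sweeps = int(round(float(np.clip(action[0], 1, 8))))
        frac = float(np.clip(action[1], 0.0, 1.0))
        r0 = np.linalg.norm(self.f - self.A @ self.u)
        self.u = damped_jacobi(self.A, self.u, self.f, sweeps)   # pre-smoothing
        r = self.f - self.A @ self.u
        rc = 0.5 * (r[0::2] + r[1::2])                           # restriction (averaging)
        ec = np.linalg.solve(self.Ac, rc)                        # exact coarse solve
        e = np.repeat(ec, 2)                                     # piecewise-constant prolongation
        self.u = self.u + frac * e                               # partial coarse-grid correction
        self.u = damped_jacobi(self.A, self.u, self.f, sweeps)   # post-smoothing
        r1 = np.linalg.norm(self.f - self.A @ self.u)
        self.cycles += 1
        # Reward: orders of magnitude of residual reduction, minus a small smoothing-work penalty.
        reward = np.log10((r0 + 1e-30) / (r1 + 1e-30)) - 0.05 * sweeps
        return self._obs(), float(reward), bool(r1 < 1e-8), bool(self.cycles >= 50), {}


# Training sketch (requires stable-baselines3):
# from stable_baselines3 import PPO
# model = PPO("MlpPolicy", MultigridTuningEnv(), verbose=0)
# model.learn(total_timesteps=20_000)
```

The reward shaping (residual reduction per cycle minus a work penalty) is one plausible choice for trading off robustness against cost; the paper's actual state, action, and reward definitions may differ.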
Related papers
- Automating the Design of Multigrid Methods with Evolutionary Program Synthesis [0.0]
In many cases, the design of an efficient or at least working multigrid solver is an open problem.
This thesis demonstrates that grammar-guided genetic programming can discover multigrid methods of unprecedented structure.
We present our implementation in the form of the Python framework EvoStencils, which is freely available as open-source software.
arXiv Detail & Related papers (2023-12-22T17:55:48Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Dynamic Voxel Grid Optimization for High-Fidelity RGB-D Supervised Surface Reconstruction [130.84162691963536]
We introduce a novel dynamic grid optimization method for high-fidelity 3D surface reconstruction.
We optimize the process by dynamically modifying the grid and assigning finer-scale voxels to regions of higher complexity.
The proposed approach is able to generate high-quality 3D reconstructions with fine details on both synthetic and real-world data.
arXiv Detail & Related papers (2023-04-12T22:39:57Z)
- Learning Relaxation for Multigrid [1.14219428942199]
We use Neural Networks to learn relaxation parameters for an ensemble of diffusion operators with random coefficients.
We show that learning relaxation parameters on relatively small grids using a two-grid method and Gelfand's formula as a loss function can be implemented easily.
arXiv Detail & Related papers (2022-07-25T12:43:50Z)
- Efficient single-grid and multi-grid solvers for real-space orbital-free density functional theory [5.623232537411766]
This work develops a new single-grid solver to improve the computational efficiency of real-space orbital-free density functional theory.
Numerical examples show that the proposed single-grid solver can improve computational efficiency by two orders of magnitude.
arXiv Detail & Related papers (2022-05-03T13:19:18Z)
- AdaGrid: Adaptive Grid Search for Link Prediction Training Objective [58.79804082133998]
The training objective crucially influences the model's performance and generalization capabilities.
We propose Adaptive Grid Search (AdaGrid) which dynamically adjusts the edge message ratio during training.
We show that AdaGrid can boost model performance by up to 1.9% while being nine times more time-efficient than a complete search.
arXiv Detail & Related papers (2022-03-30T09:24:17Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Enhancing Column Generation by a Machine-Learning-Based Pricing Heuristic for Graph Coloring [5.278929511653199]
Column Generation (CG) is an effective method for solving large-scale optimization problems.
We propose a Machine-Learning Pricing Heuristic that can generate many high-quality columns efficiently.
arXiv Detail & Related papers (2021-12-08T03:58:25Z)
- Learning optimal multigrid smoothers via neural networks [1.9336815376402723]
We propose an efficient framework for learning optimized smoothers from operator stencils in the form of convolutional neural networks (CNNs).
CNNs are trained on small-scale problems from a given type of PDEs based on a supervised loss function derived from multigrid convergence theories.
Numerical results on anisotropic rotated Laplacian problems demonstrate improved convergence rates and solution time compared with classical hand-crafted relaxation methods.
arXiv Detail & Related papers (2021-02-24T05:02:54Z)
- GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
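As a hedged illustration of the GACEM entry above (not the authors' code), the sketch below shows a bare-bones Cross-Entropy Method for black-box constraint satisfaction with a plain Gaussian sampler; GACEM replaces that sampler with a masked auto-regressive neural network so that several disjoint solution regions can be tracked at once. The loop structure shown here is the common part; all function and variable names are illustrative.

```python
# Minimal Cross-Entropy Method sketch for black-box constraint satisfaction.
import numpy as np


def cross_entropy_method(constraint_fn, dim, n_samples=200, elite_frac=0.2, iters=50, seed=0):
    """Minimise a constraint-violation score returned by `constraint_fn` (0 means satisfied)."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        scores = np.array([constraint_fn(x) for x in samples])
        elite = samples[np.argsort(scores)[:n_elite]]   # keep the least-violating samples
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        if scores.min() <= 0.0:                          # all constraints satisfied
            break
    return mean


# Toy usage: find x in R^2 within distance 0.1 of (2, -1); violation = distance beyond the radius.
violation = lambda x: max(0.0, np.linalg.norm(x - np.array([2.0, -1.0])) - 0.1)
print(cross_entropy_method(violation, dim=2))
```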
This list is automatically generated from the titles and abstracts of the papers on this site.