Reducing operator complexity in Algebraic Multigrid with Machine Learning Approaches
- URL: http://arxiv.org/abs/2307.07695v1
- Date: Sat, 15 Jul 2023 03:13:40 GMT
- Title: Reducing operator complexity in Algebraic Multigrid with Machine Learning Approaches
- Authors: Ru Huang, Kai Chang, Huan He, Ruipeng Li, Yuanzhe Xi
- Abstract summary: We propose a data-driven and machine-learning-based approach to compute non-Galerkin coarse-grid operators.
We have developed novel ML algorithms that utilize neural networks (NNs) combined with smooth test vectors from multigrid eigenvalue problems.
- Score: 3.3610422011700187
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a data-driven and machine-learning-based approach to compute
non-Galerkin coarse-grid operators in algebraic multigrid (AMG) methods,
addressing the well-known issue of increasing operator complexity. Guided by
the AMG theory on spectrally equivalent coarse-grid operators, we have
developed novel ML algorithms that utilize neural networks (NNs) combined with
smooth test vectors from multigrid eigenvalue problems. The proposed method
demonstrates promise in reducing the complexity of coarse-grid operators while
maintaining overall AMG convergence for solving parametric partial differential
equation (PDE) problems. Numerical experiments on anisotropic rotated Laplacian
and linear elasticity problems are provided to showcase the performance and
compare with existing methods for computing non-Galerkin coarse-grid operators.
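For context on the complexity issue (not part of the paper's text): in classical AMG the coarse-grid operator is the Galerkin product A_c = P^T A P, and operator complexity is conventionally the total number of nonzeros over all levels divided by the nonzeros of the finest-level matrix. A minimal sketch below illustrates both quantities with plain SciPy, using a toy 1D Poisson matrix and a piecewise-constant aggregation prolongation (illustrative choices, not the paper's setup):

```python
# Minimal sketch (not the authors' code): Galerkin coarse-grid operator and
# operator complexity for a toy 1D Poisson problem with aggregation prolongation.
import numpy as np
import scipy.sparse as sp

n = 64
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Piecewise-constant aggregation: every two fine points form one aggregate.
rows = np.arange(n)
P = sp.csr_matrix((np.ones(n), (rows, rows // 2)), shape=(n, n // 2))

Ac = (P.T @ A @ P).tocsr()  # Galerkin coarse-grid operator A_c = P^T A P

# Operator complexity: total nonzeros over all levels / nonzeros on level 0.
print("operator complexity:", (A.nnz + Ac.nnz) / A.nnz)
```

The paper's point is that replacing the Galerkin product with a learned, sparser non-Galerkin operator can keep this ratio small on deeper hierarchies; per the abstract, the network is trained toward spectral equivalence with the Galerkin operator, evaluated on smooth test vectors from multigrid eigenvalue problems.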
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding explicit computation of the covariance matrices involved.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- A Deep Learning algorithm to accelerate Algebraic Multigrid methods in Finite Element solvers of 3D elliptic PDEs [0.0]
We introduce a novel deep learning algorithm that minimizes the computational cost of the algebraic multigrid (AMG) method when used as a finite element solver.
We show experimentally that the pooling successfully reduces the computational cost of processing a large sparse matrix while preserving the features needed for the regression task at hand.
arXiv Detail & Related papers (2023-04-21T09:18:56Z)
- Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL [154.13105285663656]
Cooperative Multi-Agent Reinforcement Learning (MARL) with permutation-invariant agents has achieved tremendous empirical success in real-world applications.
Unfortunately, theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited study of relational reasoning in existing work.
We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents, respectively, which mitigates the curse of many agents.
arXiv Detail & Related papers (2022-09-20T16:42:59Z)
- Learning Relaxation for Multigrid [1.14219428942199]
We use neural networks to learn relaxation parameters for an ensemble of diffusion operators with random coefficients.
We show that learning relaxation parameters on relatively small grids, using a two-grid method with Gelfand's formula as the loss function, can be implemented easily (see the sketch below).
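A minimal sketch of those ingredients, with placeholder choices (weighted-Jacobi smoother, exact coarse solve) rather than the paper's learned components. Gelfand's formula, rho(E) = lim_{k->inf} ||E^k||^{1/k}, motivates using ||E^k||^{1/k} at finite k as a loss on the two-grid error propagator E:

```python
# Minimal sketch (placeholder smoother/transfer choices, not the paper's NN):
# two-grid error propagator and a Gelfand-formula loss on its spectral radius.
import numpy as np

def two_grid_error_propagator(A, P, omega):
    n = A.shape[0]
    I = np.eye(n)
    D_inv = np.diag(1.0 / np.diag(A))
    S = I - omega * (D_inv @ A)                 # weighted-Jacobi smoother
    Ac = P.T @ A @ P                            # Galerkin coarse-grid operator
    CGC = I - P @ np.linalg.solve(Ac, P.T @ A)  # exact coarse-grid correction
    return S @ CGC @ S                          # one pre-, one post-smoothing step

def gelfand_loss(E, k=10):
    # Gelfand's formula: rho(E) = lim_{k->inf} ||E^k||^{1/k}; a finite k gives
    # a computable surrogate for the asymptotic two-grid convergence factor.
    return np.linalg.norm(np.linalg.matrix_power(E, k), 2) ** (1.0 / k)
```

In the paper this kind of scalar would serve as the training loss for the network that outputs the relaxation parameters; here omega is a fixed stand-in.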
arXiv Detail & Related papers (2022-07-25T12:43:50Z)
- Learning optimal multigrid smoothers via neural networks [1.9336815376402723]
We propose an efficient framework for learning optimized smoothers, in the form of convolutional neural networks (CNNs), from operator stencils.
The CNNs are trained on small-scale problems of a given PDE type, with a supervised loss function derived from multigrid convergence theory.
Numerical results on anisotropic rotated Laplacian problems demonstrate improved convergence rates and solution time compared with classical hand-crafted relaxation methods.
arXiv Detail & Related papers (2021-02-24T05:02:54Z)
- GL-Coarsener: A Graph representation learning framework to construct coarse grid hierarchy for AMG solvers [0.0]
Algebraic multi-grid (AMG) methods are numerical methods used to solve large linear systems of equations efficiently.
Here we propose an aggregation-based coarsening framework leveraging graph representation learning and clustering algorithms.
Our method brings the power of machine learning into the AMG research field and opens a new perspective for future research.
arXiv Detail & Related papers (2020-11-19T17:49:09Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
- Learning Algebraic Multigrid Using Graph Neural Networks [34.32501734380907]
We train a single graph neural network to learn a mapping from an entire class of such matrices to prolongation operators.
Experiments on a broad class of problems demonstrate improved convergence rates compared to classical AMG.
arXiv Detail & Related papers (2020-03-12T12:36:48Z)
- Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator, the random-top-k operator (sketched below).
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
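The abstract does not spell out the operator; the following is a hypothetical sketch, assuming it keeps the k largest-magnitude entries within a randomly sampled coordinate subset (the paper's exact definition may differ):

```python
# Hypothetical random-top-k sparsifier (one plausible reading, not verified
# against the paper): sample a random coordinate subset, then keep the k
# largest-magnitude entries within it, zeroing everything else.
import numpy as np

def random_top_k(g, k, subset_size, rng=None):
    assert k <= subset_size <= g.size
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(g.size, size=subset_size, replace=False)  # random subset
    keep = idx[np.argsort(np.abs(g[idx]))[-k:]]                # top-k within it
    out = np.zeros_like(g)
    out[keep] = g[keep]
    return out
```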
arXiv Detail & Related papers (2020-01-27T08:23:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.