Learning Relaxation for Multigrid
- URL: http://arxiv.org/abs/2207.11255v1
- Date: Mon, 25 Jul 2022 12:43:50 GMT
- Title: Learning Relaxation for Multigrid
- Authors: Dmitry Kuznichov
- Abstract summary: We use Neural Networks to learn relaxation parameters for an ensemble of diffusion operators with random coefficients.
We show that learning relaxation parameters on relatively small grids using a two-grid method and Gelfand's formula as a loss function can be implemented easily.
- Score: 1.14219428942199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: During the last decade, Neural Networks (NNs) have proved to be extremely
effective tools in many fields of engineering, including autonomous vehicles,
medical diagnosis and search engines, and even in art creation. Indeed, NNs
often decisively outperform traditional algorithms. One area that is only
recently attracting significant interest is using NNs for designing numerical
solvers, particularly for discretized partial differential equations. Several
recent papers have considered employing NNs for developing multigrid methods,
which are a leading computational tool for solving discretized partial
differential equations and other sparse-matrix problems. We extend these new
ideas, focusing on so-called relaxation operators (also called smoothers),
which are an important component of the multigrid algorithm that has not yet
received much attention in this context. We explore an approach for using NNs
to learn relaxation parameters for an ensemble of diffusion operators with
random coefficients, for Jacobi-type smoothers and for 4-Color Gauss-Seidel
smoothers. The latter yield exceptionally efficient and easy-to-parallelize
Successive Over-Relaxation (SOR) smoothers. Moreover, this work demonstrates
that learning relaxation parameters on relatively small grids using a two-grid
method and Gelfand's formula as a loss function can be implemented easily.
These methods efficiently produce nearly-optimal parameters, thereby
significantly improving the convergence rate of multigrid algorithms on large
grids.
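To make the loss construction concrete, the sketch below assembles the two-grid error-propagation operator for a 1D Poisson model problem with a weighted-Jacobi smoother and scores a candidate relaxation parameter by the finite-k Gelfand estimate rho(E) ~ ||E^k||^(1/k). This is a minimal NumPy illustration, not the authors' code: the model problem, grid size, and the brute-force parameter scan (standing in for NN training) are all assumptions.

```python
import numpy as np

def poisson_1d(n):
    """1D Poisson matrix (Dirichlet boundaries) on n interior points."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def prolongation(n):
    """Linear interpolation from (n - 1) // 2 coarse points to n fine points."""
    nc = (n - 1) // 2
    P = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1                                  # fine-grid index of coarse point j
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    return P

def two_grid_error_op(A, omega, nu=1):
    """Error-propagation matrix of a two-grid cycle with nu pre- and post-smoothing
    steps of weighted Jacobi, S = I - omega * D^{-1} A."""
    n = A.shape[0]
    S = np.eye(n) - omega * (A / np.diag(A)[:, None])  # D^{-1} A via row scaling
    P = prolongation(n)
    R = 0.5 * P.T                                      # full-weighting restriction
    Ac = R @ A @ P                                     # Galerkin coarse-grid operator
    CGC = np.eye(n) - P @ np.linalg.solve(Ac, R @ A)   # coarse-grid correction
    Snu = np.linalg.matrix_power(S, nu)
    return Snu @ CGC @ Snu

def gelfand_loss(E, k=10):
    """Finite-k Gelfand estimate of the spectral radius: rho(E) ~ ||E^k||_2^(1/k)."""
    return np.linalg.norm(np.linalg.matrix_power(E, k), 2) ** (1.0 / k)

A = poisson_1d(31)                                     # n must be odd for this coarsening
omegas = np.linspace(0.3, 1.1, 81)
losses = [gelfand_loss(two_grid_error_op(A, w)) for w in omegas]
best = int(np.argmin(losses))
print(f"near-optimal omega ~ {omegas[best]:.3f}, two-grid factor ~ {losses[best]:.3f}")
```

In the paper's setting, the scalar scan would be replaced by a network predicting relaxation parameters for an ensemble of random-coefficient diffusion operators, trained against the same Gelfand-type quantity on relatively small grids.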
Related papers
- Dynamically configured physics-informed neural network in topology optimization applications [4.403140515138818]
The physics-informed neural network (PINN) can avoid generating enormous amounts of data when solving forward problems.
A dynamically configured PINN-based topology optimization (DCPINN-TO) method is proposed.
The accuracy of the displacement prediction and optimization results indicate that the DCPINN-TO method is effective and efficient.
arXiv Detail & Related papers (2023-12-12T05:35:30Z)
- Reducing operator complexity in Algebraic Multigrid with Machine Learning Approaches [3.3610422011700187]
We propose a data-driven and machine-learning-based approach to compute non-Galerkin coarse-grid operators.
We have developed novel ML algorithms that utilize neural networks (NNs) combined with smooth test vectors from multigrid eigenvalue problems.
arXiv Detail & Related papers (2023-07-15T03:13:40Z)
- A Deep Learning algorithm to accelerate Algebraic Multigrid methods in Finite Element solvers of 3D elliptic PDEs [0.0]
We introduce a novel Deep Learning algorithm that minimizes the computational cost of the Algebraic multigrid method when used as a finite element solver.
We show experimentally that the pooling successfully reduces the computational cost of processing a large sparse matrix while preserving the features needed for the regression task at hand.
arXiv Detail & Related papers (2023-04-21T09:18:56Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms is not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z)
- DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z)
- Learning optimal multigrid smoothers via neural networks [1.9336815376402723]
We propose an efficient framework for learning optimized smoothers from operator stencils in the form of convolutional neural networks (CNNs).
The CNNs are trained on small-scale problems of a given PDE type, using a supervised loss function derived from multigrid convergence theory.
Numerical results on anisotropic rotated Laplacian problems demonstrate improved convergence rates and solution time compared with classical hand-crafted relaxation methods.
arXiv Detail & Related papers (2021-02-24T05:02:54Z)
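The stencil-to-smoother interface described in the preceding entry might look like the following toy PyTorch module. This is an assumption-laden sketch, not the paper's model: the layer sizes, the output parameterization omega in (0, 2), and the random example stencils are all invented for illustration.

```python
import torch
import torch.nn as nn

class StencilToSmoother(nn.Module):
    """Toy CNN mapping a 3x3 operator stencil to one relaxation weight per operator."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3),  # 3x3 stencil -> 1x1 feature map
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1),
        )

    def forward(self, stencil):                   # stencil: (batch, 1, 3, 3)
        # Squash the raw output into (0, 2), a typical range for damping factors.
        return 2.0 * torch.sigmoid(self.net(stencil)).flatten(1)

model = StencilToSmoother()
stencils = torch.randn(8, 1, 3, 3)   # stand-ins for random-coefficient diffusion stencils
omega = model(stencils)              # shape (8, 1): one weight per operator
# In training, these weights would feed a differentiable convergence-factor loss
# (e.g. a Gelfand-type estimate) evaluated on small grids, per the supervised setup.
print(omega.shape)
```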
- Combinatorial optimization and reasoning with graph neural networks [7.8107109904672365]
Combinatorial optimization is a well-established area in operations research and computer science.
Recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks.
arXiv Detail & Related papers (2021-02-18T18:47:20Z)
- Scaling the Convex Barrier with Sparse Dual Algorithms [141.4085318878354]
We present two novel dual algorithms for tight and efficient neural network bounding.
Both methods recover the strengths of the new relaxation: tightness and a linear separation oracle.
We can obtain better bounds than off-the-shelf solvers in only a fraction of their running time.
arXiv Detail & Related papers (2021-01-14T19:45:17Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)