Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement Learning
- URL: http://arxiv.org/abs/2106.01854v1
- Date: Thu, 3 Jun 2021 13:57:32 GMT
- Title: Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement Learning
- Authors: Ali Taghibakhshi, Scott MacLachlan, Luke Olson, Matthew West
- Abstract summary: A system of linear equations defines a graph on the set of unknowns.
Each level of a multigrid solver requires the selection of an appropriate coarse graph along with restriction and interpolation operators that map to and from the coarse representation.
It has been demonstrated that it is possible to directly learn the AMG interpolation and restriction operators, given a coarse graph selection.
We propose a coarsening method using a reinforcement learning (RL) agent based on graph neural networks (GNNs).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large sparse linear systems of equations are ubiquitous in science and
engineering, such as those arising from discretizations of partial differential
equations. Algebraic multigrid (AMG) methods are one of the most common methods
of solving such linear systems, with an extensive body of underlying
mathematical theory. A system of linear equations defines a graph on the set of
unknowns and each level of a multigrid solver requires the selection of an
appropriate coarse graph along with restriction and interpolation operators
that map to and from the coarse representation. The efficiency of the multigrid
solver depends critically on this selection and many selection methods have
been developed over the years. Recently, it has been demonstrated that it is
possible to directly learn the AMG interpolation and restriction operators,
given a coarse graph selection. In this paper, we consider the complementary
problem of learning to coarsen graphs for a multigrid solver. We propose a
method using a reinforcement learning (RL) agent based on graph neural networks
(GNNs), which can learn to perform graph coarsening on small training graphs
and then be applied to unstructured large graphs. We demonstrate that this
method can produce better coarse graphs than existing algorithms, even as the
graph size increases and other properties of the graph are varied. We also
propose an efficient inference procedure for performing graph coarsening that
results in linear time complexity in graph size.
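The abstract describes the coarse-graph selection problem but not a procedure, so for orientation the following is a minimal sketch of the kind of classical greedy, strength-of-connection coarsening that the learned RL/GNN policy is meant to replace. It assumes SciPy; `greedy_coarsen` and the threshold `theta` are illustrative names, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp

def greedy_coarsen(A, theta=0.25):
    """Select coarse (C) points greedily from the strength-of-connection
    graph of a sparse matrix A; remaining nodes become fine (F) points."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    S = sp.lil_matrix((n, n), dtype=bool)       # strong-connection graph
    for i in range(n):
        row = np.abs(A.getrow(i).toarray().ravel())
        row[i] = 0.0                            # ignore the diagonal
        m = row.max()
        if m > 0:
            S[i, row >= theta * m] = True       # |a_ij| >= theta * max_k |a_ik|
    S = S.tocsr()
    degrees = np.asarray(S.sum(axis=1)).ravel()
    state = np.zeros(n, dtype=int)              # 0 undecided, 1 C-point, -1 F-point
    while (state == 0).any():
        i = int(np.argmax(np.where(state == 0, degrees, -1)))
        state[i] = 1                            # most-connected undecided node -> C
        for j in S.getrow(i).indices:
            if state[j] == 0:
                state[j] = -1                   # its strong neighbors -> F
    return np.flatnonzero(state == 1)

# Toy usage: 1D Poisson matrix; classical coarsening picks every other node.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(9, 9))
print(greedy_coarsen(A))                        # -> [1 3 5 7]
```

The paper's RL agent replaces the hand-written degree heuristic above with a GNN-scored node-selection policy trained on small graphs.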
Related papers
- Polynomial Graphical Lasso: Learning Edges from Gaussian Graph-Stationary Signals [18.45931641798935]
This paper introduces Polynomial Graphical Lasso (PGL), a new approach to learning graph structures from nodal signals.
Our key contribution lies in modeling the signals as Gaussian and stationary on the graph, enabling the development of a graph-learning formulation that generalizes the graphical lasso.
arXiv Detail & Related papers (2024-04-03T10:19:53Z)
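For context, PGL generalizes the classical graphical lasso. A minimal sketch of that baseline (not PGL itself), using scikit-learn's GraphicalLasso on toy Gaussian signals; all values are illustrative:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))           # 500 signals observed on 10 nodes (toy)
model = GraphicalLasso(alpha=0.1).fit(X)     # sparse inverse-covariance estimate
adjacency = np.abs(model.precision_) > 1e-3  # nonzero precision entries ~ graph edges
np.fill_diagonal(adjacency, False)
```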
- HeteroMILE: a Multi-Level Graph Representation Learning Framework for Heterogeneous Graphs [13.01983932286923]
We propose a Multi-Level Embedding framework for nodes on a heterogeneous graph (HeteroMILE).
HeteroMILE repeatedly coarsens the large graph into smaller ones while preserving its backbone structure, before embedding the coarsened graph.
It then refines the coarsened embedding back to the original graph using a heterogeneous graph convolutional neural network.
arXiv Detail & Related papers (2024-03-31T22:22:10Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
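NodeFormer's operator is a kernelized variant of Gumbel-Softmax; the sketch below shows only the generic Gumbel-Softmax relaxation it builds on, with illustrative logits, not the paper's kernelized form:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Draw a 'soft' one-hot sample from a categorical given unnormalized logits."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y -= y.max()                      # numerical stability before exponentiating
    e = np.exp(y)
    return e / e.sum()                # approaches a one-hot vector as tau -> 0

edge_logits = np.array([1.2, 0.3, -0.5])    # scores of 3 candidate neighbors (toy)
print(gumbel_softmax(edge_logits, tau=0.1)) # nearly one-hot neighbor selection
```

Because the sample is a differentiable function of the logits, edge selection can be trained end to end by gradient descent.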
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- A Unified Framework for Optimization-Based Graph Coarsening [5.720402020129441]
Given a large graph, graph coarsening aims to learn a smaller, tractable graph while preserving the properties of the original.
The proposed framework unifies graph learning and dimensionality reduction.
It is established that the learned coarsened graph is $\epsilon$-similar to the original graph, with $\epsilon \in (0,1)$.
arXiv Detail & Related papers (2022-10-02T06:31:42Z)
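As a point of reference, aggregation-based coarsening forms a coarse Laplacian L_c = P^T L P from a partition matrix P; the contribution of frameworks like the one above is choosing P by optimization so that L_c stays epsilon-similar to L. A minimal numpy sketch with a fixed toy partition (not the paper's optimized one):

```python
import numpy as np

L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)  # Laplacian of a 4-node cycle graph

partition = [0, 0, 1, 1]                       # merge nodes {0,1} and {2,3} (toy choice)
P = np.zeros((4, 2))
for node, group in enumerate(partition):
    P[node, group] = 1.0
P /= np.sqrt(P.sum(axis=0))                    # normalize columns of the partition matrix

L_c = P.T @ L @ P                              # 2x2 coarse Laplacian: [[1, -1], [-1, 1]]
```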
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
A graphon is a nonparametric model that generates graphs of arbitrary size and can be induced from graphs easily.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
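The generative primitive behind graphon models is sampling a graph from a graphon W: draw latent node positions uniformly in [0, 1], then flip an independent coin for each pair. A minimal sketch with a toy graphon (the autoencoder itself is not shown):

```python
import numpy as np

def sample_graph(W, n, rng=np.random.default_rng(0)):
    u = rng.uniform(size=n)                     # latent node positions in [0, 1]
    probs = W(u[:, None], u[None, :])           # pairwise edge probabilities W(u_i, u_j)
    A = rng.uniform(size=(n, n)) < probs        # independent Bernoulli edge draws
    A = np.triu(A, k=1)                         # keep one direction, drop self-loops
    return (A | A.T).astype(int)                # symmetrize to an undirected graph

W = lambda x, y: 0.8 * np.exp(-3 * np.abs(x - y))   # a smooth toy graphon
A = sample_graph(W, n=50)
```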
- GL-Coarsener: A Graph representation learning framework to construct coarse grid hierarchy for AMG solvers [0.0]
Algebraic multi-grid (AMG) methods are numerical methods used to solve large linear systems of equations efficiently.
Here we propose an aggregation-based coarsening framework leveraging graph representation learning and clustering algorithms.
Our method introduces the power of machine learning into the AMG research field and opens a new perspective for future research.
arXiv Detail & Related papers (2020-11-19T17:49:09Z)
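A hedged sketch of the aggregation-based pipeline GL-Coarsener describes: embed nodes, cluster the embeddings, and turn the clusters into an aggregation operator P. A spectral embedding stands in here for the learned graph embeddings, and all names are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def aggregate(L, n_coarse, rng_seed=0):
    # Stand-in embedding: low eigenvectors of the Laplacian. GL-Coarsener would
    # use representation-learning embeddings (e.g., node2vec-style) instead.
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, 1:n_coarse + 1]                    # skip the constant eigenvector
    labels = KMeans(n_clusters=n_coarse, n_init=10,
                    random_state=rng_seed).fit_predict(emb)
    P = np.zeros((L.shape[0], n_coarse))
    P[np.arange(L.shape[0]), labels] = 1.0           # one aggregate per node
    return P                                         # coarse operator: P.T @ L @ P
```

Each column of P is one aggregate; the coarse-grid matrix for the next AMG level is then P.T @ L @ P.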
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a well-clustered representative graph, which can in turn be used to solve clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Towards Scale-Invariant Graph-related Problem Solving by Iterative Homogeneous Graph Neural Networks [39.370875358317946]
Current graph neural networks (GNNs) lack generalizability with respect to scales (graph sizes, graph diameters, edge weights, etc.) when solving many graph analysis problems.
We propose several extensions to address this issue. First, inspired by the dependence of the iteration count of common graph-theory algorithms on graph size, we learn to terminate the message-passing process in GNNs adaptively according to its progress.
Second, inspired by the fact that many graph-theory algorithms are homogeneous with respect to graph weights, we introduce homogeneous transformation layers, which are universal homogeneous function approximators, to convert ordinary GNNs into homogeneous ones.
arXiv Detail & Related papers (2020-10-26T12:57:28Z)
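The first extension above, adaptive termination, can be pictured as a message-passing loop with a data-dependent stopping rule. The sketch below uses a hand-written convergence test where the paper learns a halting criterion; it is schematic, not the authors' model:

```python
import numpy as np

def propagate(A_norm, X, tol=1e-4, max_iters=100):
    """Repeat one message-passing step until node states stop changing."""
    H = X
    for step in range(max_iters):
        H_next = A_norm @ H                      # one round of neighbor averaging
        if np.linalg.norm(H_next - H) < tol:     # the paper learns this halting test
            return H_next, step + 1
        H = H_next
    return H, max_iters
```

On larger graphs the loop naturally runs for more steps, which is exactly the size-dependent behavior the learned termination is meant to capture.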
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
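Dyadic independence is what makes the last method scale: every node pair is an independent Bernoulli draw, so with a block-structured probability matrix one can sample edge counts per block instead of enumerating all O(n^2) pairs. A rough sketch with toy block sizes and probabilities (duplicate edges are possible in this approximation and would be deduplicated in practice):

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [3000, 2000]                           # two node blocks (toy values)
B = np.array([[0.010, 0.001],                  # within/between-block edge probs
              [0.001, 0.008]])
offsets = np.concatenate([[0], np.cumsum(sizes)])

edge_lists = []
for a in range(len(sizes)):
    for b in range(a, len(sizes)):
        m = rng.binomial(sizes[a] * sizes[b], B[a, b])   # edge count for this block
        i = rng.integers(offsets[a], offsets[a + 1], size=m)
        j = rng.integers(offsets[b], offsets[b + 1], size=m)
        keep = i < j                                     # undirected, no self-loops
        edge_lists.append(np.stack([i[keep], j[keep]], axis=1))
edges = np.concatenate(edge_lists)
```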
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.