Graph coarsening: From scientific computing to machine learning
- URL: http://arxiv.org/abs/2106.11863v1
- Date: Tue, 22 Jun 2021 15:31:50 GMT
- Title: Graph coarsening: From scientific computing to machine learning
- Authors: Jie Chen, Yousef Saad and Zechen Zhang
- Abstract summary: The goal of this paper is to take a broad look into coarsening techniques that have been successfully deployed in scientific computing.
In machine learning, graph coarsening goes under various names, e.g., graph downsampling or graph reduction.
As will be seen, a common strategy in these methods is to rely on spectral properties to define the coarse graph.
- Score: 11.728753892489776
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The general method of graph coarsening or graph reduction has been a
remarkably useful and ubiquitous tool in scientific computing and it is now
just starting to have a similar impact in machine learning. The goal of this
paper is to take a broad look into coarsening techniques that have been
successfully deployed in scientific computing and see how similar principles
are finding their way into more recent applications related to machine learning.
In scientific computing, coarsening plays a central role in algebraic multigrid
methods as well as the related class of multilevel incomplete LU
factorizations. In machine learning, graph coarsening goes under various names,
e.g., graph downsampling or graph reduction. Its goal in most cases is to
replace some original graph by one which has fewer nodes, but whose structure
and characteristics are similar to those of the original graph. As will be
seen, a common strategy in these methods is to rely on spectral properties to
define the coarse graph.
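To make the abstract's theme concrete, here is a minimal, self-contained sketch (our own illustration, not an algorithm from the paper) of one coarsening level built on greedy heavy-edge matching, a classical strategy from algebraic multigrid. The coarse adjacency is formed as $P^\top A P$ for a binary partition matrix $P$; all names are ours.

```python
import numpy as np

def heavy_edge_matching(A):
    """Greedily pair each node with its heaviest unmatched neighbor.

    Returns an array mapping each fine node to a coarse node id.
    """
    n = A.shape[0]
    coarse_id = -np.ones(n, dtype=int)
    next_id = 0
    # Visit nodes in order of decreasing total edge weight (a common heuristic).
    for i in np.argsort(-A.sum(axis=1)):
        if coarse_id[i] != -1:
            continue
        # Heaviest neighbor that is still unmatched.
        weights = np.where(coarse_id == -1, A[i], -np.inf)
        weights[i] = -np.inf
        j = int(np.argmax(weights))
        if weights[j] > 0:
            coarse_id[i] = coarse_id[j] = next_id   # merge i and j
        else:
            coarse_id[i] = next_id                  # singleton node
        next_id += 1
    return coarse_id

def coarsen(A):
    """One coarsening level: A_c = P^T A P with a binary partition matrix P."""
    coarse_id = heavy_edge_matching(A)
    nc = coarse_id.max() + 1
    P = np.zeros((A.shape[0], nc))
    P[np.arange(A.shape[0]), coarse_id] = 1.0
    Ac = P.T @ A @ P
    np.fill_diagonal(Ac, 0.0)  # drop self-loops created by merging pairs
    return Ac, P

# Tiny example: a weighted path graph on 4 nodes collapses to 2 supernodes.
A = np.array([[0, 2, 0, 0],
              [2, 0, 1, 0],
              [0, 1, 0, 3],
              [0, 0, 3, 0]], dtype=float)
Ac, P = coarsen(A)
print(Ac)
```

Spectral methods differ mainly in how the partition is chosen (eigenvectors of the Laplacian rather than edge weights), but the projection step $P^\top A P$ is the same.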
Related papers
- From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited [51.24526202984846]
Graph-based semi-supervised learning (GSSL) has long been a hot research topic.
Graph convolutional networks (GCNs) have become the predominant technique owing to their promising performance.
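For context, the core of a GCN layer is feature propagation through a symmetrically normalized adjacency matrix; a minimal numpy sketch of this standard operation (ours, not code from the paper):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```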
arXiv Detail & Related papers (2023-09-24T10:10:21Z)
- A Gromov--Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening [19.09507367362567]
This work studies graph coarsening from a different perspective, developing a theory for preserving graph distances.
The geometric approach is useful when working with a collection of graphs, such as in graph classification and regression.
Minimizing this difference can be done using the popular weighted kernel $K$-means method.
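For reference, a bare-bones implementation of the standard weighted kernel $K$-means update (our sketch; variable names are ours), in which each point moves to the cluster whose weighted centroid is nearest in the kernel-induced feature space:

```python
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=20, seed=0):
    """Weighted kernel k-means on an n x n kernel matrix K with weights w.

    Distance of point i to cluster c (members M, total weight s):
        K[i,i] - 2 * sum_{j in M} w[j] K[i,j] / s
               + sum_{j,l in M} w[j] w[l] K[j,l] / s^2
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            m = labels == c
            s = w[m].sum()
            if s == 0:                  # empty cluster: exclude it this round
                dist[:, c] = np.inf
                continue
            cross = (K[:, m] * w[m]).sum(axis=1) / s
            within = (w[m][:, None] * w[m][None, :] * K[np.ix_(m, m)]).sum() / s**2
            dist[:, c] = np.diag(K) - 2 * cross + within
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

In a coarsening context, each resulting cluster becomes one supernode of the coarse graph.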
arXiv Detail & Related papers (2023-06-15T04:47:26Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
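To give a flavor of what such an augmentation bank contains, here are two standard graph transformations, random edge dropping and feature masking, in a generic numpy sketch; the paper's specific spectral operations differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(A, p):
    """Randomly delete a fraction p of the undirected edges."""
    keep = rng.random(A.shape) >= p
    keep = np.triu(keep, 1)            # one decision per undirected edge
    return A * (keep | keep.T)

def mask_features(X, p):
    """Zero out each feature dimension independently with probability p."""
    return X * (rng.random(X.shape[1]) >= p)

# Two stochastic "views" of the same graph for a contrastive objective.
A = np.ones((6, 6)) - np.eye(6)        # toy complete graph on 6 nodes
view1, view2 = drop_edges(A, 0.2), drop_edges(A, 0.2)
```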
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- A Unified Framework for Optimization-Based Graph Coarsening [5.720402020129441]
Given a large graph, graph coarsening aims to learn a smaller, more tractable graph while preserving the properties of the original graph.
The proposed framework unifies graph learning and dimensionality reduction.
It is established that the learned coarsened graph is $\epsilon$-similar to the original graph, with $\epsilon \in (0,1)$.
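One common reading of "$\epsilon$-similar" (ours; the paper's formal definition may differ) asks that Laplacian quadratic forms be preserved up to a relative error $\epsilon$. A quick empirical probe of that gap:

```python
import numpy as np

def similarity_eps(L, P, n_trials=200, seed=0):
    """Empirical epsilon: largest relative gap between x^T L x and the
    coarse quadratic form, over random test vectors. Here Lc = P^T L P
    and the restriction of x is its per-group average."""
    rng = np.random.default_rng(seed)
    Lc = P.T @ L @ P
    R = (P / P.sum(axis=0)).T            # averaging restriction operator
    eps = 0.0
    for _ in range(n_trials):
        x = rng.standard_normal(L.shape[0])
        xc = R @ x
        eps = max(eps, abs(x @ L @ x - xc @ Lc @ xc) / (x @ L @ x))
    return eps

# Toy example: 4-node path graph coarsened by merging node pairs.
A = np.diag([1.0, 1.0, 1.0], 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
P = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(similarity_eps(L, P))
```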
arXiv Detail & Related papers (2022-10-02T06:31:42Z)
- Learning node embeddings via summary graphs: a brief theoretical analysis [55.25628709267215]
Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a problem.
Recent works try to improve scalability via graph summarization -- i.e., they learn embeddings on a smaller summary graph, and then restore the node embeddings of the original graph.
We give an in-depth theoretical analysis of three specific embedding learning methods based on the introduced kernel matrix.
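The restoration step these works rely on can be as simple as copying each supernode's learned embedding back to its member nodes; a tiny sketch with hypothetical names:

```python
import numpy as np

def restore_embeddings(Z_summary, membership):
    """Give each original node the embedding of its supernode.

    Z_summary : (n_super, d) embeddings learned on the summary graph
    membership: (n,) array giving each original node's supernode id
    """
    return Z_summary[membership]

# e.g., 5 original nodes collapsed into 2 supernodes
Z_s = np.array([[1.0, 0.0], [0.0, 1.0]])
print(restore_embeddings(Z_s, np.array([0, 0, 1, 1, 1])))
```

More refined schemes add a local refinement pass after copying; the theoretical question is how much quality such restorations lose.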
arXiv Detail & Related papers (2022-07-04T04:09:50Z)
- Malware Analysis with Symbolic Execution and Graph Kernel [2.1377923666134113]
We propose a new, efficient open-source toolchain for machine-learning-based classification.
We focus on the 1-dimensional Weisfeiler-Lehman kernel, which can capture local similarities between graphs.
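For reference, a compact rendition of the 1-dimensional Weisfeiler-Lehman subtree kernel: iteratively relabel each node by combining its own label with the multiset of its neighbors' labels, then compare label-count histograms. This simplification is ours; real implementations replace hash() with an injective relabeling dictionary:

```python
from collections import Counter

def wl_features(adj, labels, h=3):
    """adj: {node: [neighbors]}; labels: {node: initial label}."""
    feats = Counter(labels.values())
    for _ in range(h):
        # New label = hash of (own label, sorted multiset of neighbor labels).
        labels = {v: hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
                  for v in adj}
        feats.update(labels.values())
    return feats

def wl_kernel(g1, g2, h=3):
    """Dot product of the two graphs' WL label-count histograms."""
    f1, f2 = wl_features(*g1, h), wl_features(*g2, h)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# Two identical labeled triangles have maximal similarity.
tri = ({0: [1, 2], 1: [0, 2], 2: [0, 1]}, {0: 'a', 1: 'a', 2: 'a'})
print(wl_kernel(tri, tri))
```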
arXiv Detail & Related papers (2022-04-12T08:52:33Z)
- Synthetic Graph Generation to Benchmark Graph Learning [7.914804101579097]
Graph learning algorithms have attained state-of-the-art performance on many graph analysis tasks.
One reason is the very small number of datasets used in practice to benchmark the performance of graph learning algorithms.
We propose to generate synthetic graphs, and study the behaviour of graph learning algorithms in a controlled scenario.
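One widely used controlled generator for this purpose is the stochastic block model, which exposes community strength as an explicit knob; a sketch with networkx (our example, not necessarily the generator used in the paper):

```python
import networkx as nx

# Three communities; dense within blocks, sparse across them.
sizes = [40, 40, 40]
p_in, p_out = 0.3, 0.02
probs = [[p_in if i == j else p_out for j in range(3)] for i in range(3)]
G = nx.stochastic_block_model(sizes, probs, seed=42)

# Sweeping p_out toward p_in weakens the community structure, letting one
# measure how gracefully a graph learning algorithm degrades.
```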
arXiv Detail & Related papers (2022-04-04T10:48:32Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCNs) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervised learning is surging, with graph learning as its new frontier.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- Graph Coarsening with Neural Networks [8.407217618651536]
We propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, the Laplace operator on the coarse graph must be chosen carefully.
Motivated by the observation that the current choice of edge weight for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way.
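One way to quantify coarsening quality in the spirit of this work is to compare the low end of the Laplacian spectrum before and after coarsening; which coarse operator to use is exactly the design choice at stake. A hedged sketch using the plain projected Laplacian $P^\top L P$ (our illustration, not the paper's measure):

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def spectral_error(A, P, k=5):
    """Relative error between the k smallest Laplacian eigenvalues of the
    original graph and of the coarse graph with Lc = P^T L P.

    Note: the choice of coarse operator (combinatorial, doubly-weighted,
    normalized) changes the answer, which is the paper's point.
    """
    L = laplacian(A)
    lam = np.sort(np.linalg.eigvalsh(L))[:k]
    Lc = P.T @ L @ P
    lam_c = np.sort(np.linalg.eigvalsh(Lc))[:min(k, Lc.shape[0])]
    m = len(lam_c)
    return np.abs(lam[:m] - lam_c) / np.maximum(lam[:m], 1e-12)

# 4-node path graph, node pairs merged into 2 supernodes.
A = np.diag([1.0, 1.0, 1.0], 1)
A = A + A.T
P = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(spectral_error(A, P, k=2))
```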
arXiv Detail & Related papers (2021-02-02T06:50:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.