A Unified Framework for Optimization-Based Graph Coarsening
- URL: http://arxiv.org/abs/2210.00437v1
- Date: Sun, 2 Oct 2022 06:31:42 GMT
- Title: A Unified Framework for Optimization-Based Graph Coarsening
- Authors: Manoj Kumar, Anurag Sharma, Sandeep Kumar
- Abstract summary: Given a large graph, graph coarsening aims to learn a smaller, tractable graph while preserving the properties of the original graph.
The proposed framework unifies graph learning and dimensionality reduction.
It is established that the learned coarsened graph is $\epsilon \in (0,1)$ similar to the original graph.
- Score: 5.720402020129441
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph coarsening is a widely used dimensionality reduction technique for
approaching large-scale graph machine learning problems. Given a large graph,
graph coarsening aims to learn a smaller, tractable graph while preserving the
properties of the original graph. Graph data consist of node features and a
graph matrix (e.g., the adjacency or Laplacian matrix). Existing graph
coarsening methods ignore the node features and rely solely on the graph matrix
to simplify graphs. In this paper, we introduce a novel optimization-based framework for
graphs. In this paper, we introduce a novel optimization-based framework for
graph dimensionality reduction. The proposed framework unifies graph learning
and dimensionality reduction. It takes both the graph matrix
and the node features as input, and jointly learns the coarsened graph matrix
and the coarsened feature matrix while ensuring the desired properties. The proposed
optimization formulation is a multi-block non-convex optimization problem,
which is solved efficiently by leveraging block majorization-minimization,
$\log$ determinant, Dirichlet energy, and regularization frameworks. The
proposed algorithms are provably convergent and practically amenable to
numerous tasks. It is also established that the learned coarsened graph is
$\epsilon\in(0,1)$ similar to the original graph. Extensive experiments
elucidate the efficacy of the proposed framework for real-world applications.
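As a rough illustration of the quantities in the abstract, the sketch below builds a toy graph, coarsens it with a fixed loading matrix C (which the paper would instead learn by optimization), and evaluates the Dirichlet energy of the coarsened features on the coarsened Laplacian. The matrices A, X, and C are invented for illustration only.

```python
import numpy as np

# Toy graph: 4-node path graph, Laplacian L and node features X.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
X = np.array([[1.0, 0.0],
              [1.1, 0.1],
              [3.0, 2.0],
              [3.1, 2.1]])

# Hypothetical loading matrix C mapping the 4 original nodes onto 2
# supernodes (nodes {0,1} -> supernode 0, nodes {2,3} -> supernode 1).
C = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)

# Coarsened graph matrix and coarsened feature matrix (a Galerkin-style
# projection; the framework learns C jointly instead of fixing it).
L_c = C.T @ L @ C
P = C / C.sum(axis=0)          # column-normalised map: averages features
X_c = P.T @ X

# Dirichlet energy tr(X^T L X) measures feature smoothness on the graph;
# the framework's objective penalises it on the coarsened pair (L_c, X_c).
dirichlet = np.trace(X_c.T @ L_c @ X_c)
print(L_c)
print(X_c)
print(dirichlet)
```

Note that L_c is again a valid Laplacian (symmetric, rows summing to zero), which is the kind of structural property the optimization formulation enforces.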
Related papers
- The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph
Structure [18.00833762891405]
Graph Lottery Ticket (GLT) Hypothesis: There is an extremely sparse backbone for every graph.
We study 8 key metrics of interest that directly influence the performance of graph learning algorithms.
We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs.
arXiv Detail & Related papers (2023-12-08T00:24:44Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed
Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Hyperparameter-free and Explainable Whole Graph Embedding [16.03671347701557]
Graph representation learning attempts to learn a lower-dimensional representation vector for each node or the whole graph.
This paper proposes a new whole graph embedding method, combining the DHC (Degree, H-index and Coreness) theorem and Shannon entropy (E).
The proposed approach has a good performance in lower-dimensional graph visualization.
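The DHC theorem the entry above builds on states that repeatedly applying the H-index operator to node degrees converges to each node's coreness. A minimal sketch (the graph and helper names below are illustrative only):

```python
def h_index(values):
    # Largest h such that at least h of the values are >= h.
    vals = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(vals, start=1):
        if v >= i:
            h = i
    return h

def dhc_sequence(adj):
    # adj: dict node -> list of neighbours. Iterates the H operator
    # until the fixed point (the coreness of each node) is reached.
    h = {u: len(nbrs) for u, nbrs in adj.items()}   # zeroth order: degree
    while True:
        nxt = {u: h_index([h[v] for v in adj[u]]) for u in adj}
        if nxt == h:
            return nxt
        h = nxt

# Toy graph: a triangle {0,1,2} with a pendant node 3 attached to node 0.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
core = dhc_sequence(adj)
print(core)   # triangle nodes have coreness 2, the pendant node 1
```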
arXiv Detail & Related papers (2021-08-04T15:30:52Z)
- Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement
Learning [0.0]
A system of linear equations defines a graph on the set of unknowns.
Each level of a multigrid solver requires the selection of an appropriate coarse graph, along with restriction and interpolation operators that map to and from the coarse representation.
It has been demonstrated that it is possible to directly learn the AMG interpolation and restriction operators, given a coarse graph selection.
We propose a sparse method using a reinforcement learning agent based on graph neural networks (GNNs)
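The observation that a linear system defines a graph on the unknowns, and the Galerkin construction of a coarse operator, can be sketched as follows. The interpolation matrix P here is a fixed, textbook-style choice made up for illustration; the paper above learns such operators with an RL agent instead.

```python
import numpy as np

# The nonzero pattern of a symmetric system matrix A defines a graph on
# the unknowns: i and j are adjacent iff A[i, j] != 0. AMG coarsening
# then selects a subset of these graph nodes as the coarse grid.
A = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)   # 1-D Poisson-like stencil

n = A.shape[0]
edges = {(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] != 0}
print(edges)  # the induced graph is the path 0-1-2-3

# Coarse grid: keep unknowns {0, 2}; a simple interpolation P maps coarse
# values back to the fine grid (hypothetical fixed weights).
P = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0],
              [0.0, 0.5]])
A_c = P.T @ A @ P   # Galerkin coarse operator, with restriction = P^T
print(A_c)
```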
arXiv Detail & Related papers (2021-06-03T13:57:32Z)
- Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z)
- Graph Coarsening with Neural Networks [8.407217618651536]
We propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, the Laplace operator on the coarse graph must be chosen carefully.
Motivated by the observation that the current choice of edge weight for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way.
arXiv Detail & Related papers (2021-02-02T06:50:07Z)
- Product Graph Learning from Multi-domain Data with Sparsity and Rank
Constraints [17.15829643665034]
We propose an efficient iterative solver for learning sparse product graphs from data.
We extend this solver to infer multi-component graph factors with applications to product graph clustering.
The efficacy of the developed framework is demonstrated using several numerical experiments on synthetic data and real data.
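For context on the product-graph structure assumed above: the Laplacian of the Cartesian product of two factor graphs is the Kronecker sum of the factor Laplacians, L = kron(L1, I) + kron(I, L2). The sketch below illustrates only this forward construction with fixed toy factors; the paper solves the inverse problem of learning sparse factors from data.

```python
import numpy as np

def laplacian(A):
    # Combinatorial Laplacian L = D - A of an adjacency matrix A.
    return np.diag(A.sum(axis=1)) - A

A1 = np.array([[0, 1], [1, 0]], dtype=float)                # 2-node path
A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
L1, L2 = laplacian(A1), laplacian(A2)

# Kronecker-sum Laplacian of the 6-node Cartesian product graph.
L = np.kron(L1, np.eye(3)) + np.kron(np.eye(2), L2)
print(L)
```

A useful consequence is that the product graph's Laplacian eigenvalues are all pairwise sums of the factors' eigenvalues, which is what makes the factored representation computationally attractive.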
arXiv Detail & Related papers (2020-12-15T04:59:32Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method yields meaningful clusters.
We learn a representative graph that is well suited to solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Graph Pooling with Node Proximity for Hierarchical Representation
Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Wasserstein-based Graph Alignment [56.84964475441094]
We cast a new formulation for the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.