Graph Pooling with Maximum-Weight $k$-Independent Sets
- URL: http://arxiv.org/abs/2208.03523v1
- Date: Sat, 6 Aug 2022 14:12:47 GMT
- Title: Graph Pooling with Maximum-Weight $k$-Independent Sets
- Authors: Davide Bacciu, Alessio Conte, Francesco Landolfi
- Abstract summary: We introduce a graph coarsening mechanism based on the graph-theoretic concept of maximum-weight $k$-independent sets.
We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs.
- Score: 12.251091325930837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph reductions are fundamental when dealing with large-scale networks and
relational data. They make it possible to downsize computationally demanding tasks by
solving them on coarsened structures. At the same time, graph reductions play
the role of pooling layers in graph neural networks, extracting
multi-resolution representations from structures. In these contexts, the
ability of the reduction mechanism to preserve distance relationships and
topological properties appears fundamental, along with scalability that enables
its application to real-world-sized problems. In this paper, we introduce a
graph coarsening mechanism based on the graph-theoretic concept of
maximum-weight $k$-independent sets, providing a greedy algorithm that allows
efficient parallel implementation on GPUs. Our method is the first
graph-structured counterpart of controllable equispaced coarsening mechanisms
on regular data (images, sequences). We prove theoretical guarantees for
distortion bounds on path lengths, as well as the ability to preserve key
topological properties in the coarsened graphs. We leverage these concepts to
define a graph pooling mechanism that we empirically assess on graph
classification tasks, showing that it compares favorably against pooling
methods in the literature.
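The abstract describes the coarsening mechanism only at a high level, so the following is a minimal sequential Python sketch of one plausible reading: greedily selecting high-weight nodes that are pairwise more than $k$ hops apart (an independent set of the $k$-th power graph) and assigning every remaining node to its nearest selected center. The function names are hypothetical, the selection is sequential rather than the paper's parallel GPU algorithm, and the distance-based definition of a $k$-independent set is an assumption inferred from the abstract.

```python
# Minimal sketch (not the authors' implementation): greedy maximum-weight
# k-independent set selection plus BFS-based cluster assignment for coarsening.
# Graphs are given as a dict {node: set(neighbours)} with a per-node weight dict.
from collections import deque


def k_ball(adj, source, k):
    """Return all nodes within distance <= k of `source` (plain BFS)."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if seen[u] == k:
            continue
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return set(seen)


def greedy_k_independent_set(adj, weights, k):
    """Greedily pick high-weight nodes that are pairwise more than k hops apart."""
    selected, blocked = [], set()
    for u in sorted(adj, key=weights.get, reverse=True):
        if u not in blocked:
            selected.append(u)
            blocked |= k_ball(adj, u, k)  # exclude the whole k-neighbourhood
    return selected


def coarsen(adj, centers):
    """Assign every node to its closest selected center (multi-source BFS)."""
    assignment = {c: c for c in centers}
    queue = deque(centers)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in assignment:
                assignment[v] = assignment[u]
                queue.append(v)
    return assignment


if __name__ == "__main__":
    # Toy 6-cycle: with k=2 the selected centers end up more than 2 hops apart.
    adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    weights = {i: 1.0 for i in range(6)}
    centers = greedy_k_independent_set(adj, weights, k=2)
    print(centers, coarsen(adj, centers))
```

In a pooling layer, each cluster of the returned assignment would be collapsed into a single super-node whose feature is an aggregate (e.g., mean or sum) of its members' features; this is a generic use of the coarsening, not necessarily the exact pooling operator of the paper.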
Related papers
- Graph Size-imbalanced Learning with Energy-guided Structural Smoothing [13.636616140250908]
Real-world graphs usually suffer from the size-imbalance problem in multi-graph classification.
Recent studies find that off-the-shelf Graph Neural Networks (GNNs) compromise model performance under long-tailed settings.
We propose a novel energy-based size-imbalanced learning framework named SIMBA, which smooths the features between head and tail graphs.
arXiv Detail & Related papers (2024-12-23T14:06:49Z) - GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction [6.817416560637197]
Graph autoencoders (GAEs) reconstruct graph structures from node embeddings.
We introduce a cross-correlation mechanism that significantly enhances the GAE representational capabilities.
We also propose GraphCroc, a new GAE that supports flexible encoder architectures tailored for various downstream tasks.
arXiv Detail & Related papers (2024-10-04T12:59:45Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - SynGraphy: Succinct Summarisation of Large Networks via Small Synthetic
Representative Graphs [4.550112751061436]
We describe SynGraphy, a method for visually summarising the structure of large network datasets.
It works by drawing smaller graphs generated to have similar structural properties to the input graphs.
arXiv Detail & Related papers (2023-02-15T16:00:15Z) - Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability; a minimal sketch of the general graph-kernel idea is given after this list.
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z) - Accurate Learning of Graph Representations with Graph Multiset Pooling [45.72542969364438]
We propose a Graph Multiset Transformer (GMT) that captures the interaction between nodes according to their structural dependencies.
Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks.
arXiv Detail & Related papers (2021-02-23T07:45:58Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that the learned representative graph is well suited to solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Graph Pooling with Node Proximity for Hierarchical Representation
Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve hierarchical representation learning of graph data with multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework that represents such non-trivial ERGs through dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.