Maximum Entropy Weighted Independent Set Pooling for Graph Neural
Networks
- URL: http://arxiv.org/abs/2107.01410v1
- Date: Sat, 3 Jul 2021 11:19:28 GMT
- Title: Maximum Entropy Weighted Independent Set Pooling for Graph Neural
Networks
- Authors: Amirhossein Nouranizadeh, Mohammadjavad Matinkia, Mohammad Rahmati,
Reza Safabakhsh
- Abstract summary: We propose a novel pooling layer for graph neural networks based on maximizing the mutual information between the pooled graph and the input graph.
We show that the input graph to the pooling layer can be viewed as a representation of a noisy communication channel.
We show that reaching the maximum mutual information is equivalent to finding a maximum weight independent set of the graph.
- Score: 7.398195748292981
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel pooling layer for graph neural networks
based on maximizing the mutual information between the pooled graph and the
input graph. Since the maximum mutual information is difficult to compute, we
employ the Shannon capacity of a graph as an inductive bias to our pooling
method. More precisely, we show that the input graph to the pooling layer can
be viewed as a representation of a noisy communication channel. For such a
channel, sending the symbols belonging to an independent set of the graph
yields a reliable and error-free transmission of information. We show that
reaching the maximum mutual information is equivalent to finding a maximum
weight independent set of the graph where the weights convey entropy contents.
Through this communication theoretic standpoint, we provide a distinct
perspective for posing the problem of graph pooling as maximizing the
information transmission rate across a noisy communication channel, implemented
by a graph neural network. We evaluate our method, referred to as Maximum
Entropy Weighted Independent Set Pooling (MEWISPool), on graph classification
tasks and the combinatorial optimization problem of the maximum independent
set. Empirical results demonstrate that our method achieves state-of-the-art
or competitive results on graph classification tasks and the maximum
independent set problem across several benchmark datasets.
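To make the objective concrete, here is a compact restatement in standard information-theoretic notation (the symbol H_i for the entropy weight of node i is our shorthand, not necessarily the paper's): the inductive bias is the Shannon (zero-error) capacity of the graph, and the pooling step reduces to a maximum weight independent set problem,

    \Theta(G) = \sup_{n \ge 1} \alpha\!\left(G^{\boxtimes n}\right)^{1/n},
    \qquad
    S^{\star} = \operatorname*{arg\,max}_{S \in \mathcal{I}(G)} \sum_{i \in S} H_i,

where \alpha(\cdot) is the independence number, G^{\boxtimes n} is the n-fold strong product of G, and \mathcal{I}(G) is the family of independent sets of G.

The sketch below illustrates the mechanism in plain Python/NumPy. It is not the authors' implementation: the entropy weighting via a softmax over node features, the greedy heuristic standing in for an exact maximum weight independent set solver, and the 2-hop reconnection of surviving nodes are all illustrative assumptions.

    import numpy as np

    def node_entropy(x, eps=1e-12):
        """Entropy-like weight per node: Shannon entropy of the
        softmax-normalized feature vector (an illustrative choice)."""
        p = np.exp(x - x.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        return -(p * np.log(p + eps)).sum(axis=1)

    def greedy_mwis(adj, weights):
        """Greedy stand-in for the maximum weight independent set:
        repeatedly take the heaviest remaining node and discard its
        neighbors. The paper targets the exact MWIS objective."""
        n = len(weights)
        remaining = set(range(n))
        chosen = []
        for v in sorted(range(n), key=lambda v: -weights[v]):
            if v in remaining:
                chosen.append(v)
                remaining -= {v} | {u for u in range(n) if adj[v, u]}
        return sorted(chosen)

    def mewis_style_pool(x, adj):
        """Keep the entropy-weighted independent set. The survivors are
        pairwise non-adjacent, so reconnect two of them whenever they
        share a neighbor in the input graph (a 2-hop assumption)."""
        keep = greedy_mwis(adj, node_entropy(x))
        two_hop = (adj.astype(int) @ adj.astype(int)) > 0
        pooled_adj = two_hop[np.ix_(keep, keep)]
        np.fill_diagonal(pooled_adj, False)
        return x[keep], pooled_adj

    # Toy usage: a 4-cycle keeps two opposite nodes, which are then
    # reconnected because they share neighbors in the input graph.
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=bool)
    x = np.random.default_rng(0).normal(size=(4, 8))
    x_pooled, adj_pooled = mewis_style_pool(x, adj)

Independence of the kept set guarantees that no two surviving nodes were adjacent, which is exactly the error-free-transmission condition of the channel view above.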
Related papers
- Promise and Limitations of Supervised Optimal Transport-Based Graph
Summarization via Information Theoretic Measures [3.179831861897336]
Graph summarization is the problem of producing smaller graph representations of an input graph dataset.
We consider the problem of supervised graph summarization, wherein we use information-theoretic measures to preserve relevant information about a class label.
We propose a summarization method that incorporates mutual information estimates between random variables associated with sample graphs and class labels into the optimal transport compression framework.
arXiv Detail & Related papers (2023-05-11T21:03:34Z)
- Joint Network Topology Inference via a Shared Graphon Model [24.077455621015552]
We consider the problem of estimating the topology of multiple networks from nodal observations.
We adopt a graphon as our random graph model, which is a nonparametric model from which graphs of potentially different sizes can be drawn.
arXiv Detail & Related papers (2022-09-17T02:38:58Z)
- Graph Condensation via Receptive Field Distribution Matching [61.71711656856704]
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Distributed Graph Learning with Smooth Data Priors [61.405131495287755]
We propose a novel distributed graph learning algorithm, which makes it possible to infer a graph from signal observations on the nodes.
Our results show that the distributed approach has a lower communication cost than a centralised algorithm without compromising the accuracy in the inferred graph.
arXiv Detail & Related papers (2021-12-11T00:52:02Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Graph Pooling via Coarsened Graph Infomax [9.045707667111873]
We propose Coarsened GraphPool Infomaxing (CGI) to maximize the mutual information between the input and the coarsened graph of each pooling layer.
To achieve mutual information neural estimation, we apply contrastive learning and propose a self-attention-based algorithm for learning positive and negative samples (a minimal contrastive sketch appears after this list).
arXiv Detail & Related papers (2021-05-04T03:50:21Z)
- Accurate Learning of Graph Representations with Graph Multiset Pooling [45.72542969364438]
We propose a Graph Multiset Transformer (GMT) that captures the interaction between nodes according to their structural dependencies.
Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks.
arXiv Detail & Related papers (2021-02-23T07:45:58Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use a contrastive loss as a data fidelity term in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method yields a representative graph with a clear cluster structure, which can be used for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
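The Coarsened Graph Infomax entry above estimates the mutual information between the input graph and its coarsened version contrastively, whereas MEWISPool sidesteps that estimation via the Shannon capacity bias. As a minimal, generic sketch of the contrastive estimation step (an InfoNCE-style bound, a standard technique rather than either paper's exact algorithm):

    import numpy as np

    def infonce_loss(z_graph, z_pooled, temperature=0.5):
        """InfoNCE lower bound on the mutual information between
        input-graph and pooled-graph embeddings. Row i of each matrix
        embeds one graph; matched rows are positive pairs and every
        other pairing is a negative."""
        a = z_graph / np.linalg.norm(z_graph, axis=1, keepdims=True)
        b = z_pooled / np.linalg.norm(z_pooled, axis=1, keepdims=True)
        logits = (a @ b.T) / temperature             # cosine similarities
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))          # pull matched pairs together

Minimizing this loss over a batch of (input, pooled) embedding pairs maximizes a lower bound on their mutual information, which is the quantity such infomax pooling methods aim to preserve.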