Promise and Limitations of Supervised Optimal Transport-Based Graph
Summarization via Information Theoretic Measures
- URL: http://arxiv.org/abs/2305.07138v1
- Date: Thu, 11 May 2023 21:03:34 GMT
- Title: Promise and Limitations of Supervised Optimal Transport-Based Graph
Summarization via Information Theoretic Measures
- Authors: Sepideh Neshatfar, Abram Magner, Salimeh Yasaei Sekeh
- Abstract summary: Graph summarization is the problem of producing smaller graph representations of an input graph dataset.
We consider the problem of supervised graph summarization, in which we seek to preserve information relevant to a class label using information-theoretic measures.
We propose a summarization method that incorporates mutual information estimates between random variables associated with sample graphs and class labels into the optimal transport compression framework.
- Score: 3.179831861897336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph summarization is the problem of producing smaller graph representations
of an input graph dataset, in such a way that the smaller compressed graphs
capture relevant structural information for downstream tasks. A recent graph
summarization method formulates an optimal transport-based framework that
allows prior information about node, edge, and attribute importance (a notion
left undefined in that work) to be incorporated into the graph summarization
process.
However, very little is known about the statistical properties of this
framework. To shed light on this, we consider the problem of supervised graph
summarization, in which we seek to preserve information relevant to a class
label, as measured by information-theoretic quantities. To gain a theoretical
perspective on the supervised summarization problem itself, we first formulate
it in terms of maximizing the Shannon mutual information between the summarized
graph and the class label. We show an NP-hardness of approximation result for
this problem, thereby constraining what one should expect from proposed
solutions. We then propose a summarization method that incorporates mutual
information estimates between random variables associated with sample graphs
and class labels into the optimal transport compression framework. We
empirically show performance improvements over previous works in terms of
classification accuracy and time on synthetic and certain real datasets. We
also theoretically explore the limitations of the optimal transport approach
for the supervised summarization problem and we show that it fails to satisfy a
certain desirable information monotonicity property.
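
For concreteness, the supervised summarization objective described in the abstract can be written as a mutual-information maximization. This is a sketch of the formulation only; the symbols below (the map class Phi_k, the map phi, the random graph G, and the label Y) are notational assumptions, not taken from the paper.

```latex
% G: random attributed input graph, Y: its class label, k: target summary size.
% \Phi_k: a class of summarization maps producing graphs with at most k (super)nodes.
\max_{\varphi \in \Phi_k} \; I\big(\varphi(G);\, Y\big)
```

The NP-hardness-of-approximation result mentioned in the abstract applies to this maximization, which bounds how well any efficient summarizer can be expected to do in general.

The proposed method is described as feeding mutual-information estimates into an optimal transport compression framework. The snippet below is a minimal, hypothetical sketch of that general idea, not the authors' algorithm: per-node MI estimates serve as the source marginal of an entropic OT coupling onto k supernodes. The data layout, the use of POT's `ot.sinkhorn`, scikit-learn's `mutual_info_classif`, and the k-means construction of supernode targets are all illustrative assumptions.

```python
import numpy as np
import ot                                    # POT: Python Optimal Transport
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

def mi_weighted_ot_summary(F, y, k, reg=0.05):
    """Hypothetical sketch, not the paper's method.
    F: (num_graphs, n_nodes) node-attribute matrix for graphs sharing a node set.
    y: (num_graphs,) class labels.  k: number of supernodes in the summary."""
    # Estimate how informative each node's attribute is about the class label.
    mi = mutual_info_classif(F, y) + 1e-8
    a = mi / mi.sum()                        # source marginal: MI-based node importance
    b = np.full(k, 1.0 / k)                  # uniform marginal over supernodes
    # Represent each node by its attribute profile across the dataset.
    X = F.T                                  # (n_nodes, num_graphs)
    centers = KMeans(n_clusters=k, n_init=10).fit(X).cluster_centers_
    M = ot.dist(X, centers)                  # transport cost matrix (n_nodes x k)
    T = ot.sinkhorn(a, b, M / M.max(), reg)  # entropic OT transport plan
    P = T / T.sum(axis=1, keepdims=True)     # soft node-to-supernode assignment
    return P.T @ X, P                        # summarized attributes, assignment
```

In such a sketch, nodes whose attributes carry little information about the label receive little transport mass and contribute less to the supernodes; whether a coupling of this kind preserves label-relevant information relates to the information-monotonicity limitation discussed in the abstract.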
Related papers
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed
Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between the graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-efficient, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z) - A Graph Data Augmentation Strategy with Entropy Preserving [11.886325179121226]
We introduce a novel graph entropy definition as a quantitative index to evaluate the feature information contained in a graph.
Under considerations of preserving graph entropy, we propose an effective strategy to generate training data using a perturbed mechanism.
Our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
arXiv Detail & Related papers (2021-07-13T12:58:32Z) - Maximum Entropy Weighted Independent Set Pooling for Graph Neural
Networks [7.398195748292981]
We propose a novel pooling layer for graph neural networks based on maximizing the mutual information between the pooled graph and the input graph.
We show that the input graph to the pooling layer can be viewed as a representation of a noisy communication channel.
We show that reaching the maximum mutual information is equivalent to finding a maximum weight independent set of the graph.
arXiv Detail & Related papers (2021-07-03T11:19:28Z) - Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method yields representative graphs with a clear cluster structure that can be used to solve clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Wasserstein-based Graph Alignment [56.84964475441094]
We cast a new formulation for the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.