Structural Landmarking and Interaction Modelling: on Resolution Dilemmas
in Graph Classification
- URL: http://arxiv.org/abs/2006.15763v1
- Date: Mon, 29 Jun 2020 01:01:42 GMT
- Title: Structural Landmarking and Interaction Modelling: on Resolution Dilemmas
in Graph Classification
- Authors: Kai Zhang, Yaokang Zhu, Jun Wang, Jie Zhang, Hongyuan Zha
- Abstract summary: We study the intrinsic difficulty in graph classification under the unified concept of ``resolution dilemmas''.
We propose ``SLIM'', an inductive neural network model for Structural Landmarking and Interaction Modelling.
- Score: 50.83222170524406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks are a promising architecture for learning and inference
with graph-structured data. Yet difficulties in modelling the ``parts'' and
their ``interactions'' still persist in graph classification, where
graph-level representations are usually obtained by squeezing the whole graph
into a single vector through graph pooling. From a complex-systems point of view,
mixing all the parts of a system together can hurt both model
interpretability and predictive performance, because the properties of a complex
system arise largely from the interactions among its components. We analyze the
intrinsic difficulty of graph classification under the unified concept of
``resolution dilemmas'', with learning-theoretic recovery guarantees, and
propose ``SLIM'', an inductive neural network model for Structural Landmarking
and Interaction Modelling. It turns out that, by resolving the resolution
dilemmas and leveraging the explicit interaction relations between the component
parts of a graph to explain its complexity, SLIM is more interpretable and
accurate, and offers new insight into graph representation learning.
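The pooling bottleneck the abstract criticizes can be made concrete with a minimal sketch (this is an illustration of the problem, not the SLIM model; the graphs, features, and two-layer propagation are toy choices): two non-isomorphic graphs, two disjoint triangles versus a 6-cycle, receive identical mean-pooled embeddings under a simple GCN-style readout, because pooling discards exactly the part/interaction structure that distinguishes them.

```python
import numpy as np

# Toy illustration: mean graph pooling can collapse structurally
# different graphs to the same graph-level vector.

def two_triangles():
    # two disjoint triangles on 6 nodes
    A = np.zeros((6, 6))
    for tri in [(0, 1, 2), (3, 4, 5)]:
        for i in tri:
            for j in tri:
                if i != j:
                    A[i, j] = 1.0
    return A

def cycle6():
    # a single 6-cycle; 2-regular like the triangle pair
    A = np.zeros((6, 6))
    for i in range(6):
        A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
    return A

def gcn_mean_pool(A, layers=2, seed=0):
    # row-normalized propagation with shared random weights, then mean pooling
    rng = np.random.default_rng(seed)
    H = np.ones((A.shape[0], 4))
    A_hat = A / A.sum(1, keepdims=True)
    for _ in range(layers):
        W = rng.standard_normal((H.shape[1], H.shape[1]))
        H = np.maximum(A_hat @ H @ W, 0)  # ReLU
    return H.mean(0)  # graph-level readout

A1, A2 = two_triangles(), cycle6()
z1, z2 = gcn_mean_pool(A1), gcn_mean_pool(A2)
print(np.allclose(z1, z2))          # True: identical graph embeddings
print(np.trace(A1 @ A1 @ A1) / 6)   # 2.0 triangles
print(np.trace(A2 @ A2 @ A2) / 6)   # 0.0 triangles
```

The two graphs differ in their substructures (two triangles versus none), yet the pooled vectors coincide; modelling parts and their interactions explicitly, as SLIM proposes, is one way to avoid this loss of resolution.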
Related papers
- Self-Supervised Graph Neural Networks for Enhanced Feature Extraction in Heterogeneous Information Networks [16.12856816023414]
This paper explores the applications and challenges of graph neural networks (GNNs) in processing the complex graph data arising from the rapid development of the Internet.
By introducing a self-supervision mechanism, the approach is expected to improve the adaptability of existing models to the diversity and complexity of graph data.
arXiv Detail & Related papers (2024-10-23T07:14:37Z) - Introducing Diminutive Causal Structure into Graph Representation Learning [19.132025125620274]
We introduce a novel method that enables Graph Neural Networks (GNNs) to glean insights from specialized diminutive causal structures.
Our method specifically extracts causal knowledge from the model representation of these diminutive causal structures.
arXiv Detail & Related papers (2024-06-13T00:18:20Z) - GNNAnatomy: Systematic Generation and Evaluation of Multi-Level Explanations for Graph Neural Networks [20.05098366613674]
We introduce GNNAnatomy, a visual analytics system designed to generate and evaluate multi-level explanations for graph classification tasks.
GNNAnatomy uses graphlets, primitive graph substructures, to identify the most critical substructures in a graph class by analyzing the correlation between GNN predictions and graphlet frequencies.
We demonstrate the effectiveness of GNNAnatomy through case studies on synthetic and real-world graph datasets from sociology and biology domains.
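The correlation step described above can be sketched in a few lines (a hedged illustration, not GNNAnatomy's implementation; the graphlet counts and GNN probabilities below are synthetic placeholders): given per-graph graphlet frequencies and the model's class probabilities, rank graphlet types by the strength of their correlation with the predictions.

```python
import numpy as np

# Sketch of correlating graphlet frequencies with GNN predictions to
# find the most critical substructure. All data here is synthetic.
rng = np.random.default_rng(42)
n_graphs = 50
# Hypothetical frequencies of 3 graphlet types per graph
# (e.g. triangle, 3-path, 4-clique), normalized to [0, 1].
freqs = rng.random((n_graphs, 3))
# Hypothetical GNN class probabilities, constructed to depend on graphlet 0.
probs = 0.8 * freqs[:, 0] + 0.2 * rng.random(n_graphs)

def pearson(x, y):
    # Pearson correlation coefficient
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

scores = [pearson(freqs[:, g], probs) for g in range(3)]
most_critical = int(np.argmax(np.abs(scores)))
print(most_critical)  # graphlet 0 tracks the predictions most strongly
```

In this synthetic setup the predictions are built from graphlet 0, so its frequency has by far the strongest correlation; on real data the same ranking would point to candidate explanatory substructures.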
arXiv Detail & Related papers (2024-06-06T23:09:54Z) - GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first it models the distribution of features associated with the nodes of the given graph, in the second it complements the edge features conditionally on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z) - Neural Graphical Models [2.6842860806280058]
We introduce Neural Graphical Models (NGMs) to represent complex feature dependencies with reasonable computational costs.
We capture the dependency structure between the features along with their complex function representations by using a neural network as a multi-task learning framework.
NGMs can fit generic graph structures including directed, undirected and mixed-edge graphs as well as support mixed input data types.
arXiv Detail & Related papers (2022-10-02T07:59:51Z) - Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
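The unroll-and-truncate idea behind GDN can be shown on a simpler problem (a generic unrolled ISTA sketch, not the GDN architecture; GDN applies the same unrolling to a graph deconvolution objective): a fixed number K of proximal gradient iterations becomes K "layers", each combining a gradient step with a soft-threshold proximal step whose step size and threshold could be learned.

```python
import numpy as np

# Generic unrolled/truncated proximal gradient (ISTA) sketch on a
# sparse recovery problem y = D @ x with an l1 prior. Each of the K
# iterations plays the role of one network layer.

def soft_threshold(v, lam):
    # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_ista(D, y, K=300, lam=0.05):
    # step size from the Lipschitz constant of the gradient
    eta = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(K):                  # K unrolled layers
        grad = D.T @ (D @ x - y)        # gradient of 0.5*||Dx - y||^2
        x = soft_threshold(x - eta * grad, lam * eta)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = unrolled_ista(D, y)
print(np.argsort(-np.abs(x_hat))[:3])  # largest entries sit on the true support
```

Truncating to a fixed K and making `eta` and `lam` trainable per layer is what turns the iterative solver into a parameterized network, which is the construction GDN uses for graph deconvolution.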
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - SUGAR: Subgraph Neural Network with Reinforcement Pooling and
Self-Supervised Mutual Information Mechanism [33.135006052347194]
This paper presents a novel hierarchical subgraph-level selection and embedding based graph neural network for graph classification, namely SUGAR.
SUGAR reconstructs a sketched graph by extracting striking subgraphs as the representative part of the original graph to reveal subgraph-level patterns.
To differentiate subgraph representations among graphs, we present a self-supervised mutual information mechanism to encourage subgraph embedding.
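The subgraph-selection step can be sketched with a simple stand-in heuristic (illustrative only, and not SUGAR's reinforcement-learning-based selection; picking ego subgraphs of the highest-degree nodes is an assumed simplification): keep a few salient subgraphs as the "representative part" of the graph.

```python
import numpy as np

# Hedged sketch of subgraph extraction: take the k highest-degree
# nodes and keep their 1-hop ego subgraphs. SUGAR instead learns
# which subgraphs to keep via reinforcement pooling.

def ego_subgraphs(A, k=2):
    deg = A.sum(1)
    centers = np.argsort(-deg)[:k]  # top-k nodes by degree
    subs = []
    for c in centers:
        nodes = sorted(np.flatnonzero(A[c]).tolist() + [int(c)])
        subs.append((int(c), nodes, A[np.ix_(nodes, nodes)]))
    return subs

# star of 4 leaves around node 0, plus a separate edge 5-6
A = np.zeros((7, 7))
for leaf in (1, 2, 3, 4):
    A[0, leaf] = A[leaf, 0] = 1.0
A[5, 6] = A[6, 5] = 1.0

subs = ego_subgraphs(A, k=1)
print(subs[0][1])  # [0, 1, 2, 3, 4]: the star is the extracted subgraph
```

Each extracted subgraph would then be embedded and the embeddings compared across graphs, which is where SUGAR's self-supervised mutual information mechanism comes in.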
arXiv Detail & Related papers (2021-01-20T15:06:16Z) - GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.