Graph Neural Networks Including Sparse Interpretability
- URL: http://arxiv.org/abs/2007.00119v1
- Date: Tue, 30 Jun 2020 21:35:55 GMT
- Title: Graph Neural Networks Including Sparse Interpretability
- Authors: Chris Lin, Gerald J. Sun, Krishna C. Bulusu, Jonathan R. Dry and
Marylens Hernandez
- Abstract summary: We present a model-agnostic framework for interpreting important graph structure and node features.
Our GISST models achieve superior node feature and edge explanation precision in synthetic datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are versatile, powerful machine learning methods
that enable graph structure and feature representation learning, and have
applications across many domains. For applications critically requiring
interpretation, attention-based GNNs have been leveraged. However, these
approaches either rely on specific model architectures or lack a joint
consideration of graph structure and node features in their interpretation.
Here we present a model-agnostic framework for interpreting important graph
structure and node features, Graph neural networks Including SparSe
inTerpretability (GISST). With any GNN model, GISST combines an attention
mechanism and sparsity regularization to yield an important subgraph and node
feature subset related to any graph-based task. Through a single self-attention
layer, a GISST model learns an importance probability for each node feature and
edge in the input graph. By including these importance probabilities in the
model loss function, the probabilities are optimized end-to-end and tied to the
task-specific performance. Furthermore, GISST sparsifies these importance
probabilities with entropy and L1 regularization to reduce noise in the input
graph topology and node features. Our GISST models achieve superior node
feature and edge explanation precision in synthetic datasets, as compared to
alternative interpretation approaches. Moreover, our GISST models are able to
identify important graph structure in real-world datasets. We demonstrate in
theory that edge feature importance and multiple edge types can be considered
by incorporating them into the GISST edge probability computation. By jointly
accounting for topology, node features, and edge features, GISST inherently
provides simple and relevant interpretations for any GNN model and task.
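The mechanism described in the abstract lends itself to a compact sketch. Below is a minimal PyTorch rendering of the idea, assuming a toy dense-adjacency GCN as the underlying GNN; the names (`GISSTSketch`, `gisst_loss`, `l_ent`, `l_l1`) and the exact regularizer weighting are illustrative, not the authors' released implementation.

```python
# Minimal sketch of the GISST idea: learn importance probabilities for node
# features and edges, soft-mask the inputs of an arbitrary GNN with them, and
# regularize the probabilities with entropy + L1 terms added to the task loss.
import torch
import torch.nn as nn

def bernoulli_entropy(p, eps=1e-8):
    """Elementwise entropy of Bernoulli(p); pushes probabilities toward 0/1."""
    return -(p * (p + eps).log() + (1 - p) * (1 - p + eps).log())

class GISSTSketch(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        # One logit per input feature -> feature importance probabilities.
        self.feat_logits = nn.Parameter(torch.zeros(in_dim))
        # A single self-attention-style scorer over endpoint features
        # -> edge importance probabilities.
        self.edge_scorer = nn.Linear(2 * in_dim, 1)
        # Any GNN could sit here; a two-layer dense GCN keeps the sketch small.
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):                    # x: (n, F), adj: (n, n)
        p_feat = torch.sigmoid(self.feat_logits)  # (F,) feature probabilities
        x = x * p_feat                            # soft feature mask
        n = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                           x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        # Edge probability for each existing edge (zero elsewhere).
        p_edge = torch.sigmoid(self.edge_scorer(pairs)).squeeze(-1) * adj
        # Message passing over the soft-masked adjacency.
        h = torch.relu(self.w1(p_edge @ x))
        return self.w2(p_edge @ h), p_feat, p_edge

def gisst_loss(task_loss, p_feat, p_edge, adj, l_ent=0.1, l_l1=0.01):
    """Task loss plus entropy and L1 sparsity penalties on the probabilities."""
    edge_p = p_edge[adj > 0]
    reg = (l_ent * (bernoulli_entropy(p_feat).mean()
                    + bernoulli_entropy(edge_p).mean())
           + l_l1 * (p_feat.mean() + edge_p.mean()))
    return task_loss + reg
```

Training minimizes `gisst_loss` end-to-end, so the learned probabilities stay tied to task performance; thresholding `p_feat` and `p_edge` afterwards yields the explanatory feature subset and subgraph.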
Related papers
- TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning [7.879217146851148]
We propose an innovative Graph Neural Network (GNN) architecture that integrates a Top-m attention aggregation component with a neighborhood aggregation component (one plausible reading of the Top-m component is sketched after this entry).
To assess the effectiveness of the proposed model, we apply it to citation sentiment prediction, a task previously unexplored in the GNN field.
arXiv Detail & Related papers (2024-11-23T05:31:25Z)
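One plausible reading of the Top-m attention aggregation named in the entry above, in PyTorch. The dot-product scoring, attention taken over all nodes rather than a fixed neighborhood, and the class name `TopMAttention` are assumptions; the paper's exact formulation may differ.

```python
# Sketch of a "Top-m attention" aggregation: score candidate neighbors, keep
# only the m highest-scoring ones per node, and aggregate their features with
# softmax weights over just those m scores.
import torch
import torch.nn as nn

class TopMAttention(nn.Module):
    def __init__(self, dim, m):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.m = m

    def forward(self, x):                                        # x: (n, dim)
        scores = self.q(x) @ self.k(x).t() / x.size(-1) ** 0.5   # (n, n)
        top_vals, top_idx = scores.topk(self.m, dim=-1)          # (n, m)
        weights = torch.softmax(top_vals, dim=-1)                # (n, m)
        neigh = x[top_idx]                                       # (n, m, dim)
        return (weights.unsqueeze(-1) * neigh).sum(dim=1)        # (n, dim)
```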
- Hyperbolic Benchmarking Unveils Network Topology-Feature Relationship in GNN Performance [0.5416466085090772]
We introduce a comprehensive benchmarking framework for graph machine learning.
We generate synthetic networks with realistic topological properties and node feature vectors.
Results highlight the dependency of model performance on the interplay between network structure and node features.
arXiv Detail & Related papers (2024-06-04T20:40:06Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph
Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of the problem setting: structure learning that generalizes across graphs.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - Topological Pooling on Graphs [24.584372324701885]
Graph neural networks (GNNs) have demonstrated significant success in various graph learning tasks.
We propose a novel topological pooling layer and a witness-complex-based topological embedding mechanism.
We show that Wit-TopoPool significantly outperforms all competitors across all datasets.
arXiv Detail & Related papers (2023-03-25T19:30:46Z) - Simplifying approach to Node Classification in Graph Neural Networks [7.057970273958933]
We decouple the node feature aggregation step from the depth of the graph neural network and empirically analyze how different aggregated features contribute to prediction performance.
We show that not all features generated via aggregation steps are useful, and that using these less informative features can be detrimental to the performance of the GNN model.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that it achieves comparable or even higher accuracy than state-of-the-art GNN models (a minimal sketch of the decoupling idea follows this entry).
arXiv Detail & Related papers (2021-11-12T14:53:22Z)
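As referenced in the FSGNN entry above, a minimal sketch of decoupling aggregation from depth: hop-wise features are precomputed once, and a shallow model learns soft selection weights over them. Row-normalized aggregation, softmax selection, and the name `FSGNNSketch` are assumptions, not the paper's exact design.

```python
# Sketch of the FSGNN idea: precompute [X, AX, A^2 X, ...] once, then let a
# shallow model learn which hops are informative via per-hop softmax weights.
import torch
import torch.nn as nn

def hop_features(x, adj, num_hops):
    """Precompute hop-wise aggregated features with a row-normalized adjacency."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    a_norm = adj / deg
    feats, h = [x], x
    for _ in range(num_hops):
        h = a_norm @ h
        feats.append(h)
    return feats

class FSGNNSketch(nn.Module):
    def __init__(self, in_dim, out_dim, num_hops):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_hops + 1)])
        # One learnable weight per hop; softmax turns it into a soft selector.
        self.hop_logits = nn.Parameter(torch.zeros(num_hops + 1))

    def forward(self, feats):
        w = torch.softmax(self.hop_logits, dim=0)
        return sum(w[k] * self.proj[k](f) for k, f in enumerate(feats))
```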
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, showing that our model effectively improves performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Improving Graph Neural Networks with Simple Architecture Design [7.057970273958933]
We introduce several key design strategies for graph neural networks.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN)
We show that the proposed model outperforms other state-of-the-art GNN models, with accuracy improvements of up to 64% on node classification tasks.
arXiv Detail & Related papers (2021-05-17T06:46:01Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks over geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc, local, model-agnostic explanation method designed specifically for GNNs (a generic Shapley-estimation sketch follows this entry).
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
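GraphSVX itself decomposes a prediction via a surrogate explanation model; the sketch below shows only the generic Monte Carlo Shapley estimate that such explainers approximate, applied to a model's input features. The function name, its defaults, and the use of a baseline vector are illustrative.

```python
# Monte Carlo estimate of Shapley values for input features: average each
# feature's marginal contribution to the target-class score over random
# feature orderings.
import torch

@torch.no_grad()
def shapley_mc(model, x, baseline, target_class, num_samples=200):
    f = x.numel()
    values = torch.zeros(f)
    for _ in range(num_samples):
        perm = torch.randperm(f)
        masked = baseline.clone()
        prev = model(masked.unsqueeze(0))[0, target_class]
        for i in perm:
            masked[i] = x[i]           # add feature i to the coalition
            cur = model(masked.unsqueeze(0))[0, target_class]
            values[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    return values / num_samples
```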
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can significantly improve the performance of GNNs, with larger gains on noisier datasets (a minimal sketch of the pruning idea follows this entry).
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
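A minimal differentiable rendering of the pruning idea in the entry above: a small parameterized network scores each edge, and the penalty targets the expected number of surviving edges. The sigmoid relaxation and the simple sum penalty are simplifications of PTDNet's actual formulation.

```python
# Sketch of learned topological denoising: score each existing edge with a
# parameterized network and penalize the expected edge count of the result.
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                    nn.Linear(dim, 1))

    def forward(self, x, adj):                     # x: (n, dim), adj: (n, n)
        n = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                           x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        keep_prob = torch.sigmoid(self.scorer(pairs)).squeeze(-1) * adj
        return keep_prob                           # soft sparsified adjacency

def edge_count_penalty(keep_prob, lam=1e-3):
    """Penalize the expected number of edges kept in the sparsified graph."""
    return lam * keep_prob.sum()
```

A downstream GNN would be trained on `keep_prob` in place of the raw adjacency, with `edge_count_penalty` added to its task loss.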
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology (the equivariance property is checked numerically after this list).
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
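The permutation-equivariance property claimed in the last entry can be checked numerically for a polynomial graph filter y = sum_k h_k S^k x: relabeling the nodes (S -> P S P^T, x -> P x) permutes the output the same way (y -> P y). A self-contained NumPy check:

```python
# Numerical check that a polynomial graph filter is permutation equivariant.
import numpy as np

rng = np.random.default_rng(0)
n, K = 6, 3
S = rng.random((n, n)); S = (S + S.T) / 2   # symmetric graph shift operator
x = rng.random(n)                           # graph signal
h = rng.random(K + 1)                       # filter taps h_0 ... h_K

def graph_filter(S, x, h):
    y, s_k_x = np.zeros_like(x), x.copy()
    for h_k in h:
        y += h_k * s_k_x                    # accumulate h_k * S^k x
        s_k_x = S @ s_k_x
    return y

P = np.eye(n)[rng.permutation(n)]           # random permutation matrix
lhs = graph_filter(P @ S @ P.T, P @ x, h)   # filter on the relabeled graph
rhs = P @ graph_filter(S, x, h)             # relabel the original output
assert np.allclose(lhs, rhs)                # equivariance holds
```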
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.