Interpretable Sparsification of Brain Graphs: Better Practices and
Effective Designs for Graph Neural Networks
- URL: http://arxiv.org/abs/2306.14375v1
- Date: Mon, 26 Jun 2023 01:37:10 GMT
- Title: Interpretable Sparsification of Brain Graphs: Better Practices and
Effective Designs for Graph Neural Networks
- Authors: Gaotang Li, Marlena Duda, Xiang Zhang, Danai Koutra, Yujun Yan
- Abstract summary: Dense brain graphs pose computational challenges, including high runtime, high memory usage, and limited interpretability.
We propose a new model, Interpretable Graph Sparsification (IGS), which enhances graph classification performance by up to 5.1% with 55.0% fewer edges.
- Score: 15.101250958437038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Brain graphs, which model the structural and functional relationships between
brain regions, are crucial in neuroscientific and clinical applications
involving graph classification. However, dense brain graphs pose computational
challenges, including high runtime, high memory usage, and limited
interpretability. In this paper, we investigate effective designs in Graph
Neural Networks (GNNs) to sparsify brain graphs by eliminating noisy edges.
While prior works remove noisy edges based on explainability or task-irrelevant
properties, their effectiveness in enhancing performance with sparsified graphs
is not guaranteed. Moreover, existing approaches often overlook collective edge
removal across multiple graphs.
To address these issues, we introduce an iterative framework to analyze
different sparsification models. Our findings are as follows: (i) methods
prioritizing interpretability may not be suitable for graph sparsification as
they can degrade GNNs' performance in graph classification tasks; (ii)
simultaneously learning edge selection with GNN training is more beneficial
than post-training; (iii) a shared edge selection across graphs outperforms
separate selection for each graph; and (iv) task-relevant gradient information
aids in edge selection. Based on these insights, we propose a new model,
Interpretable Graph Sparsification (IGS), which enhances graph classification
performance by up to 5.1% with 55.0% fewer edges. The retained edges identified
by IGS provide neuroscientific interpretations and are supported by
well-established literature.
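To ground findings (ii)-(iv), the sketch below shows one way a shared, gradient-informed edge mask could be learned jointly with a GNN classifier. This is a minimal PyTorch illustration under stated assumptions, not the authors' IGS implementation; the names SharedMaskGCN and train_and_sparsify and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class SharedMaskGCN(nn.Module):
    """A GCN-style classifier plus a single edge mask shared by all graphs."""
    def __init__(self, n_nodes, n_feats, n_classes, hidden=32):
        super().__init__()
        # One logit per edge slot, shared across every graph in the dataset
        # (finding iii); sigmoid keeps the soft mask in (0, 1).
        self.mask_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.lin1 = nn.Linear(n_feats, hidden)
        self.lin2 = nn.Linear(hidden, n_classes)

    def forward(self, adj, x):
        a = adj * torch.sigmoid(self.mask_logits)  # softly sparsified adjacency
        h = torch.relu(a @ self.lin1(x))           # one graph convolution
        return self.lin2((a @ h).mean(dim=0))      # mean readout -> class logits

def train_and_sparsify(model, graphs, labels, rounds=3, epochs=100, keep=0.45):
    """Jointly train mask + GNN (finding ii), then prune by gradient scores (iv)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(rounds):
        for _ in range(epochs):
            opt.zero_grad()
            loss = sum(loss_fn(model(a, x).unsqueeze(0), y.view(1))
                       for (a, x), y in zip(graphs, labels)) / len(graphs)
            loss.backward()
            opt.step()
        with torch.no_grad():
            # Score each edge by |soft mask * last task gradient|, keep the top
            # fraction, and push pruned logits strongly negative. (A hard,
            # frozen mask would prevent pruned edges from being revived later;
            # this sketch keeps things simple.)
            score = (torch.sigmoid(model.mask_logits) * model.mask_logits.grad).abs()
            k = max(1, int(keep * score.numel()))
            thresh = score.flatten().topk(k).values.min()
            model.mask_logits[score < thresh] = -10.0
```

In a real pipeline the single-layer readout would be replaced by the backbone GNN of choice; the essential design choices illustrated here are only that one mask is shared across all graphs, learned during training, and pruned using task-gradient information.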
Related papers
- Community-Centric Graph Unlearning [10.906555492206959]
We propose a novel Graph Structure Mapping Unlearning paradigm (GSMU) and a method based on it, named Community-centric Graph Eraser (CGE).
CGE maps community subgraphs to nodes, thereby enabling the reconstruction of a node-level unlearning operation within a reduced mapped graph.
arXiv Detail & Related papers (2024-08-19T05:37:35Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Hypergraph-enhanced Dual Semi-supervised Graph Classification [14.339207883093204]
We propose a Hypergraph-Enhanced DuAL framework named HEAL for semi-supervised graph classification.
To better explore the higher-order relationships among nodes, we design a hypergraph structure learning module to adaptively learn complex node dependencies.
Based on the learned hypergraph, we introduce a line graph to capture the interaction between hyperedges.
arXiv Detail & Related papers (2024-05-08T02:44:13Z)
- Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z)
- Balanced Graph Structure Information for Brain Disease Detection [6.799894169098717]
We propose Bargrain, which models two graph structures, a filtered correlation matrix and an optimal sample graph, using graph convolutional networks (GCNs).
Based on our extensive experiment, Bargrain outperforms state-of-the-art methods in classification tasks on brain disease datasets, as measured by average F1 scores.
arXiv Detail & Related papers (2023-12-30T06:50:52Z)
- Self-supervision meets kernel graph neural models: From architecture to augmentations [36.388069423383286]
We improve the design and learning of kernel graph neural networks (KGNNs).
We develop a novel structure-preserving graph data augmentation method called latent graph augmentation (LGA).
Our proposed model achieves performance comparable to, and sometimes better than, state-of-the-art graph representation learning frameworks.
arXiv Detail & Related papers (2023-10-17T14:04:22Z)
- An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments over 13 datasets, and we observe that GRAPHRETRIEVAL achieves substantial improvements over existing GNNs.
arXiv Detail & Related papers (2022-06-01T09:59:09Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (a minimal sketch of such an agreement loss appears after this list).
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
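As referenced in the entry on Towards Unsupervised Deep Graph Structure Learning above, here is a minimal sketch of an anchor-graph agreement loss. The InfoNCE form, the temperature value, and the encoder interface are assumptions for illustration, not that paper's released code.

```python
import torch
import torch.nn.functional as F

def contrastive_agreement(z_anchor: torch.Tensor,
                          z_learned: torch.Tensor,
                          temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss pulling each node's embedding from the learned
    graph toward its counterpart from the anchor graph."""
    z1 = F.normalize(z_anchor, dim=1)    # (n_nodes, d)
    z2 = F.normalize(z_learned, dim=1)   # (n_nodes, d)
    logits = z1 @ z2.t() / temperature   # pairwise node similarities
    targets = torch.arange(z1.size(0))   # node i's positive is node i
    return F.cross_entropy(logits, targets)

# Usage with any node encoder producing embeddings from the two graph views:
# loss = contrastive_agreement(encoder(anchor_adj, x), encoder(learned_adj, x))
```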
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.