Information Flow in Graph Neural Networks: A Clinical Triage Use Case
- URL: http://arxiv.org/abs/2309.06081v1
- Date: Tue, 12 Sep 2023 09:18:12 GMT
- Title: Information Flow in Graph Neural Networks: A Clinical Triage Use Case
- Authors: Víctor Valls, Mykhaylo Zayats, Alessandra Pascale
- Abstract summary: Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have gained popularity in healthcare and other
domains due to their ability to process multi-modal and multi-relational
graphs. However, efficient training of GNNs remains challenging, with several
open research questions. In this paper, we investigate how the flow of
embedding information within GNNs affects the prediction of links in Knowledge
Graphs (KGs). Specifically, we propose a mathematical model that decouples the
GNN connectivity from the connectivity of the graph data and evaluate the
performance of GNNs in a clinical triage use case. Our results demonstrate that
incorporating domain knowledge into the GNN connectivity leads to better
performance than using the same connectivity as the KG or allowing
unconstrained embedding propagation. Moreover, we show that negative edges play
a crucial role in achieving good predictions, and that using too many GNN
layers can degrade performance.
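To make the setup concrete, below is a minimal numpy sketch of the two ideas the abstract highlights: the graph used for message passing can be decoupled from the KG whose links are predicted, and negative edges are needed to anchor the link-prediction loss. The GCN-style update, the specific edge choices, and all names are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch: decoupling the GNN message-passing graph from the KG being
# completed. The GCN-style update and edge choices are illustrative, not the
# paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                                  # nodes, embedding dimension

# Connectivity of the data (the KG whose links we predict) ...
A_kg = np.zeros((n, n))
kg_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
for i, j in kg_edges:
    A_kg[i, j] = A_kg[j, i] = 1.0

# ... and a *separate* message-passing graph for the GNN. Here, hypothetical
# domain knowledge adds a shortcut 0-4 and drops the 2-3 hop.
A_gnn = A_kg.copy()
A_gnn[0, 4] = A_gnn[4, 0] = 1.0
A_gnn[2, 3] = A_gnn[3, 2] = 0.0

def propagate(A, X, W):
    """One GCN-style layer: symmetrically normalised neighbourhood averaging."""
    A_hat = A + np.eye(len(A))               # add self-loops
    D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

X = rng.normal(size=(n, d))                  # initial node features
W = rng.normal(size=(d, d)) / np.sqrt(d)
H = propagate(A_gnn, X, W)                   # embeddings flow over A_gnn, not A_kg

def link_score(H, i, j):
    return 1.0 / (1.0 + np.exp(-(H[i] @ H[j])))   # sigmoid of dot product

# Negative edges (node pairs absent from the KG) anchor the loss: without
# them, predicting "link" everywhere would be optimal.
pos = kg_edges
neg = [(0, 3), (1, 4), (0, 5)]
loss = -np.mean([np.log(link_score(H, i, j) + 1e-9) for i, j in pos]
                + [np.log(1 - link_score(H, i, j) + 1e-9) for i, j in neg])
print(f"binary cross-entropy on pos+neg edges: {loss:.3f}")
```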
Related papers
- PROXI: Challenging the GNNs for Link Prediction [3.8233569758620063]
We introduce PROXI, which leverages proximity information of node pairs in both graph and attribute spaces.
Standard machine learning (ML) models perform competitively, even outperforming cutting-edge GNN models.
We show that augmenting traditional GNNs with PROXI significantly boosts their link prediction performance.
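A hedged sketch of the PROXI idea follows: pairwise proximity features drawn from both the graph and the attribute space, fed to a standard classifier. The specific feature choices (common neighbours, attribute distance, degree product) are illustrative assumptions, not necessarily the paper's feature set.

```python
import numpy as np

def proxi_features(A, X, pairs):
    """One proximity feature vector per candidate node pair (i, j)."""
    A2 = A @ A                                   # counts length-2 paths
    feats = []
    for i, j in pairs:
        feats.append([
            A2[i, j],                            # graph space: common neighbours
            np.linalg.norm(X[i] - X[j]),         # attribute space: feature distance
            A[i].sum() * A[j].sum(),             # preferential-attachment score
        ])
    return np.array(feats)

# Example: features for two candidate links on a 4-node path graph.
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1.0
X = np.arange(8.0).reshape(4, 2)
print(proxi_features(A, X, pairs=[(0, 2), (0, 3)]))
# These rows can be fed to any off-the-shelf classifier (logistic
# regression, gradient boosting) for link prediction.
```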
arXiv Detail & Related papers (2024-10-02T17:57:38Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
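The toy below illustrates the decomposition idea in its simplest form: for a linear GNN layer, a node's output splits exactly into per-source contributions. DEGREE extends this kind of tracking through nonlinear layers; this sketch stops at the linear case and is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A_hat = A + np.eye(n)                       # self-loops
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, 2))                 # two output classes

target = 0
output = A_hat[target] @ X @ W              # prediction for the target node

# Contribution of each source node s to the target's prediction.
contrib = {s: A_hat[target, s] * (X[s] @ W) for s in range(n)}
assert np.allclose(sum(contrib.values()), output)   # exact decomposition
for s, c in contrib.items():
    print(f"node {s} contributes {c}")
```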
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
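For reference, here is a short sketch of the two graph constructions the paper compares; the point-cloud setup and the value of k are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.normal(size=(8, 3))                  # e.g. atom coordinates

def knn_graph(P, k):
    """Adjacency where each point connects to its k nearest neighbours."""
    D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)              # exclude self-distance
    A = np.zeros_like(D)
    for i in range(len(P)):
        A[i, np.argsort(D[i])[:k]] = 1.0
    return np.maximum(A, A.T)                # symmetrise

def fc_graph(P):
    """Fully-connected adjacency (interactions at every range)."""
    A = np.ones((len(P), len(P)))
    np.fill_diagonal(A, 0.0)
    return A

print(knn_graph(P, k=3).sum(), fc_graph(P).sum())   # nonzero entries differ
```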
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Graph Neural Networks for Graphs with Heterophily: A Survey [98.45621222357397]
We provide a comprehensive review of graph neural networks (GNNs) for heterophilic graphs.
Specifically, we propose a systematic taxonomy that essentially governs existing heterophilic GNN models.
We discuss the correlation between graph heterophily and various graph research domains, aiming to facilitate the development of more effective GNNs.
arXiv Detail & Related papers (2022-02-14T23:07:47Z)
- Toward the Analysis of Graph Neural Networks [1.0412114420493723]
Graph Neural Networks (GNNs) have emerged as a robust framework for graph-structured data analysis.
This paper proposes an approach to analyze GNNs by converting them into Feed Forward Neural Networks (FFNNs) and reusing existing FFNNs analyses.
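The sketch below shows a toy instance of this conversion: on a fixed graph, one GCN layer H = relu(A_hat X W) is equivalent to a feed-forward layer acting on the flattened features, so FFNN analysis tools apply. The construction is a generic linear-algebra fact (vec(A X W) = (A ⊗ Wᵀ) vec(X) under row-major flattening), not the paper's specific pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 4, 3
A = (rng.random((n, n)) < 0.5).astype(float)
A_hat = A + np.eye(n)                            # adjacency with self-loops
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))

gnn_layer = np.maximum(A_hat @ X @ W, 0.0)       # one GCN-style layer

# Equivalent dense FFNN weight on the flattened (row-major) input:
# vec(A_hat @ X @ W) = kron(A_hat, W.T) @ vec(X)
W_ffnn = np.kron(A_hat, W.T)
ffnn_layer = np.maximum(W_ffnn @ X.flatten(), 0.0).reshape(n, d)

assert np.allclose(gnn_layer, ffnn_layer)        # same function, FFNN form
```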
arXiv Detail & Related papers (2022-01-01T04:59:49Z)
- Ego-GNNs: Exploiting Ego Structures in Graph Neural Networks [12.97622530614215]
We show that Ego-GNNs are capable of recognizing closed triangles, which is essential given the prominence of transitivity in real-world graphs.
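A didactic illustration of why ego structures matter: the number of closed triangles through node i equals diag(A³)/2 and is visible inside i's ego-graph, but a single round of plain neighbour averaging cannot see whether i's neighbours are connected to each other. This is a sketch of the motivation, not the Ego-GNN architecture itself.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Each triangle through node i yields two closed 3-walks, hence the /2.
triangles_per_node = np.diag(np.linalg.matrix_power(A, 3)) / 2
print(triangles_per_node)    # nodes 0-2 close one triangle; node 3 closes none

def ego_graph(A, i):
    """Induced subgraph on node i and its neighbours."""
    nodes = np.flatnonzero(A[i]).tolist() + [i]
    return A[np.ix_(nodes, nodes)]

# An Ego-GNN-style model runs message passing inside each ego_graph(A, i),
# so edges *between* i's neighbours (the triangle closures) enter i's update.
```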
arXiv Detail & Related papers (2021-07-22T23:42:23Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs, focusing on how skip connections and depth affect optimization.
Our results provide the first theoretical support for the success of GNNs.
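For concreteness, here is a minimal sketch of the architectural ingredient the paper analyses: a GCN-style update with a skip (residual) connection, stacked to some depth. The update rule and dimensions are illustrative assumptions, not the paper's exact parameterisation.

```python
import numpy as np

def gcn_skip_layer(A_hat, H, W):
    """H_next = relu(A_hat @ H @ W) + H: the skip term preserves H."""
    return np.maximum(A_hat @ H @ W, 0.0) + H

rng = np.random.default_rng(6)
n, d, L = 5, 4, 8                            # L layers: a deeper GNN
A = (rng.random((n, n)) < 0.5).astype(float)
A_hat = A + np.eye(n)
H = rng.normal(size=(n, d))
for _ in range(L):
    H = gcn_skip_layer(A_hat, H, rng.normal(size=(d, d)) / np.sqrt(d))
print(H.shape)
```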
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- GNNLens: A Visual Analytics Approach for Prediction Error Diagnosis of Graph Neural Networks [42.222552078920216]
Graph Neural Networks (GNNs) aim to extend deep learning techniques to graph data.
GNNs behave like a black box with their details hidden from model developers and users.
It is therefore difficult to diagnose possible errors of GNNs.
This paper fills the research gap with an interactive visual analysis tool, GNNLens, to assist model developers and users in understanding and analyzing GNNs.
arXiv Detail & Related papers (2020-11-22T16:09:08Z)
- Incorporating Symbolic Domain Knowledge into Graph Neural Networks [18.798760815214877]
Deep neural networks have been developed specifically for graph-structured data (Graph-based Neural Networks, or GNNs), raising the question of how symbolic domain knowledge can be incorporated into them.
We investigate this aspect of GNNs empirically by employing an operation we term "vertex-enrichment", and denote the corresponding GNNs as "VEGNNs".
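A hedged sketch of the vertex-enrichment idea: node features are augmented with binary indicators of symbolic background relations before the GNN runs. The predicates below are made up for illustration.

```python
import numpy as np

X = np.random.default_rng(4).normal(size=(5, 3))   # original node features

# Symbolic domain knowledge expressed as per-node predicates (hypothetical).
predicates = [
    lambda v: v in {0, 2},        # e.g. "is_aromatic(v)" holds for nodes 0, 2
    lambda v: v >= 3,             # e.g. "is_terminal(v)" holds for nodes 3, 4
]

indicators = np.array([[float(p(v)) for p in predicates]
                       for v in range(len(X))])
X_enriched = np.concatenate([X, indicators], axis=1)  # shape (5, 3 + 2)
# Any standard GNN can now consume X_enriched unchanged.
```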
arXiv Detail & Related papers (2020-10-23T16:22:21Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
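Below is a simplified stand-in for the model-level idea: search for a small graph that maximises a trained model's score for one class. XGNN learns a graph generator with reinforcement learning; this greedy edge-addition loop and the hand-built scoring function are crude substitutes for illustration only.

```python
import itertools
import numpy as np

n = 4
w = np.array([0.2, 1.0, -0.1])            # hypothetical class weights

def model_score(A):
    """Stand-in for a trained GNN's class score, from simple graph stats."""
    n_edges = A.sum() / 2
    n_triangles = np.trace(A @ A @ A) / 6
    max_deg = A.sum(0).max()
    return float(w @ np.array([n_edges, n_triangles, max_deg]))

def with_edge(A, i, j):
    B = A.copy()
    B[i, j] = B[j, i] = 1.0
    return B

A = np.zeros((n, n))
for _ in range(4):                         # grow the explanation graph greedily
    candidates = [(i, j) for i, j in itertools.combinations(range(n), 2)
                  if A[i, j] == 0]
    if not candidates:
        break
    i, j = max(candidates, key=lambda e: model_score(with_edge(A, *e)))
    if model_score(with_edge(A, i, j)) <= model_score(A):
        break                              # no edge improves the class score
    A = with_edge(A, i, j)
print(A)                                   # the graph offered as explanation
```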
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.