Structural Graph Neural Networks with Anatomical Priors for Explainable Chest X-ray Diagnosis
- URL: http://arxiv.org/abs/2601.11987v1
- Date: Sat, 17 Jan 2026 09:41:07 GMT
- Title: Structural Graph Neural Networks with Anatomical Priors for Explainable Chest X-ray Diagnosis
- Authors: Khaled Berkani
- Abstract summary: We present a structural graph reasoning framework that incorporates explicit anatomical priors for explainable vision-based diagnosis. We introduce a custom structural propagation mechanism that explicitly models relative spatial relations as part of the reasoning process. The framework is domain-agnostic and aligns with the broader vision of graph-based reasoning across artificial intelligence systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a structural graph reasoning framework that incorporates explicit anatomical priors for explainable vision-based diagnosis. Convolutional feature maps are reinterpreted as patch-level graphs, where nodes encode both appearance and spatial coordinates, and edges reflect local structural adjacency. Unlike conventional graph neural networks that rely on generic message passing, we introduce a custom structural propagation mechanism that explicitly models relative spatial relations as part of the reasoning process. This design enables the graph to act as an inductive bias for structured inference rather than a passive relational representation. The proposed model jointly supports node-level lesion-aware predictions and graph-level diagnostic reasoning, yielding intrinsic explainability through learned node importance scores without relying on post-hoc visualization techniques. We demonstrate the approach through a chest X-ray case study, illustrating how structural priors guide relational reasoning and improve interpretability. While evaluated in a medical imaging context, the framework is domain-agnostic and aligns with the broader vision of graph-based reasoning across artificial intelligence systems. This work contributes to the growing body of research exploring graphs as computational substrates for structure-aware and explainable learning.
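The abstract's pipeline, reinterpreting a convolutional feature map as a patch-level graph whose nodes carry appearance plus spatial coordinates, then propagating messages gated by relative spatial offsets, can be approximated with a minimal sketch. This is not the authors' implementation: the function names, the distance-based gate, and the weight matrix `W_rel` are all assumptions made here for illustration.

```python
import numpy as np

def build_patch_graph(feat_map):
    # Reinterpret a CNN feature map (C, H, W) as a patch-level graph:
    # each spatial location becomes a node whose feature vector is the
    # appearance descriptor concatenated with its normalized (row, col)
    # coordinates; edges link 4-neighbour patches (local adjacency).
    C, H, W = feat_map.shape
    coords = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"),
                      axis=-1).reshape(-1, 2).astype(float)
    nodes = np.concatenate(
        [feat_map.reshape(C, -1).T,
         coords / [max(H - 1, 1), max(W - 1, 1)]], axis=1)
    edges = [(r * W + c, r * W + c + d)
             for r in range(H) for c in range(W)
             for d in (1, W)
             if (d == 1 and c + 1 < W) or (d == W and r + 1 < H)]
    return nodes, edges, coords

def structural_propagation(nodes, edges, coords, W_rel):
    # One message-passing step in which each message is modulated by the
    # relative spatial offset between the two patches; the distance gate
    # below is a simple stand-in for the paper's structural propagation
    # mechanism, not the published formulation.
    out, norm = nodes.copy(), np.ones(len(nodes))
    for i, j in edges:
        gate = 1.0 / (1.0 + np.linalg.norm(coords[j] - coords[i]))
        out[i] += gate * (nodes[j] @ W_rel)
        out[j] += gate * (nodes[i] @ W_rel)
        norm[i] += gate
        norm[j] += gate
    return out / norm[:, None]
```

For an 8-channel 4x4 feature map this yields a 16-node graph with 10-dimensional node features and 24 adjacency edges; in the paper's setting a learned `W_rel` and a readout over per-node scores would supply the node importance used for intrinsic explainability.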
Related papers
- From Priors to Predictions: Explaining and Visualizing Human Reasoning in a Graph Neural Network Framework [0.32834818175343855]
We formalize inductive biases as explicit, manipulable priors over structure and abstraction. We show that differences in graph-based priors can explain individual differences in human solutions. This work provides a principled, interpretable framework for modeling the representational assumptions and computational dynamics underlying generalization.
arXiv Detail & Related papers (2025-12-19T05:56:48Z) - On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
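The retraining protocol described above can be captured in a few lines. As a hedged sketch of the idea rather than the paper's code: `train_fn` and `eval_fn` stand in for an arbitrary GNN training and evaluation pipeline, and top-k edge selection is one simple way to keep the "important" relationships an attribution method identifies.

```python
import numpy as np

def retrain_evaluation(train_fn, eval_fn, graphs, attributions, keep_ratio=0.2):
    # Retraining-based check of attribution quality: keep only the edges
    # the attribution method ranks as most important, retrain from
    # scratch on the pruned graphs, and compare evaluation scores with a
    # model trained on the full graphs. `train_fn` and `eval_fn` are
    # placeholders for the user's GNN pipeline (assumptions here).
    pruned = []
    for edges, scores in zip(graphs, attributions):
        k = max(1, int(keep_ratio * len(edges)))
        top = np.argsort(scores)[-k:]  # indices of highest-attributed edges
        pruned.append([edges[i] for i in top])
    return eval_fn(train_fn(graphs)), eval_fn(train_fn(pruned))
```

If pruning to the top-ranked edges preserves (or for unimportant edges, degrades) downstream performance after retraining, the attributions are judged faithful.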
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report
Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z) - Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis [9.904990343076433]
We propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams.
arXiv Detail & Related papers (2023-02-02T17:23:28Z) - Automated Coronary Arteries Labeling Via Geometric Deep Learning [13.515293812745343]
We propose an intuitive graph representation method, well suited to use with 3D coordinate data obtained from angiography scans.
We subsequently seek to analyze subject-specific graphs using geometric deep learning.
The proposed models leverage expert annotated labels from 141 patients to learn representations of each coronary segment, while capturing the effects of anatomical variability within the training data.
arXiv Detail & Related papers (2022-12-01T09:31:08Z) - Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, the Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z) - Graph-in-Graph (GiG): Learning interpretable latent graphs in
non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
arXiv Detail & Related papers (2022-04-01T10:01:37Z) - An explainability framework for cortical surface-based deep learning [110.83289076967895]
We develop a framework for cortical surface-based deep learning.
First, we adapted a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
arXiv Detail & Related papers (2022-03-15T23:16:49Z) - Structure-Preserving Graph Kernel for Brain Network Classification [38.707747282886935]
We show how to leverage the naturally available structure within the graph representation to encode prior knowledge in the kernel.
The proposed approach has the advantage of being clinically interpretable.
arXiv Detail & Related papers (2021-11-21T12:03:19Z) - Structural Landmarking and Interaction Modelling: on Resolution Dilemmas
in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z) - Visualization for Histopathology Images using Graph Convolutional Neural
Networks [1.8939984161954087]
We adopt an approach to model histology tissue as a graph of nuclei and develop a graph convolutional network framework for disease diagnosis.
Our visualizations of such networks, trained to distinguish between invasive and in-situ breast cancers and between Gleason 3 and 4 prostate cancers, generate interpretable visual maps.
arXiv Detail & Related papers (2020-06-16T19:14:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.