An explainable three dimension framework to uncover learning patterns: A unified look in variable sulci recognition
- URL: http://arxiv.org/abs/2309.00903v5
- Date: Thu, 28 Nov 2024 22:52:14 GMT
- Title: An explainable three dimension framework to uncover learning patterns: A unified look in variable sulci recognition
- Authors: Michail Mamalakis, Heloise de Vareilles, Atheer Al-Manea, Samantha C. Mitchell, Ingrid Agartz, Lynn Egeland Mørch-Johnsen, Jane Garrison, Jon Simons, Pietro Liò, John Suckling, Graham Murray
- Abstract summary: We develop an explainable artificial intelligence (XAI) 3D-Framework capable of providing accurate, low-complexity global explanations. Our framework integrates statistical features (Shape) and XAI methods (GradCam and SHAP) with dimensionality reduction, ensuring that explanations reflect both model learning and cohort-specific variability. These robust explanations facilitated the identification of critical sub-regions, including the posterior temporal and internal parietal regions, as well as the cingulate region and thalamus.
- Score: 2.960322639147262
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The significant features identified in a representative subset of the dataset during the learning process of an artificial intelligence model are referred to as a 'global' explanation. 3D global explanations are crucial in neuroimaging, where a complex representational space demands more than basic 2D interpretations. However, current studies in the literature often lack the accuracy, comprehensibility, and 3D global explanations needed in neuroimaging and beyond. To address this gap, we developed an explainable artificial intelligence (XAI) 3D-Framework capable of providing accurate, low-complexity global explanations. We evaluated the framework using various 3D deep learning models trained on a well-annotated cohort of 596 structural MRIs. The binary classification task focused on detecting the presence or absence of the paracingulate sulcus, a highly variable brain structure associated with psychosis. Our framework integrates statistical features (Shape) and XAI methods (GradCam and SHAP) with dimensionality reduction, ensuring that explanations reflect both model learning and cohort-specific variability. By combining Shape, GradCam, and SHAP, our framework reduces inter-method variability, enhancing the faithfulness and reliability of global explanations. These robust explanations facilitated the identification of critical sub-regions, including the posterior temporal and internal parietal regions, as well as the cingulate region and thalamus, suggesting potential genetic or developmental influences. Our XAI 3D-Framework leverages global explanations to uncover the broader developmental context of specific cortical features. This approach advances the fields of deep learning and neuroscience by offering insights into normative brain development and atypical trajectories linked to mental illness, paving the way for more reliable and interpretable AI applications in neuroimaging.
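The abstract describes fusing attribution maps from several XAI methods (Shape, GradCam, SHAP) to reduce inter-method variability in the resulting global explanation. A minimal sketch of that fusion idea is shown below; the min-max normalization, simple averaging, and thresholding scheme, along with all function names, are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: combine 3D attribution volumes from several XAI
# methods into one lower-variance consensus explanation. The averaging
# and thresholding choices are assumptions for illustration only.
import numpy as np

def normalize(vol):
    """Min-max scale a 3D attribution volume to [0, 1]."""
    lo, hi = vol.min(), vol.max()
    return np.zeros_like(vol) if hi == lo else (vol - lo) / (hi - lo)

def fuse_explanations(volumes, threshold=0.5):
    """Average the normalized maps, then keep voxels above `threshold`
    as the consensus explanation mask."""
    fused = np.mean([normalize(v) for v in volumes], axis=0)
    return fused, fused >= threshold

# Toy example: three random 8x8x8 maps standing in for Shape/GradCam/SHAP
# attributions of a single structural MRI volume.
rng = np.random.default_rng(0)
maps = [rng.random((8, 8, 8)) for _ in range(3)]
fused, mask = fuse_explanations(maps)
```

Averaging normalized maps is one plausible way to damp method-specific noise while preserving voxels that all three explanation sources agree on; the paper's dimensionality-reduction step would operate on such fused maps across the cohort.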
Related papers
- From Predictions to Explanations: Explainable AI for Autism Diagnosis and Identification of Critical Brain Regions [0.0]
We propose a computer-aided diagnostic framework with two modules. The first module leverages a deep learning model fine-tuned through cross-domain transfer learning for ASD classification. The second module focuses on interpreting the model decisions and identifying critical brain regions.
arXiv Detail & Related papers (2025-09-04T03:48:10Z) - Evaluation of 3D Counterfactual Brain MRI Generation [20.30513265599243]
We convert six generative models into 3D counterfactual approaches by incorporating an anatomy-guided framework based on a causal graph. Our results indicate that anatomically grounded conditioning successfully modifies the targeted anatomical regions.
arXiv Detail & Related papers (2025-08-04T20:20:59Z) - Mapping minds not averages: a scalable subject-specific manifold learning framework for neuroimaging data [0.0]
We introduce a manifold learning framework that can capture subject-specific spatial variations across both structured and temporally unstructured data.
We show that the framework scales efficiently to large datasets and generalizes well to new subjects.
These findings suggest that our framework can uncover clinically relevant subject-specific brain activity patterns.
arXiv Detail & Related papers (2025-04-30T21:40:54Z) - BrainPrompt: Multi-Level Brain Prompt Enhancement for Neurological Condition Identification [18.50236178374499]
BrainPrompt is an innovative framework that enhances Graph Neural Networks (GNNs).
BrainPrompt integrates Large Language Models (LLMs) with knowledge-driven prompts.
We evaluate BrainPrompt on two resting-state functional Magnetic Resonance Imaging (fMRI) datasets from neurological disorders.
arXiv Detail & Related papers (2025-04-12T06:45:16Z) - Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z) - Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly display our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z) - Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of human vision system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - Solving the enigma: Deriving optimal explanations of deep networks [3.9584068556746246]
We propose a novel framework designed to enhance the explainability of deep networks.
Our framework integrates various explanations from established XAI methods and employs a non-explanation to construct an optimal explanation.
Our results suggest that optimal explanations based on specific criteria are derivable.
arXiv Detail & Related papers (2024-05-16T11:49:08Z) - Neuro-Vision to Language: Enhancing Brain Recording-based Visual Reconstruction and Language Interaction [8.63068449082585]
Decoding non-invasive brain recordings is pivotal for advancing our understanding of human cognition.
Our framework integrates 3D brain structures with visual semantics using a Vision Transformer 3D.
We have enhanced the fMRI dataset with diverse fMRI-image-related textual data to support multimodal large model development.
arXiv Detail & Related papers (2024-04-30T10:41:23Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - DUE: Dynamic Uncertainty-Aware Explanation Supervision via 3D Imputation [12.96084790953902]
We propose a Dynamic Uncertainty-aware Explanation supervision (DUE) framework for 3D explanation supervision.
Our proposed framework is validated through comprehensive experiments on diverse real-world medical imaging datasets.
arXiv Detail & Related papers (2024-03-16T06:49:32Z) - UniBrain: Universal Brain MRI Diagnosis with Hierarchical
Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for universal brain MRI diagnosis, termed UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z) - Multi-task Collaborative Pre-training and Individual-adaptive-tokens
Fine-tuning: A Unified Framework for Brain Representation Learning [3.1453938549636185]
We propose a unified framework that combines collaborative pre-training and individual-adaptive-tokens fine-tuning.
The proposed MCIAT achieves state-of-the-art diagnosis performance on the ADHD-200 dataset.
arXiv Detail & Related papers (2023-06-20T08:38:17Z) - NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical
Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z) - Detecting Schizophrenia with 3D Structural Brain MRI Using Deep Learning [12.128463028063146]
Schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain.
Deep learning is capable of almost perfectly distinguishing schizophrenia patients from healthy controls on unseen structural MRI scans.
Subcortical regions and ventricles are the most predictive brain regions.
arXiv Detail & Related papers (2022-06-26T21:44:33Z) - An explainability framework for cortical surface-based deep learning [110.83289076967895]
We develop a framework for cortical surface-based deep learning.
First, we adapted a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
arXiv Detail & Related papers (2022-03-15T23:16:49Z) - Deep Learning Identifies Neuroimaging Signatures of Alzheimer's Disease
Using Structural and Synthesized Functional MRI Data [8.388888908045406]
We propose a potential solution by first learning a structural-to-functional transformation in brain MRI.
We then synthesize spatially matched functional images from large-scale structural scans.
We identify the temporal lobe to be the most predictive structural-region and the parieto-occipital lobe to be the most predictive functional-region of our model.
arXiv Detail & Related papers (2021-04-10T03:16:33Z) - Joint Supervised and Self-Supervised Learning for 3D Real-World
Challenges [16.328866317851187]
Point cloud processing and 3D shape understanding are challenging tasks for which deep learning techniques have demonstrated great potential.
Here we consider several possible scenarios involving synthetic and real-world point clouds where supervised learning fails due to data scarcity and large domain gaps.
We propose to enrich standard feature representations by leveraging self-supervision through a multi-task model that can solve a 3D puzzle while learning the main task of shape classification or part segmentation.
arXiv Detail & Related papers (2020-04-15T23:34:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.