An explainability framework for cortical surface-based deep learning
- URL: http://arxiv.org/abs/2203.08312v1
- Date: Tue, 15 Mar 2022 23:16:49 GMT
- Title: An explainability framework for cortical surface-based deep learning
- Authors: Fernanda L. Ribeiro, Steffen Bollmann, Ross Cunnington, and Alexander
M. Puckett
- Abstract summary: We develop a framework for cortical surface-based deep learning.
First, we adapted a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The emergence of explainability methods has enabled a better comprehension of
how deep neural networks operate through concepts that are easily understood
and implemented by the end user. While most explainability methods have been
designed for traditional deep learning, some have been further developed for
geometric deep learning, in which data are predominantly represented as graphs.
These representations are regularly derived from medical imaging data,
particularly in the field of neuroimaging, in which graphs are used to
represent brain structural and functional wiring patterns (brain connectomes)
and cortical surface models are used to represent the anatomical structure of
the brain. Although explainability techniques have been developed for
identifying important vertices (brain areas) and features for graph
classification, these methods are still lacking for more complex tasks, such as
surface-based modality transfer (or vertex-wise regression). Here, we address
the need for surface-based explainability approaches by developing a framework
for cortical surface-based deep learning, providing a transparent system for
modality transfer tasks. First, we adapted a perturbation-based approach for
use with surface data. Then, we applied our perturbation-based method to
investigate the key features and vertices used by a geometric deep learning
model developed to predict brain function from anatomy directly on a cortical
surface model. We show that our explainability framework is not only able to
identify important features and their spatial location but that it is also
reliable and valid.
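The perturbation-based approach described in the abstract, occluding groups of vertices on the surface and measuring how much the model's prediction degrades, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `predict` callable, the mean-squared-error scoring, and the zero-baseline occlusion are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

def perturbation_importance(predict, features, targets, patch_ids, baseline=0.0):
    """Estimate per-patch importance by occluding vertex features.

    predict   : callable mapping (n_vertices, n_features) -> (n_vertices,) predictions
    features  : (n_vertices, n_features) per-vertex input features
    targets   : (n_vertices,) ground-truth values (e.g. measured brain function)
    patch_ids : (n_vertices,) integer patch label for each vertex
    baseline  : value that occluded features are set to
    """
    base_error = np.mean((predict(features) - targets) ** 2)
    importance = {}
    for p in np.unique(patch_ids):
        perturbed = features.copy()
        perturbed[patch_ids == p] = baseline  # occlude every vertex in patch p
        err = np.mean((predict(perturbed) - targets) ** 2)
        importance[p] = err - base_error  # larger error increase = more important patch
    return importance
```

A patch whose occlusion leaves the prediction error unchanged receives an importance near zero, which is what makes the resulting maps interpretable as spatial saliency on the cortical surface.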
Related papers
- Graph Neural Networks for Brain Graph Learning: A Survey
Graph neural networks (GNNs) have demonstrated a significant advantage in mining graph-structured data.
The use of GNNs to learn brain graph representations for brain disorder analysis has recently gained increasing attention.
In this paper, we aim to bridge this gap by reviewing brain graph learning works that utilize GNNs.
arXiv Detail & Related papers (2024-06-01T02:47:39Z)
- Graph Neural Operators for Classification of Spatial Transcriptomics Data
We incorporate various graph neural network approaches to validate the efficacy of neural operators for predicting brain regions in mouse brain tissue samples.
The graph neural operator approach achieved an F1 score of nearly 72%, outperforming all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data
We present, for the first time, feature visualization results for convolutional neural networks (CNNs) trained on neuroimaging data.
We have trained CNNs for different tasks, including sex classification and artificial lesion classification, based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks
We use CycleGAN, a variant of deep generative models, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train, and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Generalized Shape Metrics on Neural Representations
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
arXiv Detail & Related papers (2021-10-27T19:48:55Z)
- Graph-Based Deep Learning for Medical Diagnosis and Analysis: Past, Present and Future
It has become critical to explore how machine learning, and specifically deep learning methods, can be exploited to analyse healthcare data.
A major limitation of existing methods has been the focus on grid-like data.
Graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system.
arXiv Detail & Related papers (2021-05-27T13:32:45Z)
- A neural anisotropic view of underspecification in deep learning
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Neural Topological SLAM for Visual Navigation
We design topological representations for space that leverage semantics and afford approximate geometric reasoning.
We describe supervised learning-based algorithms that can build, maintain and use such representations under noisy actuation.
arXiv Detail & Related papers (2020-05-25T17:56:29Z)
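The "Generalized Shape Metrics" entry above concerns adjusting similarity measures between neural representations so that they satisfy the triangle inequality. One standard construction with this property is the angular Procrustes distance between two activation matrices; the sketch below is a minimal illustration under the assumption of equal-shaped, real-valued matrices, not that paper's exact formulation.

```python
import numpy as np

def angular_procrustes_distance(X, Y):
    """Rotation-invariant distance between two representation matrices.

    X, Y : (n_samples, n_features) activation matrices of equal shape.
    Returns an angle in [0, pi/2] that obeys the metric axioms
    (identity, symmetry, triangle inequality) up to feature rotations.
    """
    # Center each feature column and normalize to unit Frobenius norm.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # The best rotational alignment gives cos(distance) equal to the
    # nuclear norm (sum of singular values) of X^T Y.
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return float(np.arccos(np.clip(s.sum(), -1.0, 1.0)))
```

Because the result is a true metric, representations of many networks or brain areas can be embedded and clustered with standard distance-based tools, which plain correlation-style similarities do not support.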