Gray Matter Segmentation in Ultra High Resolution 7 Tesla ex vivo T2w
MRI of Human Brain Hemispheres
- URL: http://arxiv.org/abs/2110.07711v1
- Date: Thu, 14 Oct 2021 21:01:18 GMT
- Title: Gray Matter Segmentation in Ultra High Resolution 7 Tesla ex vivo T2w
MRI of Human Brain Hemispheres
- Authors: Pulkit Khandelwal, Shokufeh Sadaghiani, Sadhana Ravikumar, Sydney Lim,
Sanaz Arezoumandan, Claire Peterson, Eunice Chung, Madigan Bedard, Noah Capp,
Ranjit Ittyerah, Elyse Migdal, Grace Choi, Emily Kopp, Bridget Loja, Eusha
Hasan, Jiacheng Li, Karthik Prabhakaran, Gabor Mizsei, Marianna Gabrielyan,
Theresa Schuck, John Robinson, Daniel Ohm, Edward Lee, John Q. Trojanowski,
Corey McMillan, Murray Grossman, David Irwin, M. Dylan Tisdall, Sandhitsu R.
Das, Laura E.M. Wisse, David A. Wolk, Paul A. Yushkevich
- Abstract summary: We present a high resolution 7 Tesla dataset of 32 ex vivo human brain specimens.
We benchmark the cortical mantle segmentation performance of nine neural network architectures.
We show excellent generalization across whole brain hemispheres in different specimens, as well as on unseen images acquired at different magnetic field strengths and with different imaging sequences.
- Score: 9.196429840458629
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ex vivo MRI of the brain provides remarkable advantages over in vivo MRI for
visualizing and characterizing detailed neuroanatomy. However, automated
cortical segmentation methods in ex vivo MRI are not well developed, primarily
due to limited availability of labeled datasets, and heterogeneity in scanner
hardware and acquisition protocols. In this work, we present a high resolution
7 Tesla dataset of 32 ex vivo human brain specimens. We benchmark the cortical
mantle segmentation performance of nine neural network architectures, trained
and evaluated using manually-segmented 3D patches sampled from specific
cortical regions, and show excellent generalization across whole brain
hemispheres in different specimens, as well as on unseen images acquired at
different magnetic field strengths and with different imaging sequences.
Finally, we provide
cortical thickness measurements across key regions in 3D ex vivo human brain
images. Our code and processed datasets are publicly available at
https://github.com/Pulkit-Khandelwal/picsl-ex-vivo-segmentation.
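
To make the patch-based setup concrete, below is a minimal sketch of sliding-window inference with a stand-in 3D network. The tiny model, the patch size of 64, and the stride of 32 are illustrative assumptions only, not any of the nine benchmarked architectures; the authors' actual training and inference code lives in the repository linked above.

```python
# Minimal sketch of patch-based 3D segmentation inference (sliding window).
# The tiny 3D CNN stands in for the nine benchmarked architectures; patch
# size, stride, and channel counts are illustrative assumptions only.
import torch
import torch.nn as nn

class TinyNet3D(nn.Module):
    """Placeholder 3D network: two conv blocks and a 1x1x1 classifier head."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(16, n_classes, 1)

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def sliding_window_segment(volume, model, patch=64, stride=32):
    """Aggregate patch-wise softmax predictions over a whole-hemisphere volume."""
    model.eval()
    D, H, W = volume.shape
    n_classes = model.head.out_channels
    probs = torch.zeros(n_classes, D, H, W)
    counts = torch.zeros(1, D, H, W)
    for z in range(0, max(D - patch, 0) + 1, stride):
        for y in range(0, max(H - patch, 0) + 1, stride):
            for x in range(0, max(W - patch, 0) + 1, stride):
                block = volume[z:z+patch, y:y+patch, x:x+patch]
                logits = model(block[None, None])           # (1, C, p, p, p)
                probs[:, z:z+patch, y:y+patch, x:x+patch] += logits.softmax(1)[0]
                counts[:, z:z+patch, y:y+patch, x:x+patch] += 1
    return (probs / counts.clamp(min=1)).argmax(0)          # label map (D, H, W)

# Example on a synthetic volume; real T2w hemispheres would be loaded from NIfTI.
if __name__ == "__main__":
    vol = torch.rand(96, 96, 96)
    seg = sliding_window_segment(vol, TinyNet3D())
    print(seg.shape, seg.unique())
```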
Related papers
- fMRI-3D: A Comprehensive Dataset for Enhancing fMRI-based 3D Reconstruction [50.534007259536715]
We present the fMRI-3D dataset, which includes data from 15 participants and showcases a total of 4768 3D objects.
We propose MinD-3D, a novel framework designed to decode 3D visual information from fMRI signals.
arXiv Detail & Related papers (2024-09-17T16:13:59Z)
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human visual system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- Surface-based parcellation and vertex-wise analysis of ultra high-resolution ex vivo 7 tesla MRI in Alzheimer's disease and related dementias [32.61675068837929]
We present a one-of-its-kind dataset of 82 ex vivo T2w whole-brain-hemisphere MRI scans at 0.3 mm isotropic resolution spanning Alzheimer's disease and related dementias.
We adapted and developed a fast and easy-to-use automated surface-based pipeline to parcellate, for the first time, ultra high-resolution ex vivo brain tissue at the native subject space resolution using the Desikan-Killiany-Tourville (DKT) brain atlas.
arXiv Detail & Related papers (2024-03-28T15:27:34Z)
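
As a rough illustration of how a DKT-style parcellation can feed regional measurements (such as the cortical thickness values reported in the main paper), the sketch below averages a voxel-wise thickness map within each label. The file names, label ids, and the voxel-wise shortcut are assumptions for illustration; the cited pipeline itself operates on surfaces at native resolution.

```python
# Sketch: regional summary statistics from a DKT-style parcellation volume.
# File names and the label-to-name mapping are illustrative assumptions;
# the cited pipeline is surface-based, not this voxel-wise shortcut.
import nibabel as nib
import numpy as np

def regional_means(parc_path, thickness_path, labels):
    """Mean of a voxel-wise thickness map within each parcellation label."""
    parc = nib.load(parc_path).get_fdata().astype(int)
    thick = nib.load(thickness_path).get_fdata()
    stats = {}
    for label_id, name in labels.items():
        mask = parc == label_id
        stats[name] = float(thick[mask].mean()) if mask.any() else float("nan")
    return stats

# Hypothetical label ids; real DKT parcellations define their own lookup table.
DKT_SUBSET = {1006: "entorhinal", 1009: "inferiortemporal", 1024: "precentral"}
# print(regional_means("parc.nii.gz", "thickness.nii.gz", DKT_SUBSET))
```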
- H-SynEx: Using synthetic images and ultra-high resolution ex vivo MRI for hypothalamus subregion segmentation [1.0486773259892048]
We introduce H-SynEx, a machine learning method for automated segmentation of hypothalamic subregions.
H-SynEx generalizes across different MRI sequences and resolutions without retraining.
Our method was able to discriminate between controls and Alzheimer's disease patients on FLAIR images with 5 mm spacing.
arXiv Detail & Related papers (2024-01-30T15:36:02Z)
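
The "synthetic images" idea referenced by H-SynEx can be illustrated generically: draw one random intensity per label in a segmentation map, blur, and add noise, so a network trained on such images never overfits to a single contrast. This is a SynthSeg-style sketch with assumed parameters, not the H-SynEx generator.

```python
# Rough sketch of generating a synthetic training image from a label map by
# sampling one random intensity per label (generic SynthSeg-style idea; the
# intensity range, blur, and noise level are illustrative assumptions).
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_image_from_labels(labels, rng=None, blur_sigma=1.0, noise_std=0.05):
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(labels.shape, dtype=np.float32)
    for label_id in np.unique(labels):
        image[labels == label_id] = rng.uniform(0.1, 1.0)   # one intensity per label
    image = gaussian_filter(image, blur_sigma)               # mimic partial volume
    return image + rng.normal(0.0, noise_std, labels.shape)  # scanner-like noise

# toy 3D label map with three nested "structures"
toy = np.zeros((32, 32, 32), dtype=int)
toy[8:24, 8:24, 8:24] = 1
toy[12:20, 12:20, 12:20] = 2
print(synth_image_from_labels(toy).shape)
```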
- Automated deep learning segmentation of high-resolution 7 T postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases [33.191270998887326]
We present a high-resolution dataset of 135 postmortem human brain tissue specimens imaged at 0.3 mm³ isotropic resolution using a T2w sequence on a 7T whole-body MRI scanner.
We show generalization across whole brain hemispheres in different specimens, as well as on unseen images acquired at 0.28 mm³ and 0.16 mm³ isotropic resolution with a T2*w FLASH sequence at 7T.
arXiv Detail & Related papers (2023-03-21T23:44:02Z)
- gACSON software for automated segmentation and morphology analyses of myelinated axons in 3D electron microscopy [55.78588835407174]
We introduce a freely available gACSON software for visualization, segmentation, assessment, and morphology analysis of myelinated axons in 3D-EM volumes.
gACSON automatically segments the intra-axonal space of myelinated axons and their corresponding myelin sheaths.
It analyzes the morphology of myelinated axons, such as axonal diameter, axonal eccentricity, myelin thickness, or g-ratio.
arXiv Detail & Related papers (2021-12-13T08:17:15Z)
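
The morphology measures listed for gACSON reduce to simple geometry once matched axon and fiber (axon plus myelin) masks are available: the g-ratio is the inner (axonal) diameter divided by the outer (fiber) diameter, and myelin thickness is half their difference. The sketch below applies those definitions to a single 2D cross-section with scikit-image; it is a generic illustration, not the gACSON implementation.

```python
# Sketch: axon morphology from matched axon and fiber (axon + myelin) masks in
# one 2D cross-section, using equivalent diameters from scikit-image.
# g-ratio = d_axon / d_fiber; myelin thickness = (d_fiber - d_axon) / 2.
# Generic geometric illustration only, not the gACSON implementation.
import numpy as np
from skimage.measure import label, regionprops

def fiber_metrics(axon_mask, fiber_mask, pixel_size_um=0.01):
    """Diameter, eccentricity, myelin thickness, and g-ratio for one fiber crop."""
    axon = max(regionprops(label(axon_mask)), key=lambda p: p.area)
    fiber = max(regionprops(label(fiber_mask)), key=lambda p: p.area)
    d_axon = axon.equivalent_diameter * pixel_size_um
    d_fiber = fiber.equivalent_diameter * pixel_size_um
    return {
        "axon_diameter_um": d_axon,
        "axon_eccentricity": axon.eccentricity,
        "myelin_thickness_um": (d_fiber - d_axon) / 2.0,
        "g_ratio": d_axon / d_fiber,
    }

# toy crop: a ~30-px-diameter fiber whose inner ~20-px-diameter core is the axon
yy, xx = np.mgrid[:64, :64]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
print(fiber_metrics(r2 <= 10 ** 2, r2 <= 15 ** 2))
```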
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of dataset size on performance.
Our best-performing 3D VAE with input erasing achieves an average Dice score of 31.40%, compared to 25.76% for the 2D VAE.
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
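
The "3D input erasing" used in that anomaly-segmentation study amounts to zeroing a random cuboid of the input during training, with anomalies scored afterwards by reconstruction error. The snippet below sketches both pieces under assumed sizes and omits the VAE itself; it is not the authors' model.

```python
# Sketch: random 3D input erasing (training augmentation) and a reconstruction-
# error anomaly map (test time). The autoencoder is omitted; erase sizes and
# the error measure are illustrative assumptions, not the paper's settings.
import torch

def random_erase_3d(volume, max_frac=0.3, generator=None):
    """Zero out a random cuboid covering up to max_frac of each spatial dimension."""
    out = volume.clone()
    D, H, W = volume.shape[-3:]
    sizes = [int(torch.randint(1, int(s * max_frac) + 1, (1,), generator=generator))
             for s in (D, H, W)]
    z = int(torch.randint(0, D - sizes[0] + 1, (1,), generator=generator))
    y = int(torch.randint(0, H - sizes[1] + 1, (1,), generator=generator))
    x = int(torch.randint(0, W - sizes[2] + 1, (1,), generator=generator))
    out[..., z:z+sizes[0], y:y+sizes[1], x:x+sizes[2]] = 0.0
    return out

def anomaly_map(volume, reconstruction):
    """Voxel-wise anomaly score as squared reconstruction error."""
    return (volume - reconstruction) ** 2

vol = torch.rand(1, 1, 64, 64, 64)
erased = random_erase_3d(vol)                  # stand-in for the VAE output
print(anomaly_map(vol, erased).mean().item())
```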
- Brain Tumor Segmentation and Survival Prediction using 3D Attention UNet [11.961432794560103]
We develop an attention convolutional neural network (CNN) to segment brain tumors from magnetic resonance images (MRI).
We predict the survival rate using various machine learning methods.
For survival prediction, we extract some novel radiomic features based on geometry, location, the shape of the segmented tumor and combine them with clinical information to estimate the survival duration for each patient.
arXiv Detail & Related papers (2021-04-02T11:04:40Z)
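
The survival-prediction pipeline above boils down to extracting geometric and location features from the segmented tumor, concatenating clinical variables, and fitting a regressor. The sketch below uses a few volume/centroid/bounding-box features, an age covariate, and a random forest as generic stand-ins for the paper's radiomic feature set and models.

```python
# Sketch: simple geometry/location features from a segmented tumor mask plus a
# clinical variable, fed to a regressor predicting survival in days. The
# feature set and model are generic stand-ins for the paper's radiomic pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tumor_features(mask, voxel_volume_mm3=1.0):
    coords = np.argwhere(mask)
    centroid = coords.mean(axis=0)                  # location inside the volume
    extents = coords.max(axis=0) - coords.min(axis=0) + 1
    return np.array([
        mask.sum() * voxel_volume_mm3,              # tumor volume
        *centroid,                                  # centroid (z, y, x)
        *extents,                                   # bounding-box shape
    ], dtype=float)

# toy training set: random masks, ages, and survival targets for illustration only
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(20):
    m = rng.random((32, 32, 32)) > 0.97
    age = rng.uniform(40, 80)
    X.append(np.concatenate([tumor_features(m), [age]]))
    y.append(rng.uniform(100, 1000))                # fake survival durations
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([X[0]])[0])
```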
- 3D Reconstruction and Segmentation of Dissection Photographs for MRI-free Neuropathology [2.4984854046383624]
We present methodology to reconstruct and segment full brain image volumes from brain dissection photographs.
The 3D reconstruction is achieved via a joint registration framework, which uses a reference volume other than MRI.
We evaluate our methods on a dataset with 24 brains, using Dice scores and volume correlations.
arXiv Detail & Related papers (2020-09-11T18:21:00Z)
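
The evaluation protocol named above is standard: Dice overlap, 2|A∩B| / (|A| + |B|), between reconstructed-photograph and reference segmentations, plus the Pearson correlation of per-case structure volumes. A minimal sketch of both metrics:

```python
# Sketch of the two evaluation metrics named above: Dice overlap between two
# binary segmentations and Pearson correlation of per-case structure volumes.
import numpy as np

def dice(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_correlation(vols_pred, vols_ref):
    """Pearson correlation between predicted and reference volumes (one per case)."""
    return float(np.corrcoef(vols_pred, vols_ref)[0, 1])

# toy example with two 8x8x8 masks and made-up per-case volumes
a = np.zeros((8, 8, 8), bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), bool); b[3:7, 2:6, 2:6] = True
print(dice(a, b), volume_correlation([10.0, 12.0, 9.5], [11.0, 12.5, 9.0]))
```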
- Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion tensor imaging to T1 brain MRI scans.
We provide voxel-wise 3D CNN interpretations, comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.