Enhancing Hierarchical Transformers for Whole Brain Segmentation with Intracranial Measurements Integration
- URL: http://arxiv.org/abs/2309.04071v2
- Date: Wed, 10 Apr 2024 21:09:15 GMT
- Title: Enhancing Hierarchical Transformers for Whole Brain Segmentation with Intracranial Measurements Integration
- Authors: Xin Yu, Yucheng Tang, Qi Yang, Ho Hin Lee, Shunxing Bao, Yuankai Huo, Bennett A. Landman
- Abstract summary: We enhance the hierarchical transformer UNesT to segment the whole brain into 133 classes together with TICV/PFV labels simultaneously.
We show that our model conducts precise TICV/PFV estimation while maintaining comparable performance on the 132 brain regions.
- Score: 22.34212938866075
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Whole brain segmentation with magnetic resonance imaging (MRI) enables the non-invasive measurement of brain regions, including total intracranial volume (TICV) and posterior fossa volume (PFV). Enhancing the existing whole brain segmentation methodology to incorporate intracranial measurements offers a heightened level of comprehensiveness in the analysis of brain structures. Despite its potential, the task of generalizing deep learning techniques for intracranial measurements faces data availability constraints due to limited manually annotated atlases encompassing whole brain and TICV/PFV labels. In this paper, we enhance the hierarchical transformer UNesT for whole brain segmentation to segment the whole brain into 133 classes and TICV/PFV simultaneously. To address the problem of data scarcity, the model is first pretrained on 4859 T1-weighted (T1w) 3D volumes sourced from 8 different sites. These volumes are processed through a multi-atlas segmentation pipeline for label generation, while TICV/PFV labels are unavailable. Subsequently, the model is finetuned with 45 T1w 3D volumes from the Open Access Series of Imaging Studies (OASIS), where both 133 whole brain classes and TICV/PFV labels are available. We evaluate our method with Dice similarity coefficients (DSC). We show that our model is able to conduct precise TICV/PFV estimation while maintaining performance on the 132 brain regions at a comparable level. Code and trained model are available at: https://github.com/MASILab/UNesT/tree/main/wholebrainSeg.
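The evaluation above relies on Dice similarity coefficients (DSC) computed per class over the 132 brain regions and the TICV/PFV labels. As a point of reference, here is a minimal NumPy sketch of per-class DSC on integer label volumes; the example label IDs for TICV/PFV are hypothetical and are not taken from the released model's label map.

```python
# Minimal sketch of per-class Dice similarity coefficient (DSC) evaluation.
# Assumes `pred` and `gt` are integer label volumes of identical shape.
# The TICV/PFV label IDs below are illustrative assumptions only.
import numpy as np

def dice_per_class(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for a single label."""
    p = pred == label
    g = gt == label
    denom = p.sum() + g.sum()
    if denom == 0:
        return float("nan")  # label absent from both volumes
    return 2.0 * np.logical_and(p, g).sum() / denom

def mean_dice(pred: np.ndarray, gt: np.ndarray, labels) -> float:
    """Average DSC over a set of labels, ignoring labels absent from both volumes."""
    return float(np.nanmean([dice_per_class(pred, gt, l) for l in labels]))

# Usage (hypothetical label map): regions 1..132, plus 133 for TICV and 134 for PFV.
# brain_dsc = mean_dice(pred, gt, labels=range(1, 133))
# ticv_dsc  = dice_per_class(pred, gt, label=133)
# pfv_dsc   = dice_per_class(pred, gt, label=134)
```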
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of chromosome arms 1p/19q is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - RadGenome-Chest CT: A Grounded Vision-Language Dataset for Chest CT Analysis [56.57177181778517]
RadGenome-Chest CT is a large-scale, region-guided 3D chest CT interpretation dataset based on CT-RATE.
We leverage the latest powerful universal segmentation and large language models to extend the original datasets.
arXiv Detail & Related papers (2024-04-25T17:11:37Z) - Automated deep learning segmentation of high-resolution 7 T postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases [33.191270998887326]
We present a high-resolution dataset of 135 postmortem human brain tissue specimens imaged at 0.3 mm$^3$ isotropic resolution using a T2w sequence on a 7T whole-body MRI scanner.
We show generalization capabilities across whole brain hemispheres in different specimens, and also on unseen images acquired with a T2*w FLASH sequence at 0.28 mm$^3$ and 0.16 mm$^3$ isotropic resolution at 7T.
arXiv Detail & Related papers (2023-03-21T23:44:02Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - Fighting the scanner effect in brain MRI segmentation with a progressive level-of-detail network trained on multi-site data [1.6379393441314491]
LOD-Brain is a 3D convolutional neural network with progressive levels of detail that can segment brain data from any site.
It produces state-of-the-art results, with no significant difference in performance between internal and external sites.
Its portability opens the way for large scale application across different healthcare institutions, patient populations, and imaging technology manufacturers.
arXiv Detail & Related papers (2022-11-04T12:15:18Z) - Superficial White Matter Analysis: An Efficient Point-cloud-based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions [68.41088365582831]
White matter parcellation classifies tractography streamlines into clusters or anatomically meaningful tracts.
Most parcellation methods focus on the deep white matter (DWM), whereas fewer methods address the superficial white matter (SWM) due to its complexity.
We propose a novel two-stage deep-learning-based framework, Superficial White Matter Analysis (SupWMA), that performs an efficient parcellation of 198 SWM clusters from whole-brain tractography.
arXiv Detail & Related papers (2022-07-18T23:07:53Z) - Building Brains: Subvolume Recombination for Data Augmentation in Large Vessel Occlusion Detection [56.67577446132946]
A large training data set is required for a standard deep learning-based model to learn this strategy from data.
We propose an augmentation method that generates artificial training samples by recombining vessel tree segmentations of the hemispheres from different patients.
In line with the augmentation scheme, we use a 3D-DenseNet fed with task-specific input, fostering a side-by-side comparison between the hemispheres.
arXiv Detail & Related papers (2022-05-05T10:31:57Z) - 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare a 2D variational autoencoder (VAE) to its 3D counterpart, propose 3D input erasing, and systematically study the impact of the data set size on the performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
arXiv Detail & Related papers (2021-09-14T09:17:27Z) - LIFE: A Generalizable Autodidactic Pipeline for 3D OCT-A Vessel Segmentation [5.457168581192045]
Recent deep learning algorithms produced promising vascular segmentation results.
However, 3D retinal vessel segmentation remains difficult due to the lack of manually annotated training data.
We propose a learning-based method that is only supervised by a self-synthesized modality.
arXiv Detail & Related papers (2021-07-09T07:51:33Z) - HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation [17.756591105686]
This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorization of 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BRATS 2020 testing set show that our proposed approach achieves Dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.
arXiv Detail & Related papers (2020-12-12T09:09:04Z) - Transfer Learning for Brain Tumor Segmentation [0.6408773096179187]
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery.
Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks.
In this work, we construct FCNs with pretrained convolutional encoders. We show that we can stabilize the training process this way and achieve an improvement with respect to Dice scores and Hausdorff distances; a minimal sketch of this encoder-reuse idea follows this entry.
arXiv Detail & Related papers (2019-12-28T12:45:34Z)
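The transfer-learning entry above builds FCNs on top of pretrained convolutional encoders. The following is a hedged, illustrative sketch of that idea, assuming a torchvision ResNet-18 backbone with a basic FCN-style head; it is not the authors' actual architecture, which the summary does not specify.

```python
# Illustrative sketch of an FCN built on a pretrained encoder (assumption:
# torchvision ResNet-18 backbone; not the architecture used in the cited paper).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class FCNWithPretrainedEncoder(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Drop the average pool and fully connected layers; keep the conv stages.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # 512 is the channel width of ResNet-18's final stage.
        self.head = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)      # (B, 512, H/32, W/32)
        logits = self.head(feats)    # coarse per-class scores
        # Upsample back to the input resolution, as in a basic FCN.
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)

# Usage: MRI slices are single-channel, so replicate to 3 channels for the
# ImageNet-pretrained encoder (one common workaround).
# model = FCNWithPretrainedEncoder(num_classes=4)
# out = model(torch.randn(1, 1, 224, 224).repeat(1, 3, 1, 1))  # (1, 4, 224, 224)
```

Fine-tuning such a model typically starts from the pretrained encoder weights and trains the segmentation head (optionally freezing the encoder early on), which is one way reusing pretrained features can stabilize training, as the entry suggests.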