Whole Brain Segmentation with Full Volume Neural Network
- URL: http://arxiv.org/abs/2110.15601v1
- Date: Fri, 29 Oct 2021 08:00:14 GMT
- Title: Whole Brain Segmentation with Full Volume Neural Network
- Authors: Yeshu Li, Jonathan Cui, Yilun Sheng, Xiao Liang, Jingdong Wang, Eric
I-Chao Chang and Yan Xu
- Abstract summary: Whole brain segmentation is an important task that segments the whole brain volume into anatomically labeled regions-of-interest.
Existing solutions usually segment the brain image by classifying voxels, or by labeling slices or sub-volumes separately.
We propose to adopt a full volume framework, which feeds the full volume brain image into the segmentation network and directly outputs the segmentation result for the whole brain volume.
- Score: 41.2566839481976
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Whole brain segmentation is an important neuroimaging task that segments the
whole brain volume into anatomically labeled regions-of-interest. Convolutional
neural networks have demonstrated good performance in this task. Existing
solutions usually segment the brain image by classifying voxels, or by labeling
slices or sub-volumes separately. Their representation learning is based on
parts of the whole volume, whereas their labeling result is produced by
aggregating partial segmentations. Learning and inference with incomplete
information can lead to a sub-optimal final segmentation. To
address these issues, we propose to adopt a full volume framework, which feeds
the full volume brain image into the segmentation network and directly outputs
the segmentation result for the whole brain volume. The framework makes use of
complete information in each volume and can be implemented easily. An effective
instance of this framework is given subsequently. We adopt the $3$D
high-resolution network (HRNet) for learning spatially fine-grained
representations and the mixed precision training scheme for memory-efficient
training. Extensive experimental results on a publicly available $3$D MRI brain
dataset show that our proposed model advances the state of the art in
segmentation performance. Source code is publicly available at
https://github.com/microsoft/VoxHRNet.
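The two ingredients named in the abstract, feeding the full brain volume into a $3$D segmentation network and training it with mixed precision, can be illustrated with a short PyTorch sketch. This is not the VoxHRNet code from the linked repository; the tiny network, volume size and label count below are placeholder assumptions used only to show the training pattern.

```python
# Minimal sketch of full-volume 3D segmentation with mixed precision training.
# NOT the VoxHRNet implementation; the tiny network, shapes and label count
# are placeholders chosen only to illustrate the idea described in the abstract.
import torch
import torch.nn as nn

class TinyFullVolumeNet(nn.Module):
    """Stand-in for a 3D segmentation backbone (e.g. a 3D HRNet)."""
    def __init__(self, in_ch=1, num_classes=50):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(16, num_classes, 1)  # per-voxel class logits

    def forward(self, x):          # x: (B, 1, D, H, W) -- the whole brain volume
        return self.head(self.body(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyFullVolumeNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

# One synthetic "whole brain" volume and label map (placeholder data).
volume = torch.randn(1, 1, 64, 64, 64, device=device)
labels = torch.randint(0, 50, (1, 64, 64, 64), device=device)

for step in range(2):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        logits = model(volume)                      # (B, C, D, H, W)
        loss = nn.functional.cross_entropy(logits, labels)
    scaler.scale(loss).backward()   # scaled gradients keep fp16 training stable
    scaler.step(optimizer)
    scaler.update()
```

Keeping the whole volume in a single forward pass is what makes the autocast/GradScaler pair relevant here: half-precision activations substantially reduce memory, which is the main obstacle to full-volume training.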
Related papers
- Contextual Embedding Learning to Enhance 2D Networks for Volumetric Image Segmentation [5.995633685952995]
2D convolutional neural networks (CNNs) struggle to exploit the spatial correlation of volumetric data.
We propose a contextual embedding learning approach to help 2D CNNs capture spatial information properly.
Our approach leverages the learned embedding and slice-wise neighbor matching as a soft cue to guide the network.
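As a rough illustration of giving a 2D network volumetric context, the sketch below embeds each slice, uses the similarity between neighboring slice embeddings as a soft consistency cue, and segments a slice from a small neighborhood of embeddings. It is a generic toy, not this paper's architecture; the layer sizes and the three-slice neighborhood are assumptions.

```python
# Rough illustration (not the paper's exact method): per-slice embeddings,
# a neighbor-similarity soft cue, and segmentation from a 3-slice context.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Conv2d(1, 8, 3, padding=1)       # toy per-slice embedding network
seg_head = nn.Conv2d(8 * 3, 4, 1)           # toy head over a 3-slice context

volume = torch.randn(32, 1, 128, 128)        # 32 slices of one volume, (S, 1, H, W)
feats = embed(volume)                         # per-slice embeddings, (S, 8, H, W)

# Soft cue: cosine similarity between neighboring slice embeddings.
sim = F.cosine_similarity(feats[:-1], feats[1:], dim=1)   # (S-1, H, W)
consistency = (1 - sim).mean()                # could serve as an auxiliary loss

# Segment slice i from a small neighborhood of embeddings (i-1, i, i+1).
i = 10
context = torch.cat([feats[i - 1], feats[i], feats[i + 1]], dim=0).unsqueeze(0)
logits = seg_head(context)                    # (1, num_classes, H, W)
```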
arXiv Detail & Related papers (2024-04-02T08:17:39Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections between segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embeddings.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Unsupervised Segmentation of Fetal Brain MRI using Deep Learning Cascaded Registration [2.494736313545503]
Traditional deep learning-based automatic segmentation requires extensive training data with ground-truth labels.
We propose a novel method based on multi-atlas segmentation that accurately segments multiple tissues without relying on labeled data for training.
Our method employs a cascaded deep learning network for 3D image registration, which computes small, incremental deformations to the moving image to align it precisely with the fixed image.
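A minimal sketch of the cascaded idea: each stage predicts a small displacement field from the fixed image and the current warped image, and the moving image is warped step by step. This is a generic illustration of incremental warping with grid_sample, not the paper's registration network; the toy one-layer "stage", the 0.1 step scale and the three-stage cascade are assumptions.

```python
# Sketch of cascaded registration via small incremental warps (generic toy).
import torch
import torch.nn as nn
import torch.nn.functional as F

def identity_grid(shape, device):
    # Normalized sampling grid in [-1, 1] for F.grid_sample, shape (1, D, H, W, 3).
    d, h, w = shape
    zs = torch.linspace(-1, 1, d, device=device)
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    return torch.stack((x, y, z), dim=-1).unsqueeze(0)  # grid_sample expects x, y, z order

def warp(img, flow):
    # img: (1, 1, D, H, W); flow: (1, D, H, W, 3) displacement in normalized coords.
    grid = identity_grid(img.shape[2:], img.device) + flow
    return F.grid_sample(img, grid, align_corners=True)

stage = nn.Conv3d(2, 3, 3, padding=1)   # toy "stage": fixed + warped image -> 3D flow

fixed = torch.randn(1, 1, 32, 32, 32)
moving = torch.randn(1, 1, 32, 32, 32)

warped = moving
for _ in range(3):                        # cascade of small, incremental deformations
    flow = stage(torch.cat([fixed, warped], dim=1))         # (1, 3, D, H, W)
    flow = 0.1 * torch.tanh(flow).permute(0, 2, 3, 4, 1)    # keep each step small
    warped = warp(warped, flow)
```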
arXiv Detail & Related papers (2023-07-07T13:17:12Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
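The shared-encoder, two-decoder layout described above can be sketched as follows; the layer sizes, the random erasing mask and the L1 inpainting loss are illustrative assumptions, not the paper's exact design.

```python
# Sketch of a dual-task network: one shared encoder, two independent decoders
# (segmentation and inpainting). Layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_decoder = nn.Conv2d(32, num_classes, 1)   # per-pixel class logits
        self.inpaint_decoder = nn.Conv2d(32, 1, 1)         # reconstructed intensities

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_decoder(feats), self.inpaint_decoder(feats)

net = DualTaskNet()
image = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.8).float()        # region to erase
seg_logits, recon = net(image * (1 - mask))             # inpaint the erased region
inpaint_loss = ((recon - image) * mask).abs().mean()    # self-supervised signal
```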
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
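A dynamic head of this kind can be sketched as a controller that maps a task code to convolution weights, which are then applied to shared backbone features. The sketch below is a generic illustration of dynamic filters, not TransDoDNet itself; the 1x1x1 kernels, channel counts and one-hot task encoding are assumptions.

```python
# Sketch of a dynamic segmentation head: a controller produces convolution
# weights conditioned on a task embedding, applied with F.conv3d (generic toy).
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_ch, num_tasks, out_ch = 8, 7, 2
controller = nn.Linear(num_tasks, out_ch * feat_ch)     # task code -> 1x1x1 kernels

features = torch.randn(1, feat_ch, 16, 16, 16)           # shared backbone features
task = F.one_hot(torch.tensor(3), num_tasks).float()     # e.g. "segment organ #3"

weights = controller(task).view(out_ch, feat_ch, 1, 1, 1)  # dynamic conv filters
logits = F.conv3d(features, weights)                       # task-specific prediction
```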
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- EVC-Net: Multi-scale V-Net with Conditional Random Fields for Brain Extraction [3.4376560669160394]
EVC-Net adds lower-scale inputs at each encoder block.
Conditional Random Fields are re-introduced here as an additional step for refining the network's output.
Results show that even with limited training resources, EVC-Net achieves a higher Dice coefficient and Jaccard index.
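The multi-scale-input idea can be sketched by concatenating progressively downsampled copies of the input to each encoder block; the toy blocks below are assumptions, not the EVC-Net architecture, and the CRF refinement step is omitted.

```python
# Rough sketch: feed lower-scale copies of the input to each encoder block.
import torch
import torch.nn as nn
import torch.nn.functional as F

block1 = nn.Conv3d(1, 8, 3, padding=1)
block2 = nn.Conv3d(8 + 1, 16, 3, padding=1)   # +1 channel for the half-scale input
block3 = nn.Conv3d(16 + 1, 32, 3, padding=1)  # +1 channel for the quarter-scale input

x = torch.randn(1, 1, 64, 64, 64)
f1 = F.relu(block1(x))

f1 = F.max_pool3d(f1, 2)
x_half = F.interpolate(x, scale_factor=0.5, mode="trilinear", align_corners=False)
f2 = F.relu(block2(torch.cat([f1, x_half], dim=1)))

f2 = F.max_pool3d(f2, 2)
x_quarter = F.interpolate(x, scale_factor=0.25, mode="trilinear", align_corners=False)
f3 = F.relu(block3(torch.cat([f2, x_quarter], dim=1)))
```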
arXiv Detail & Related papers (2022-06-06T18:21:21Z)
- UNet#: A UNet-like Redesigning Skip Connections for Medical Image Segmentation [13.767615201220138]
We propose a novel network structure combining dense skip connections and full-scale skip connections, named UNet-sharp (UNet#) because its shape resembles the # symbol.
The proposed UNet# can aggregate feature maps of different scales in the decoder sub-network and capture fine-grained details and coarse-grained semantics from the full scale.
arXiv Detail & Related papers (2022-05-24T03:40:48Z)
- Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z)
- DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets [102.55303521877933]
We propose a dynamic on-demand network (DoDNet) that learns to segment multiple organs and tumors on partially labelled datasets.
DoDNet consists of a shared encoder-decoder architecture, a task encoding module, a controller for generating dynamic convolution filters, and a single but dynamic segmentation head.
arXiv Detail & Related papers (2020-11-20T04:56:39Z)
- Transfer Learning for Brain Tumor Segmentation [0.6408773096179187]
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery.
Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks.
In this work, we construct FCNs with pretrained convolutional encoders. We show that we can stabilize the training process this way and achieve an improvement with respect to Dice scores and Hausdorff distances.
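A pretrained-encoder FCN of the kind described can be sketched with a torchvision backbone; ResNet-18, the three-channel input and the four output classes are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of an FCN-style segmentation model built on a pretrained encoder
# (generic illustration using torchvision's ResNet-18, not the paper's model).
import torch
import torch.nn as nn
import torchvision

# Downloads ImageNet weights on first use; the pretrained encoder is the point here.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
encoder = nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool and fc

decoder = nn.Sequential(
    nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 4, 1),                                     # e.g. 4 tumor sub-region classes
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
)

x = torch.randn(1, 3, 224, 224)    # MRI slices would be mapped to 3 channels first
logits = decoder(encoder(x))        # (1, 4, 224, 224)
```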
arXiv Detail & Related papers (2019-12-28T12:45:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.