MixNet: Multi-modality Mix Network for Brain Segmentation
- URL: http://arxiv.org/abs/2004.09832v1
- Date: Tue, 21 Apr 2020 08:55:55 GMT
- Title: MixNet: Multi-modality Mix Network for Brain Segmentation
- Authors: Long Chen, Dorit Merhof
- Abstract summary: MixNet is a 2D semantic-wise deep convolutional neural network to segment brain structure in MRI images.
MixNetv2 was submitted to the MRBrainS challenge at MICCAI 2018 and won the 3rd place in the 3-label task.
- Score: 8.44876865136712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated brain structure segmentation is important for many clinical
quantitative analyses and diagnoses. In this work, we introduce MixNet, a 2D
semantic-wise deep convolutional neural network that segments brain structures in
multi-modality MRI images. The network is composed of modified deep residual
learning units. In each unit, we replace the traditional convolutional layer with
a dilated convolutional layer, which avoids the use of pooling and deconvolutional
layers and reduces the number of network parameters. Final predictions are made by
aggregating information from multiple scales and modalities. A pyramid pooling
module is used at the output end to capture spatial information of the anatomical
structures. In addition, we test three architectures (MixNetv1, MixNetv2 and
MixNetv3), which fuse the modalities differently, to see the effect on the
results. Our network achieves state-of-the-art performance. MixNetv2 was submitted
to the MRBrainS challenge at MICCAI 2018 and won 3rd place in the 3-label task. On
the MRBrainS2018 dataset, which includes subjects with a variety of pathologies,
overall DSC (Dice coefficient) scores of 84.7% (gray matter), 87.3% (white matter)
and 83.4% (cerebrospinal fluid) were obtained with only 7 subjects as training
data.
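To make the described design concrete, here is a minimal sketch (written in PyTorch, a framework choice assumed here since the abstract does not specify one) of the two ideas above: a residual learning unit whose 3x3 convolutions are dilated, so the receptive field grows without pooling or deconvolutional layers, and a small pyramid pooling module placed before the final per-pixel prediction. Module names, channel widths, dilation rates, the class count, and the simple early fusion of modalities are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedResidualUnit(nn.Module):
    """Residual unit with dilated 3x3 convolutions; the spatial size is preserved."""

    def __init__(self, channels: int, dilation: int):
        super().__init__()
        pad = dilation  # keeps height/width unchanged for a 3x3 kernel
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=pad, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=pad, dilation=dilation)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity (residual) connection


class PyramidPooling(nn.Module):
    """Pools the feature map at several grid sizes and concatenates the upsampled results."""

    def __init__(self, channels: int, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins
        self.reduce = nn.ModuleList(
            nn.Conv2d(channels, channels // len(bins), 1) for _ in bins
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for bin_size, conv in zip(self.bins, self.reduce):
            pooled = F.adaptive_avg_pool2d(x, bin_size)  # coarse spatial context
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)


class MixNetSketch(nn.Module):
    """Toy multi-modality segmenter: early-fused modalities -> dilated residual
    units with growing dilation -> pyramid pooling -> per-pixel class scores.
    Defaults (3 modalities, 4 labels incl. background) are illustrative only."""

    def __init__(self, n_modalities: int = 3, n_classes: int = 4, width: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(n_modalities, width, 3, padding=1)
        self.body = nn.Sequential(
            DilatedResidualUnit(width, dilation=1),
            DilatedResidualUnit(width, dilation=2),
            DilatedResidualUnit(width, dilation=4),
        )
        self.ppm = PyramidPooling(width)
        ppm_out = width + 3 * (width // 3)  # original map plus three pooled branches
        self.head = nn.Conv2d(ppm_out, n_classes, 1)

    def forward(self, x):  # x: (batch, modalities, H, W)
        return self.head(self.ppm(self.body(self.stem(x))))


if __name__ == "__main__":
    scores = MixNetSketch()(torch.randn(1, 3, 128, 128))
    print(scores.shape)  # torch.Size([1, 4, 128, 128]): same spatial size as the input

Because every unit preserves the spatial resolution, the class-score map keeps the height and width of the input slice; this is what lets the dilated design stand in for the usual pooling-plus-deconvolution encoder-decoder pattern that the abstract says is avoided.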
Related papers
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Multi-pooling 3D Convolutional Neural Network for fMRI Classification of Visual Brain States [3.19429184376611]
This paper proposes a multi-pooling 3D convolutional neural network (MP3DCNN) to improve fMRI classification accuracy.
MP3DCNN is mainly composed of a three-layer 3DCNN, where the first and second layers of 3D convolutions each have a branch of pooling connection.
arXiv Detail & Related papers (2023-03-25T07:54:51Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack the 3D sequential constraint.
Existing 3D CT segmentation methods focus on single-scale representations and do not achieve multiple receptive field sizes on the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- UNet#: A UNet-like Redesigning Skip Connections for Medical Image Segmentation [13.767615201220138]
We propose a novel network structure combining dense skip connections and full-scale skip connections, named UNet-sharp (UNet#) because its shape resembles the symbol #.
The proposed UNet# can aggregate feature maps of different scales in the decoder sub-network and capture fine-grained details and coarse-grained semantics from the full scale.
arXiv Detail & Related papers (2022-05-24T03:40:48Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation [17.756591105686]
This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorization of 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BRATS 2020 testing set show that the Dice (DSC) scores achieved by our proposed approach for ET, WT, and TC are 0.79457, 0.87494, and 0.83712, respectively.
arXiv Detail & Related papers (2020-12-12T09:09:04Z)
- DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets [102.55303521877933]
We propose a dynamic on-demand network (DoDNet) that learns to segment multiple organs and tumors on partially labelled datasets.
DoDNet consists of a shared encoder-decoder architecture, a task encoding module, a controller for generating dynamic convolution filters, and a single but dynamic segmentation head.
arXiv Detail & Related papers (2020-11-20T04:56:39Z)
- Multi-modal segmentation of 3D brain scans using neural networks [0.0]
Deep convolutional neural networks are trained to segment 3D MRI (MPRAGE, DWI, FLAIR) and CT scans.
Segmentation quality is quantified using the Dice metric for a total of 27 anatomical structures.
arXiv Detail & Related papers (2020-08-11T09:13:54Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- DeepSeg: Deep Neural Network Framework for Automatic Brain Tumor Segmentation using Magnetic Resonance FLAIR Images [0.0]
Gliomas are the most common and aggressive type of brain tumors.
Fluid-Attenuated Inversion Recovery (FLAIR) MRI can provide the physician with information about tumor infiltration.
This paper proposes a new generic deep learning architecture, namely DeepSeg, for fully automated detection and segmentation of brain lesions.
arXiv Detail & Related papers (2020-04-26T09:50:02Z)