A Multi-View Dynamic Fusion Framework: How to Improve the Multimodal
Brain Tumor Segmentation from Multi-Views?
- URL: http://arxiv.org/abs/2012.11211v1
- Date: Mon, 21 Dec 2020 09:45:23 GMT
- Title: A Multi-View Dynamic Fusion Framework: How to Improve the Multimodal
Brain Tumor Segmentation from Multi-Views?
- Authors: Yi Ding, Wei Zheng, Guozheng Wu, Ji Geng, Mingsheng Cao, Zhiguang Qin
- Abstract summary: This paper proposes a multi-view dynamic fusion framework to improve the performance of brain tumor segmentation.
Evaluations on BRATS 2015 and BRATS 2018 show that the fused results from multiple views achieve better performance than the segmentation result from any single view.
- Score: 5.793853101758628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When diagnosing a brain tumor, doctors usually examine multimodal
brain images from the axial view, the coronal view and the sagittal view, and
then make a comprehensive decision based on the information obtained from
these multiple views. Inspired by this diagnostic process, and in order to
further exploit the 3D information hidden in the dataset, this paper proposes
a multi-view dynamic fusion framework to improve the performance of brain
tumor segmentation. The proposed framework consists of 1) a multi-view deep
neural network architecture, comprising multiple learning networks that
segment the brain tumor from different views, where each network takes the
multi-modal brain images from one single view as input, and 2) a dynamic
decision fusion method, which fuses the segmentation results from the multiple
views into an integrated one; two fusion strategies, the voting method and the
weighted averaging method, are adopted to evaluate the fusing process.
Moreover, a multi-view fusion loss, consisting of a segmentation loss, a
transition loss and a decision loss, is proposed to facilitate the training of
the multi-view learning networks and to keep the consistency of appearance and
space, not only when fusing segmentation results but also when training the
learning networks.
By evaluating the proposed framework on BRATS 2015 and BRATS 2018, it can be
found that the fused results from multiple views achieve better performance
than the segmentation result from a single view, and the effectiveness of the
proposed multi-view fusion loss is also demonstrated. Moreover, the proposed
framework achieves better segmentation performance and higher efficiency
compared to counterpart methods.
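As a rough illustration of the dynamic decision fusion step described in the abstract, the sketch below fuses per-view segmentation probability maps either by majority voting or by weighted averaging. It is a minimal NumPy sketch under assumed conventions: the function name fuse_views, the view_weights argument, and the assumption that all per-view probability maps have already been resampled onto a common volume grid are illustrative choices, not the authors' implementation.

```python
import numpy as np

def fuse_views(prob_maps, method="weighted", view_weights=None):
    """Fuse per-view segmentation probabilities into one 3D label map.

    prob_maps: list of arrays, each of shape (C, D, H, W) -- class
        probabilities predicted from one view (axial/coronal/sagittal),
        assumed to be resampled onto a common volume grid.
    method: "voting" (majority vote over per-view hard labels) or
        "weighted" (weighted average of the probability maps).
    view_weights: optional per-view weights for the "weighted" method.
    """
    probs = np.stack(prob_maps, axis=0)            # (V, C, D, H, W)
    if method == "voting":
        labels = probs.argmax(axis=1)              # (V, D, H, W) hard labels per view
        n_classes = probs.shape[1]
        votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
        return votes.argmax(axis=0)                # majority-vote label map
    if view_weights is None:
        view_weights = np.ones(probs.shape[0]) / probs.shape[0]
    w = np.asarray(view_weights, dtype=float).reshape(-1, 1, 1, 1, 1)
    fused = (w * probs).sum(axis=0)                # weighted-average probabilities
    return fused.argmax(axis=0)                    # final label map

# Toy usage: three views, 4 classes, tiny 8x8x8 volume.
rng = np.random.default_rng(0)
views = [rng.dirichlet(np.ones(4), size=(8, 8, 8)).transpose(3, 0, 1, 2)
         for _ in range(3)]
print(fuse_views(views, method="voting").shape)                        # (8, 8, 8)
print(fuse_views(views, method="weighted", view_weights=[0.5, 0.3, 0.2]).shape)
```

The multi-view fusion loss mentioned in the abstract (segmentation, transition and decision terms) would be applied while training the per-view networks and the fusion stage; its exact form is not given in the abstract, so it is not sketched here.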
Related papers
- Deep Multimodal Fusion of Data with Heterogeneous Dimensionality via
Projective Networks [4.933439602197885]
We propose a novel deep learning-based framework for the fusion of multimodal data with heterogeneous dimensionality (e.g., 3D+2D)
The framework was validated on the following tasks: segmentation of geographic atrophy (GA), a late-stage manifestation of age-related macular degeneration, and segmentation of retinal blood vessels (RBV) in multimodal retinal imaging.
Our results show that the proposed method outperforms the state-of-the-art monomodal methods on GA and RBV segmentation by up to 3.10% and 4.64% Dice, respectively.
arXiv Detail & Related papers (2024-02-02T11:03:33Z)
- Scale-aware Super-resolution Network with Dual Affinity Learning for Lesion Segmentation from Medical Images [50.76668288066681]
We present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images.
Our proposed network achieved consistent improvements compared to other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-30T14:25:55Z)
- Multiclass MRI Brain Tumor Segmentation using 3D Attention-based U-Net [0.0]
This paper proposes a 3D attention-based U-Net architecture for multi-region segmentation of brain tumors.
The attention mechanism helps to improve segmentation accuracy by de-emphasizing healthy tissues and accentuating malignant tissues.
arXiv Detail & Related papers (2023-05-10T14:35:07Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Evidence fusion with contextual discounting for multi-modality medical image segmentation [22.77837744216949]
The framework is composed of an encoder-decoder feature extraction module, an evidential segmentation module that computes a belief function at each voxel for each modality, and a multi-modality evidence fusion module.
The method was evaluated on the BraTS 2021 database of 1251 patients with brain tumors.
arXiv Detail & Related papers (2022-06-23T14:36:50Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Representation Disentanglement for Multi-modal MR Analysis [15.498244253687337]
Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) representations from the images.
We propose a margin loss that regularizes the similarity relationships of the representations across subjects and modalities.
To enable a robust training, we introduce a modified conditional convolution to design a single model for encoding images of all modalities.
arXiv Detail & Related papers (2021-02-23T02:08:38Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.