Evidence fusion with contextual discounting for multi-modality medical image segmentation
- URL: http://arxiv.org/abs/2206.11739v2
- Date: Mon, 27 Jun 2022 09:04:07 GMT
- Title: Evidence fusion with contextual discounting for multi-modality medical image segmentation
- Authors: Ling Huang, Thierry Denoeux, Pierre Vera, Su Ruan
- Abstract summary: The framework is composed of an encoder-decoder feature extraction module, an evidential segmentation module that computes a belief function at each voxel for each modality, and a multi-modality evidence fusion module.
The method was evaluated on the BraTS 2021 database of 1251 patients with brain tumors.
- Score: 22.77837744216949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As information sources are usually imperfect, it is necessary to take into
account their reliability in multi-source information fusion tasks. In this
paper, we propose a new deep framework allowing us to merge multi-MR image
segmentation results using the formalism of Dempster-Shafer theory while taking
into account the reliability of different modalities relative to different
classes. The framework is composed of an encoder-decoder feature extraction
module, an evidential segmentation module that computes a belief function at
each voxel for each modality, and a multi-modality evidence fusion module,
which assigns a vector of discount rates to each modality evidence and combines
the discounted evidence using Dempster's rule. The whole framework is trained
by minimizing a new loss function based on a discounted Dice index to increase
segmentation accuracy and reliability. The method was evaluated on the BraTS
2021 database of 1251 patients with brain tumors. Quantitative and qualitative
results show that our method outperforms the state of the art and implements
an effective new idea for merging multi-source information within deep neural
networks.
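To make the fusion step concrete, the sketch below works at the level of contour functions (per-class plausibilities) that an evidential segmentation module could output at each voxel: each modality's plausibilities are discounted with a per-class reliability vector, Dempster's rule is applied by taking the product of the discounted plausibilities across modalities and normalizing over classes, and a soft Dice loss is computed on the fused output. Function names, array shapes, the reliability parametrization of `beta`, and the simplified Dice term are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of per-voxel evidence discounting and fusion with Dempster's rule,
# expressed on contour functions (plausibilities of singleton classes).
# All names, shapes, and the exact discounting parametrization are assumptions.
import numpy as np

def discount_contour(pl, beta):
    """Contextually discount per-class plausibilities.

    pl   : (..., K) plausibility of each of the K classes, in [0, 1]
    beta : (K,) reliability coefficient per class (1 = fully reliable,
           0 = vacuous evidence); using a per-class vector rather than a
           single rate is what distinguishes contextual from classical
           discounting.
    """
    return 1.0 - beta * (1.0 - pl)

def dempster_fuse(pl_per_modality):
    """Combine discounted evidence from several modalities.

    For singleton classes, the contour function of Dempster's combination is
    proportional to the product of the individual contour functions; here the
    product is normalized over classes to give per-class scores.
    pl_per_modality : list of (..., K) arrays, one per modality.
    """
    fused = np.ones_like(pl_per_modality[0])
    for pl in pl_per_modality:
        fused = fused * pl
    return fused / np.clip(fused.sum(axis=-1, keepdims=True), 1e-12, None)

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on the fused per-class scores (a simplified stand-in for
    the paper's discounted Dice loss)."""
    reduce_axes = tuple(range(pred.ndim - 1))          # sum over all voxels
    inter = (pred * target).sum(axis=reduce_axes)
    denom = pred.sum(axis=reduce_axes) + target.sum(axis=reduce_axes)
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

# Toy usage: 2 modalities, a 4x4x4 volume, K = 3 classes.
rng = np.random.default_rng(0)
pl_t1 = rng.uniform(size=(4, 4, 4, 3))      # plausibilities from modality 1 (e.g. T1)
pl_flair = rng.uniform(size=(4, 4, 4, 3))   # plausibilities from modality 2 (e.g. FLAIR)
beta_t1 = np.array([0.9, 0.6, 0.8])         # illustrative per-class reliabilities
beta_flair = np.array([0.5, 0.95, 0.7])

fused = dempster_fuse([discount_contour(pl_t1, beta_t1),
                       discount_contour(pl_flair, beta_flair)])
target = np.eye(3)[rng.integers(0, 3, size=(4, 4, 4))]  # one-hot toy ground truth
print(soft_dice_loss(fused, target))
```

In this sketch, a `beta` entry near 1 means the modality is treated as reliable for that class, while a `beta` entry of 0 turns its evidence for that class into complete ignorance, so the remaining modalities dominate the fused result for that class.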
Related papers
- Application of Multimodal Fusion Deep Learning Model in Disease Recognition [14.655086303102575]
This paper introduces an innovative multi-modal fusion deep learning approach to overcome the drawbacks of traditional single-modal recognition techniques.
During the feature extraction stage, cutting-edge deep learning models are applied to distill advanced features from image-based, temporal, and structured data sources.
The findings demonstrate significant advantages of the multimodal fusion model across multiple evaluation metrics.
arXiv Detail & Related papers (2024-05-22T23:09:49Z)
- A Multimodal Feature Distillation with CNN-Transformer Network for Brain Tumor Segmentation with Incomplete Modalities [15.841483814265592]
We propose a Multimodal feature distillation with Convolutional Neural Network (CNN)-Transformer hybrid network (MCTSeg) for accurate brain tumor segmentation with missing modalities.
Our ablation study demonstrates the importance of the proposed CNN-Transformer modules and of the convolutional blocks within the Transformer for improving brain tumor segmentation with missing modalities.
arXiv Detail & Related papers (2024-04-22T09:33:44Z)
- Exploiting Partial Common Information Microstructure for Multi-Modal Brain Tumor Segmentation [11.583406152227637]
Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data.
Existing approaches are oblivious to partial common information shared by subsets of the modalities.
In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models.
arXiv Detail & Related papers (2023-02-06T01:28:52Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- A Multi-View Dynamic Fusion Framework: How to Improve the Multimodal Brain Tumor Segmentation from Multi-Views? [5.793853101758628]
This paper proposes a multi-view dynamic fusion framework to improve the performance of brain tumor segmentation.
Evaluations on BraTS 2015 and BraTS 2018 show that fusing results from multiple views achieves better performance than segmentation from a single view.
arXiv Detail & Related papers (2020-12-21T09:45:23Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BraTS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI, as sketched in the code example after this list.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
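As a concrete illustration of the parameter-sharing scheme mentioned in the last entry above (Unpaired Multi-modal Segmentation via Knowledge Distillation), the sketch below reuses one set of convolutional kernels for both CT and MRI inputs. The modality-specific normalization layers, the class name SharedConvBlock, and all sizes are illustrative assumptions, not that paper's reference implementation.

```python
# Sketch of cross-modality parameter sharing: one set of convolutional kernels
# is reused for both CT and MRI, while each modality keeps its own normalization
# statistics. The per-modality BatchNorm choice is an assumption for illustration.
import torch
import torch.nn as nn

class SharedConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Convolution kernels shared by both modalities.
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        # One normalization layer per modality (not shared).
        self.norm = nn.ModuleDict({
            "ct": nn.BatchNorm3d(out_ch),
            "mri": nn.BatchNorm3d(out_ch),
        })
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, modality):
        return self.act(self.norm[modality](self.conv(x)))

block = SharedConvBlock(1, 16)
ct = torch.randn(2, 1, 32, 32, 32)    # toy CT batch
mri = torch.randn(2, 1, 32, 32, 32)   # toy MRI batch
print(block(ct, "ct").shape, block(mri, "mri").shape)
```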