Brain Tumor Segmentation Network Using Attention-based Fusion and
Spatial Relationship Constraint
- URL: http://arxiv.org/abs/2010.15647v2
- Date: Sat, 31 Oct 2020 07:34:53 GMT
- Title: Brain Tumor Segmentation Network Using Attention-based Fusion and
Spatial Relationship Constraint
- Authors: Chenyu Liu, Wangbin Ding, Lei Li, Zhen Zhang, Chenhao Pei, Liqin
Huang, Xiahai Zhuang
- Abstract summary: We develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images.
We evaluate our method on the test set of the multi-modal brain tumor segmentation challenge 2020 (BraTS 2020).
- Score: 19.094164029068462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Delineating the brain tumor from magnetic resonance (MR) images is critical
for the treatment of gliomas. However, automatic delineation is challenging due
to the complex appearance and ambiguous outlines of tumors. Considering that
multi-modal MR images can reflect different tumor biological properties, we
develop a novel multi-modal tumor segmentation network (MMTSN) to robustly
segment brain tumors based on multi-modal MR images. The MMTSN is composed of
three sub-branches and a main branch. Specifically, the sub-branches are used
to capture different tumor features from multi-modal images, while in the main
branch, we design a spatial-channel fusion block (SCFB) to effectively
aggregate multi-modal features. Additionally, inspired by the fact that the
spatial relationship between sub-regions of tumor is relatively fixed, e.g.,
the enhancing tumor is always in the tumor core, we propose a spatial loss to
constrain the relationship between different sub-regions of tumor. We evaluate
our method on the test set of the multi-modal brain tumor segmentation challenge
2020 (BraTS 2020). The method achieves Dice scores of 0.8764, 0.8243, and 0.773 for
whole tumor, tumor core, and enhancing tumor, respectively.
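The abstract does not give the exact form of the spatial loss or the Dice evaluation; a minimal sketch of how such a containment constraint (enhancing tumor inside tumor core) and a Dice score could be expressed, assuming soft per-voxel probability maps and binary masks (all names here are hypothetical, not from the paper), is:

```python
import numpy as np

def spatial_constraint_loss(p_et: np.ndarray, p_tc: np.ndarray) -> float:
    """Hypothetical spatial loss: penalize enhancing-tumor (ET) probability
    assigned to voxels the network does not consider tumor core (TC).

    p_et, p_tc -- per-voxel probabilities in [0, 1], same shape.
    Returns 0 when every ET voxel is fully covered by TC.
    """
    return float(np.mean(p_et * (1.0 - p_tc)))

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Standard Dice overlap: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum()))
```

With perfect containment (`p_tc` is 1 wherever `p_et` is nonzero) the constraint term vanishes; ET probability leaking outside the predicted core is penalized in proportion to how far outside it falls. This is only one plausible instantiation of the constraint described in the abstract.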
Related papers
- Towards Generalizable Tumor Synthesis [48.45704270448412]
Tumor synthesis enables the creation of artificial tumors in medical images, facilitating the training of AI models for tumor detection and segmentation.
This paper made a progressive stride toward generalizable tumor synthesis by leveraging a critical observation.
We have ascertained that generative AI models, e.g., Diffusion Models, can create realistic tumors generalized to a range of organs even when trained on a limited number of tumor examples from only one organ.
arXiv Detail & Related papers (2024-02-29T18:57:39Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide, and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Glioblastoma Tumor Segmentation using an Ensemble of Vision Transformers [0.0]
Glioblastoma is one of the most aggressive and deadliest types of brain cancer.
Brain Radiology Aided by Intelligent Neural NETworks (BRAINNET) generates robust tumor segmentation masks.
arXiv Detail & Related papers (2023-11-09T18:55:27Z)
- Prediction of brain tumor recurrence location based on multi-modal fusion and nonlinear correlation learning [55.789874096142285]
We present a deep learning-based brain tumor recurrence location prediction network.
We first train a multi-modal brain tumor segmentation network on the public dataset BraTS 2021.
Then, the pre-trained encoder is transferred to our private dataset for extracting the rich semantic features.
Two decoders are constructed to jointly segment the present brain tumor and predict its future tumor recurrence location.
arXiv Detail & Related papers (2023-04-11T02:45:38Z)
- Feature-enhanced Generation and Multi-modality Fusion based Deep Neural Network for Brain Tumor Segmentation with Missing MR Modalities [2.867517731896504]
The main problem is that not all MRI modalities are always available in clinical exams.
We propose a novel brain tumor segmentation network in the case of missing one or more modalities.
The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block and a segmentation network.
arXiv Detail & Related papers (2021-11-08T10:59:40Z)
- Learn-Morph-Infer: a new way of solving the inverse problem for brain tumor modeling [1.1214822628210914]
We introduce a methodology for inferring patient-specific spatial distribution of brain tumor from T1Gd and FLAIR MRI medical scans.
Coined Learn-Morph-Infer, the method achieves real-time performance on the order of minutes on widely available hardware.
arXiv Detail & Related papers (2021-11-07T13:45:35Z)
- Dilated Inception U-Net (DIU-Net) for Brain Tumor Segmentation [0.9176056742068814]
We propose a new end-to-end brain tumor segmentation architecture based on U-Net.
Our proposed model performed significantly better than the state-of-the-art U-Net-based model for tumor core and whole tumor segmentation.
arXiv Detail & Related papers (2021-08-15T16:04:09Z)
- Triplet Contrastive Learning for Brain Tumor Classification [99.07846518148494]
We present a novel approach of directly learning deep embeddings for brain tumor types, which can be used for downstream tasks such as classification.
We evaluate our method on an extensive brain tumor dataset which consists of 27 different tumor classes, out of which 13 are defined as rare.
arXiv Detail & Related papers (2021-08-08T11:26:34Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won the second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data [2.2515303891664358]
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches.
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
arXiv Detail & Related papers (2020-08-28T09:15:42Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors with different size.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.