Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data
- URL: http://arxiv.org/abs/2008.12544v2
- Date: Thu, 24 Sep 2020 09:10:17 GMT
- Title: Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data
- Authors: Theresa Neubauer, Maria Wimmer, Astrid Berg, David Major, Dimitrios
Lenis, Thomas Beyer, Jelena Saponjski, Katja Bühler
- Abstract summary: Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches.
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
- Score: 2.2515303891664358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tumor segmentation in multimodal medical images has seen a growing trend
towards deep learning based methods. Typically, studies dealing with this topic
fuse multimodal image data to improve the tumor segmentation contour for a
single imaging modality. However, they do not take into account that tumor
characteristics are emphasized differently by each modality, which affects the
tumor delineation. Thus, the tumor segmentation is modality- and
task-dependent. This is especially the case for soft tissue sarcomas, where,
due to necrotic tumor tissue, the segmentation differs vastly. Closing this
gap, we develop a modality-specific sarcoma segmentation model that utilizes
multimodal image data to improve the tumor delineation on each individual
modality. We propose a simultaneous co-segmentation method, which enables
multimodal feature learning through modality-specific encoder and decoder
branches, and the use of resource-efficient densely connected convolutional
layers. We further conduct experiments to analyze how different input
modalities and encoder-decoder fusion strategies affect the segmentation
result. We demonstrate the effectiveness of our approach on public soft tissue
sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans. The
results show that our multimodal co-segmentation model provides better
modality-specific tumor segmentation than models using only the PET or MRI (T1
and T2) scan as input.
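The encoder-decoder layout described in the abstract can be illustrated with a minimal numpy sketch. This is a hypothetical stand-in, not the paper's actual network: `conv_like` replaces real convolutional blocks with a channel-mixing linear map, and the shapes, channel counts, and weight initialization are all illustrative assumptions. It shows only the data flow: modality-specific encoders, one shared fused representation, and modality-specific decoder heads that each produce their own tumor mask.

```python
import numpy as np

def conv_like(x, w):
    # Stand-in for a convolutional block: a 1x1 channel-mixing linear
    # map followed by ReLU (illustrates shapes only, not a real conv).
    return np.maximum(x @ w, 0.0)

def cosegment(mri, pet, rng):
    # Hypothetical co-segmentation sketch: modality-specific encoders,
    # a shared fused representation, and modality-specific decoders.
    f_mri = conv_like(mri, rng.standard_normal((mri.shape[-1], 8)))
    f_pet = conv_like(pet, rng.standard_normal((pet.shape[-1], 8)))
    # Concatenation makes both branches' features available to each
    # decoder head, echoing the dense feature-reuse idea.
    fused = np.concatenate([f_mri, f_pet], axis=-1)
    # Separate sigmoid heads yield one tumor mask per modality.
    mask_mri = 1.0 / (1.0 + np.exp(-(fused @ rng.standard_normal((16, 1)))))
    mask_pet = 1.0 / (1.0 + np.exp(-(fused @ rng.standard_normal((16, 1)))))
    return mask_mri, mask_pet

rng = np.random.default_rng(0)
mri = rng.standard_normal((32, 32, 2))   # T1 and T2 stacked as channels
pet = rng.standard_normal((32, 32, 1))
mask_mri, mask_pet = cosegment(mri, pet, rng)
```

The key design point the sketch captures is that both masks are decoded from the same fused multimodal features, so each modality-specific contour can benefit from information in the other modality.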
Related papers
- Unified HT-CNNs Architecture: Transfer Learning for Segmenting Diverse Brain Tumors in MRI from Gliomas to Pediatric Tumors [2.104687387907779]
We introduce HT-CNNs, an ensemble of Hybrid Transformers and Convolutional Neural Networks optimized through transfer learning for varied brain tumor segmentation.
This method captures spatial and contextual details from MRI data, fine-tuned on diverse datasets representing common tumor types.
Our findings underscore the potential of transfer learning and ensemble approaches in medical image segmentation, indicating a substantial enhancement in clinical decision-making and patient care.
arXiv Detail & Related papers (2024-12-11T09:52:01Z)
- Enhanced MRI Representation via Cross-series Masking [48.09478307927716]
A Cross-Series Masking (CSM) strategy is proposed for effectively learning MRI representations in a self-supervised manner.
The method achieves state-of-the-art performance on both public and in-house datasets.
arXiv Detail & Related papers (2024-12-10T10:32:09Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Automated ensemble method for pediatric brain tumor segmentation [0.0]
This study introduces a novel ensemble approach using ONet and modified versions of UNet.
Data augmentation ensures robustness and accuracy across different scanning protocols.
Results indicate that this advanced ensemble approach offers promising prospects for enhanced diagnostic accuracy.
arXiv Detail & Related papers (2023-08-14T15:29:32Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE to brain tumor segmentation, where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Latent Correlation Representation Learning for Brain Tumor Segmentation with Missing MRI Modalities [2.867517731896504]
Accurately segmenting brain tumor from MR images is the key to clinical diagnostics and treatment planning.
In clinical practice, it is common for some imaging modalities to be missing.
We present a novel brain tumor segmentation algorithm with missing modalities.
arXiv Detail & Related papers (2021-04-13T14:21:09Z)
- Brain Tumor Segmentation Network Using Attention-based Fusion and Spatial Relationship Constraint [19.094164029068462]
We develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images.
We evaluate our method on the test set of the multi-modal brain tumor segmentation challenge 2020 (BraTS 2020).
arXiv Detail & Related papers (2020-10-29T14:51:10Z)
- Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation [11.622615048002567]
Multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
arXiv Detail & Related papers (2020-07-29T10:27:22Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
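Several of the related papers above use one modality to guide the segmentation of another, e.g. the multimodal spatial attention module (MSAM) for PET-CT. The following numpy sketch is a simplified, hypothetical stand-in for that idea, not the published module: the attention map, feature shapes, and sigmoid gating here are illustrative assumptions. It shows the core mechanism only: a spatial map derived from the PET branch re-weights the CT features at every location.

```python
import numpy as np

def msam_like(pet_feat, ct_feat):
    # Hypothetical spatial-attention sketch: collapse the PET feature
    # channels into one map per location, squash it to (0, 1) with a
    # sigmoid, and use it to gate the CT features. Regions that are
    # salient in PET (e.g. tracer-avid tumor tissue) are emphasized.
    attn = 1.0 / (1.0 + np.exp(-pet_feat.mean(axis=-1, keepdims=True)))  # (H, W, 1)
    return ct_feat * attn  # broadcast over channels

rng = np.random.default_rng(1)
pet_feat = rng.standard_normal((16, 16, 4))
ct_feat = rng.standard_normal((16, 16, 4))
out = msam_like(pet_feat, ct_feat)
```

In the published modules this gating map is learned end-to-end within a backbone network; the sketch fixes it to a channel mean purely so the data flow is visible.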
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.