Multi-Decoder Networks with Multi-Denoising Inputs for Tumor
Segmentation
- URL: http://arxiv.org/abs/2012.03684v1
- Date: Mon, 16 Nov 2020 12:58:03 GMT
- Title: Multi-Decoder Networks with Multi-Denoising Inputs for Tumor
Segmentation
- Authors: Minh H. Vu, Tufve Nyholm, and Tommy Löfstedt
- Abstract summary: We develop an end-to-end deep-learning-based segmentation method using a multi-decoder architecture.
We also propose to apply smoothing methods to the input images to generate denoised versions as additional inputs to the network.
- Score: 2.0625936401496237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic segmentation of brain glioma from multimodal MRI scans plays a key
role in clinical trials and practice. Unfortunately, manual segmentation is
very challenging, time-consuming, costly, and often inaccurate despite human
expertise due to the high variance and high uncertainty in the human
annotations. In the present work, we develop an end-to-end deep-learning-based
segmentation method using a multi-decoder architecture by jointly learning
three separate sub-problems using a partly shared encoder. We also propose to
apply smoothing methods to the input images to generate denoised versions as
additional inputs to the network. The validation performance indicates an
improvement when using the proposed method. The proposed method was ranked 2nd
in the task of Quantification of Uncertainty in Segmentation in the Brain
Tumors in Multimodal Magnetic Resonance Imaging Challenge 2020.
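The abstract's multi-denoising-input idea can be illustrated with a minimal sketch: denoised copies of each input modality are generated and stacked alongside the raw volumes as extra input channels. The paper's abstract does not specify which smoothing methods are used, so Gaussian filtering and the sigma values below are illustrative assumptions, not the authors' settings.

```python
# Sketch of multi-denoising inputs: raw MRI modalities plus smoothed
# (denoised) copies, stacked along the channel axis as network input.
# Gaussian smoothing is one plausible choice of denoiser; the sigma
# values are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def build_multi_denoised_input(modalities, sigmas=(0.5, 1.0)):
    """Stack raw modalities with denoised versions along the channel axis.

    modalities: list of 3D arrays (one per MRI sequence, e.g. T1, T1ce, T2, FLAIR)
    sigmas: smoothing strengths used to create the extra denoised inputs
    """
    channels = list(modalities)
    for sigma in sigmas:
        channels.extend(gaussian_filter(m, sigma=sigma) for m in modalities)
    # Result shape: (num_modalities * (1 + len(sigmas)), D, H, W)
    return np.stack(channels, axis=0)

# Example: four modalities and two smoothing levels give 12 input channels.
vols = [np.random.rand(8, 8, 8).astype(np.float32) for _ in range(4)]
x = build_multi_denoised_input(vols)
print(x.shape)  # (12, 8, 8, 8)
```

The stacked tensor would then be fed to the partly shared encoder; the three decoders for the separate sub-problems are not sketched here.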
Related papers
- MultiMAE for Brain MRIs: Robustness to Missing Inputs Using Multi-Modal Masked Autoencoder [18.774351784192266]
Missing input sequences are common in medical imaging data, posing a challenge for deep learning models reliant on complete input data.
We develop a masked autoencoder (MAE) paradigm for multi-modal, multi-task learning in 3D medical imaging with brain MRIs.
arXiv Detail & Related papers (2025-09-14T21:33:59Z)
- MulModSeg: Enhancing Unpaired Multi-Modal Medical Image Segmentation with Modality-Conditioned Text Embedding and Alternating Training [10.558275557142137]
We propose a simple Multi-Modal (MulModSeg) strategy to enhance medical image segmentation across multiple modalities.
MulModSeg incorporates a modality-conditioned text embedding framework via a frozen text encoder.
It consistently outperforms previous methods in segmenting abdominal multiorgan and cardiac substructures for both CT and MR.
arXiv Detail & Related papers (2024-11-23T14:37:01Z)
- Unsupervised Segmentation of Fetal Brain MRI using Deep Learning Cascaded Registration [2.494736313545503]
Traditional deep learning-based automatic segmentation requires extensive training data with ground-truth labels.
We propose a novel method based on multi-atlas segmentation, that accurately segments multiple tissues without relying on labeled data for training.
Our method employs a cascaded deep learning network for 3D image registration, which computes small, incremental deformations to the moving image to align it precisely with the fixed image.
arXiv Detail & Related papers (2023-07-07T13:17:12Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on a dataset collected from a local hospital, as well as on public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- Duo-SegNet: Adversarial Dual-Views for Semi-Supervised Medical Image Segmentation [14.535295064959746]
We propose a semi-supervised image segmentation technique based on the concept of multi-view learning.
Our proposed method outperforms state-of-the-art medical image segmentation algorithms consistently and comfortably.
arXiv Detail & Related papers (2021-08-25T10:16:12Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE to brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
- Transfer Learning for Brain Tumor Segmentation [0.6408773096179187]
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery.
Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks.
In this work, we construct FCNs with pretrained convolutional encoders. We show that this stabilizes the training process and yields improvements in Dice scores and Hausdorff distances.
arXiv Detail & Related papers (2019-12-28T12:45:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.