A review: Deep learning for medical image segmentation using
multi-modality fusion
- URL: http://arxiv.org/abs/2004.10664v2
- Date: Thu, 16 Jul 2020 15:33:31 GMT
- Title: A review: Deep learning for medical image segmentation using
multi-modality fusion
- Authors: Tongxue Zhou, Su Ruan, Stéphane Canu
- Abstract summary: Multi-modality is widely used in medical imaging because it can provide rich information about a target.
Deep learning-based approaches have presented the state-of-the-art performance in image classification, segmentation, object detection and tracking tasks.
In this paper, we give an overview of deep learning-based approaches for multi-modal medical image segmentation task.
- Score: 4.4259821861544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modality is widely used in medical imaging because it can
provide rich information about a target (tumor, organ or tissue). Multi-modal
segmentation consists of fusing this information to improve the segmentation.
Recently, deep learning-based approaches have presented the state-of-the-art
performance in image classification, segmentation, object detection and
tracking tasks. Due to their self-learning and generalization ability over
large amounts of data, deep learning recently has also gained great interest in
multi-modal medical image segmentation. In this paper, we give an overview of
deep learning-based approaches for multi-modal medical image segmentation task.
Firstly, we introduce the general principle of deep learning and multi-modal
medical image segmentation. Secondly, we present different deep learning
network architectures, then analyze their fusion strategies and compare their
results. Early fusion is commonly used because it is simple and lets the work
focus on the subsequent segmentation network architecture. Late fusion, in
contrast, puts more emphasis on the fusion strategy itself, in order to learn
the complex relationships between modalities. In general, late fusion can give
more accurate results than early fusion, provided the fusion method is
effective enough. We also discuss some common problems in medical image
segmentation. Finally, we summarize and provide some perspectives on future
research.
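The early/late fusion distinction discussed in the abstract can be sketched as follows. This is a minimal NumPy illustration with toy one-layer encoders and averaging as the late-fusion rule; it is not code from the reviewed papers, and real systems would use convolutional networks on image volumes.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    """A toy one-layer 'network': linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

# Two imaging modalities for the same subject (e.g. T1 and T2 MRI),
# flattened to 16-dim vectors purely for illustration.
t1 = rng.normal(size=(1, 16))
t2 = rng.normal(size=(1, 16))

# Early fusion: concatenate the raw modalities, then feed one shared network.
# The fusion step is trivial; the modeling effort goes into the network.
w_early = rng.normal(size=(32, 8))
fused_early = encoder(np.concatenate([t1, t2], axis=1), w_early)

# Late fusion: one encoder per modality, then fuse the learned features.
# Here the fusion rule is a simple average; the review's point is that the
# choice of this rule is what late fusion concentrates on.
w_t1 = rng.normal(size=(16, 8))
w_t2 = rng.normal(size=(16, 8))
fused_late = (encoder(t1, w_t1) + encoder(t2, w_t2)) / 2.0

print(fused_early.shape, fused_late.shape)  # both (1, 8)
```

Either fused representation would then be passed to a segmentation head; the trade-off the review describes is simplicity (early) versus an explicit, learnable cross-modality fusion step (late).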
Related papers
- Fuse4Seg: Image-Level Fusion Based Multi-Modality Medical Image Segmentation [13.497613339200184]
We argue the current feature-level fusion strategy is prone to semantic inconsistencies and misalignments.
We introduce a novel image-level fusion based multi-modality medical image segmentation method, Fuse4Seg.
The resultant fused image is a coherent representation that accurately amalgamates information from all modalities.
arXiv Detail & Related papers (2024-09-16T14:39:04Z) - A review of deep learning-based information fusion techniques for multimodal medical image classification [1.996181818659251]
Deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification.
This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks.
arXiv Detail & Related papers (2024-04-23T13:31:18Z) - Medical Image Analysis using Deep Relational Learning [1.8465474345655504]
We propose a context-aware fully convolutional network that effectively models implicit relation information between features to perform medical image segmentation.
We then propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames.
arXiv Detail & Related papers (2023-03-28T16:10:12Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical
Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$2$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - An Attention-based Multi-Scale Feature Learning Network for Multimodal
Medical Image Fusion [24.415389503712596]
Multimodal medical images can provide rich information about patients to support physicians' diagnoses.
The image fusion technique is able to synthesize complementary information from multimodal images into a single image.
We introduce a novel Dilated Residual Attention Network for the medical image fusion task.
arXiv Detail & Related papers (2022-12-09T04:19:43Z) - Multimodal Information Fusion for Glaucoma and DR Classification [1.5616442980374279]
Multimodal information is frequently available in medical tasks. By combining information from multiple sources, clinicians are able to make more accurate judgments.
Our paper investigates three multimodal information fusion strategies based on deep learning to solve retinal analysis tasks.
arXiv Detail & Related papers (2022-09-02T12:19:03Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z) - Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.