Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention
and Dynamic Resampling
- URL: http://arxiv.org/abs/2009.02569v1
- Date: Sat, 5 Sep 2020 17:24:23 GMT
- Title: Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention
and Dynamic Resampling
- Authors: Haochuan Jiang, Chengjia Wang, Agisilaos Chartsias, Sotirios A.
Tsaftaris
- Abstract summary: The performance of such algorithms is strongly affected by how well the multi-modal information is fused.
We present the Max-Fusion U-Net that achieves improved pathology segmentation performance.
We evaluate our methods using the Myocardial pathology segmentation (MyoPS) combining the multi-sequence CMR dataset.
- Score: 13.542898009730804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic segmentation of multi-sequence (multi-modal) cardiac MR (CMR)
images plays a significant role in diagnosis and management for a variety of
cardiac diseases. However, the performance of such algorithms is strongly
affected by how well the multi-modal information is fused.
Furthermore, particular diseases, such as myocardial infarction, display
irregular shapes on images and occupy small regions at random locations. These
facts make pathology segmentation of multi-modal CMR images a challenging task.
In this paper, we present the Max-Fusion U-Net that achieves improved pathology
segmentation performance given aligned multi-modal images of LGE, T2-weighted,
and bSSFP modalities. Specifically, modality-specific features are extracted by
dedicated encoders. Then they are fused with the pixel-wise maximum operator.
Together with the corresponding encoding features, these representations are
propagated to decoding layers with U-Net skip-connections. Furthermore, a
spatial-attention module is applied in the last decoding layer to encourage the
network to focus on those small semantically meaningful pathological regions
that trigger relatively high responses in the network. We also use a simple
image-patch extraction strategy to dynamically resample training examples with
varying spatial and batch sizes. With limited GPU memory, this strategy reduces
class imbalance and forces the model to focus on regions around the pathology
of interest, further improving segmentation accuracy and reducing
misclassification of pathology. We evaluate our methods on the Myocardial
pathology segmentation combining multi-sequence CMR (MyoPS) dataset, which
involves three modalities. Extensive experiments demonstrate the effectiveness
of the proposed model, which outperforms related baselines.
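As a concrete illustration of the fusion step described in the abstract, the snippet below sketches pixel-wise maximum fusion of aligned, modality-specific encoder features in PyTorch. It is a minimal assumed example, not the authors' released code, and the tensor shapes are invented.
```python
# Illustrative sketch (not the authors' released code) of pixel-wise maximum
# fusion of modality-specific encoder features; tensor shapes are assumed.
import torch

def max_fuse(features):
    """Fuse a list of aligned (B, C, H, W) feature maps with an element-wise maximum."""
    fused = features[0]
    for f in features[1:]:
        fused = torch.maximum(fused, f)
    return fused

# Toy usage: feature maps from three dedicated encoders (LGE, T2-weighted, bSSFP).
lge, t2, bssfp = (torch.randn(2, 64, 48, 48) for _ in range(3))
fused = max_fuse([lge, t2, bssfp])  # (2, 64, 48, 48); propagated to the decoder
                                    # together with the per-modality skip features
```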
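The spatial-attention module applied in the last decoding layer can be sketched in the same spirit; the 1x1-convolution gating below is a generic, assumed design, not necessarily the paper's exact module.
```python
# Minimal sketch of a spatial-attention gate on the final decoder feature map.
# The 1x1-convolution design is a common variant and an assumption here.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Collapse channels into a single-channel spatial weight map in (0, 1).
        self.score = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):         # x: (B, C, H, W) decoder features
        attn = self.score(x)      # (B, 1, H, W) per-location weights
        return x * attn           # emphasise small, high-response pathology regions

attended = SpatialAttention(64)(torch.randn(2, 64, 96, 96))  # same shape as input
```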
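Finally, the dynamic patch-resampling strategy can be illustrated with a small foreground-biased sampler; the bias probability, the cropping rule, and the memory heuristic in the closing comment are assumptions for illustration, not the paper's exact procedure.
```python
# Sketch of foreground-biased patch resampling for 2D slices (assumed details).
# Assumes patch_size is even and no larger than the slice dimensions.
import numpy as np

def sample_patch(image, label, patch_size, fg_prob=0.7, rng=None):
    """Crop one (patch_size x patch_size) training patch, biased towards pathology.

    With probability fg_prob the patch is centred on a random pathology pixel
    (label > 0); otherwise the centre is drawn uniformly over the slice.
    """
    rng = rng or np.random.default_rng()
    h, w = label.shape
    half = patch_size // 2
    fg = np.argwhere(label > 0)
    if len(fg) > 0 and rng.random() < fg_prob:
        cy, cx = fg[rng.integers(len(fg))]
    else:
        cy, cx = rng.integers(h), rng.integers(w)
    cy = int(np.clip(cy, half, h - half))  # keep the crop inside the slice
    cx = int(np.clip(cx, half, w - half))
    win = (slice(cy - half, cy + half), slice(cx - half, cx + half))
    return image[win], label[win]

# Varying patch_size from step to step trades spatial context against batch size,
# e.g. batch_size ~ memory_budget // patch_size**2, so GPU memory stays bounded.
```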
Related papers
- Med-TTT: Vision Test-Time Training model for Medical Image Segmentation [5.318153305245246]
We propose Med-TTT, a visual backbone network integrated with Test-Time Training layers.
The model achieves leading performance in terms of accuracy, sensitivity, and Dice coefficient.
arXiv Detail & Related papers (2024-10-03T14:29:46Z)
- Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention [1.1155836879100416]
We propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation.
MFMSA block refines the process of spatial feature extraction, particularly in capturing boundary features.
E-SDM mitigates information loss in multi-task learning with deep supervision.
arXiv Detail & Related papers (2024-05-10T07:34:36Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework (DEC-Seg) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Convolutional neural network based on sparse graph attention mechanism for MRI super-resolution [0.34410212782758043]
Medical image super-resolution (SR) reconstruction using deep learning techniques can enhance lesion analysis and assist doctors in improving diagnostic efficiency and accuracy.
Existing deep learning-based SR methods rely on convolutional neural networks (CNNs), which inherently limit the expressive capabilities of these models.
We propose an A-network that utilizes multiple convolution operator feature extraction modules (MCO) for extracting image features.
arXiv Detail & Related papers (2023-05-29T06:14:22Z)
- Two-stage MR Image Segmentation Method for Brain Tumors based on Attention Mechanism [27.08977505280394]
A coordination-spatial attention generation adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed.
The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
The ability to extract the structure information and the detailed information of the original medical image can help generate the desired image with higher quality.
arXiv Detail & Related papers (2023-04-17T08:34:41Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to handle diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- RetiFluidNet: A Self-Adaptive and Multi-Attention Deep Convolutional Network for Retinal OCT Fluid Segmentation [3.57686754209902]
Quantification of retinal fluids is necessary for OCT-guided treatment management.
A new convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation.
The model benefits from hierarchical representation learning of textural, contextual, and edge features.
arXiv Detail & Related papers (2022-09-26T07:18:00Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior in the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE to brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.