3D Medical Image Segmentation based on multi-scale MPU-Net
- URL: http://arxiv.org/abs/2307.05799v2
- Date: Mon, 24 Jul 2023 19:13:20 GMT
- Title: 3D Medical Image Segmentation based on multi-scale MPU-Net
- Authors: Zeqiu Yu, Shuo Han, Ziheng Song
- Abstract summary: This paper proposes MPU-Net, a tumor segmentation model for volumetric patient CT images.
It is inspired by the Transformer's global attention mechanism.
Compared with the benchmark model U-Net, MPU-Net shows excellent segmentation results.
- Score: 5.393743755706745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The cure rate of cancer is closely tied to the accuracy of
physicians' diagnosis and treatment, so a model capable of high-precision
tumor segmentation has become a necessity in many medical applications. Such a
model can effectively lower the rate of misdiagnosis while considerably
lessening the burden on clinicians. However, fully automated segmentation of
target organs is difficult because of the irregular three-dimensional
structure of volumetric organs. U-Net, the basic model for this class of
applications, performs well: it can learn certain global and local features,
but it still lacks the capacity to capture long-range spatial relationships
and contextual information at multiple scales. This paper proposes MPU-Net, a
tumor segmentation model for volumetric patient CT images that is inspired by
the Transformer's global attention mechanism. By combining image serialization
with a Position Attention Module, the model attempts to capture deeper
contextual dependencies and achieve precise localization. Each decoder layer
is also equipped with a multi-scale module and a cross-attention mechanism,
which strengthens feature extraction and integration across levels, and the
hybrid loss function developed in this study better exploits high-resolution
feature information. The proposed architecture is evaluated on the Liver Tumor
Segmentation Challenge 2017 (LiTS 2017) dataset. Compared with the benchmark
model U-Net, MPU-Net achieves excellent segmentation results: the Dice,
accuracy, precision, specificity, IoU, and MCC scores of the best model are
92.17%, 99.08%, 91.91%, 99.52%, 85.91%, and 91.74%, respectively. These
results demonstrate the framework's strong performance in automatic medical
image segmentation.
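All six reported metrics can be derived from the voxel-wise confusion matrix of a binary segmentation. The sketch below (NumPy, illustrative only, since the paper's exact evaluation code is not given here) shows the standard definitions of Dice, accuracy, precision, specificity, IoU, and MCC.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """Standard voxel-wise metrics for binary masks `pred` and `target` (same shape, 0/1)."""
    pred, target = pred.astype(bool), target.astype(bool)
    # Confusion-matrix counts (cast to float to avoid integer overflow on large volumes).
    tp = float(np.logical_and(pred, target).sum())
    tn = float(np.logical_and(~pred, ~target).sum())
    fp = float(np.logical_and(pred, ~target).sum())
    fn = float(np.logical_and(~pred, target).sum())
    return {
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "precision":   tp / (tp + fp + eps),
        "specificity": tn / (tn + fp + eps),
        "iou":         tp / (tp + fp + fn + eps),
        "mcc":         (tp * tn - fp * fn)
                       / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn) + eps),
    }
```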
Related papers
- Improving 3D Medical Image Segmentation at Boundary Regions using Local Self-attention and Global Volume Mixing [14.0825980706386]
Volumetric medical image segmentation is a fundamental problem in medical image analysis where the objective is to accurately classify a given 3D volumetric medical image with voxel-level precision.
In this work, we propose a novel hierarchical encoder-decoder-based framework that strives to explicitly capture the local and global dependencies for 3D medical image segmentation.
The proposed framework exploits local volume-based self-attention to encode the local dependencies at high resolution and introduces a novel volumetric-mixer to capture the global dependencies at low-resolution feature representations.
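The exact block design is not reproduced here; as a rough illustration of local volume-based self-attention, the PyTorch sketch below partitions a 3D feature map into non-overlapping windows and applies standard multi-head attention within each window. The class name, window size, and head count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LocalVolumeSelfAttention(nn.Module):
    """Illustrative windowed self-attention over non-overlapping 3D sub-volumes."""
    def __init__(self, dim, window=4, heads=4):
        super().__init__()
        # Assumes `dim` (the channel count) is divisible by `heads`.
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (B, C, D, H, W)
        B, C, D, H, W = x.shape
        w = self.window                          # assumes D, H and W are divisible by w
        # Partition the volume into (B * num_windows, w**3, C) token sequences.
        t = x.reshape(B, C, D // w, w, H // w, w, W // w, w)
        t = t.permute(0, 2, 4, 6, 3, 5, 7, 1).reshape(-1, w ** 3, C)
        out, _ = self.attn(t, t, t)              # attention restricted to each local window
        # Reverse the partition back to (B, C, D, H, W).
        out = out.reshape(B, D // w, H // w, W // w, w, w, w, C)
        out = out.permute(0, 7, 1, 4, 2, 5, 3, 6).reshape(B, C, D, H, W)
        return out
```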
arXiv Detail & Related papers (2024-10-20T11:08:38Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001, and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Optimizing Universal Lesion Segmentation: State Space Model-Guided Hierarchical Networks with Feature Importance Adjustment [0.0]
We introduce Mamba-Ahnet, a novel integration of State Space Model (SSM) and Advanced Hierarchical Network (AHNet) within the MAMBA framework.
Mamba-Ahnet combines SSM's feature extraction and comprehension with AHNet's attention mechanisms and image reconstruction, aiming to enhance segmentation accuracy and robustness.
arXiv Detail & Related papers (2024-04-26T08:15:43Z)
- LATUP-Net: A Lightweight 3D Attention U-Net with Parallel Convolutions for Brain Tumor Segmentation [7.1789008189318455]
LATUP-Net is a lightweight 3D ATtention U-Net with Parallel convolutions.
It is specifically designed to reduce computational requirements significantly while maintaining high segmentation performance.
It achieves promising segmentation performance: the average Dice scores for the whole tumor, tumor core, and enhancing tumor on the BraTS 2020 dataset are 88.41%, 83.82%, and 73.67%, and on the BraTS 2021 dataset, they are 90.29%, 89.54%, and 83.92%, respectively.
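The abstract does not spell out the block internals; as a hedged sketch of the "parallel convolutions" idea, the snippet below runs several 3D kernel sizes side by side and fuses them with a 1x1x1 convolution (PyTorch; the kernel sizes and names are illustrative assumptions).

```python
import torch
import torch.nn as nn

class ParallelConvBlock(nn.Module):
    """Illustrative parallel-convolution block: multiple receptive fields in one layer."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5)):
        super().__init__()
        # One 3D convolution per kernel size, all applied to the same input.
        self.branches = nn.ModuleList(
            [nn.Conv3d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes])
        self.fuse = nn.Conv3d(out_ch * len(kernel_sizes), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                        # x: (B, in_ch, D, H, W)
        feats = [branch(x) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))
```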
arXiv Detail & Related papers (2024-04-09T00:05:45Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ)
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- PMFSNet: Polarized Multi-scale Feature Self-attention Network For Lightweight Medical Image Segmentation [6.134314911212846]
Current state-of-the-art medical image segmentation methods prioritize accuracy but often at the expense of increased computational demands and larger model sizes.
We propose PMFSNet, a novel medical imaging segmentation model that balances global and local feature processing while avoiding computational redundancy.
It incorporates a plug-and-play PMFS block, a multi-scale feature enhancement module based on attention mechanisms, to capture long-term dependencies.
arXiv Detail & Related papers (2024-01-15T10:26:47Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
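DEC-Seg's exact losses are not given in this summary; as a loose illustration of dual-scale consistency learning on unlabeled volumes, the sketch below encourages a model's predictions on an input and on its downsampled view to agree (PyTorch; the function and its details are assumptions, not the paper's method).

```python
import torch.nn.functional as F

def scale_consistency_loss(model, volume):
    """Illustrative cross-scale consistency term for unlabeled volumes: the
    model's predictions on the original and a downsampled view should agree."""
    logits_full = model(volume)                               # (B, K, D, H, W) class logits
    small = F.interpolate(volume, scale_factor=0.5, mode="trilinear",
                          align_corners=False)
    logits_small = model(small)
    # Upsample the low-scale prediction and compare the two probability maps.
    logits_up = F.interpolate(logits_small, size=logits_full.shape[2:],
                              mode="trilinear", align_corners=False)
    return F.mse_loss(logits_up.softmax(dim=1), logits_full.softmax(dim=1))
```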
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, and achieve similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Large-Kernel Attention for 3D Medical Image Segmentation [14.76728117630242]
In this paper, a novel large-kernel (LK) attention module is proposed to achieve accurate multi-organ segmentation and tumor segmentation.
The advantages of convolution and self-attention are combined in the proposed LK attention module, including local contextual information, long-range dependence, and channel adaptation.
The module also decomposes the LK convolution to optimize the computational cost and can be easily incorporated into FCNs such as U-Net.
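The decomposition mentioned above typically splits a large kernel into a depthwise convolution, a dilated depthwise convolution, and a pointwise convolution; the 3D sketch below follows that familiar pattern for illustration only, and the kernel sizes are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LargeKernelAttention3D(nn.Module):
    """Illustrative decomposed large-kernel attention: a 5x5x5 depthwise conv,
    a dilated 7x7x7 depthwise conv, and a 1x1x1 pointwise conv approximate a
    much larger kernel; the result gates the input features."""
    def __init__(self, dim):
        super().__init__()
        self.dw = nn.Conv3d(dim, dim, kernel_size=5, padding=2, groups=dim)
        self.dw_dilated = nn.Conv3d(dim, dim, kernel_size=7, padding=9,
                                    dilation=3, groups=dim)
        self.pw = nn.Conv3d(dim, dim, kernel_size=1)

    def forward(self, x):                        # x: (B, dim, D, H, W)
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn                          # channel- and position-adaptive gating
```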
arXiv Detail & Related papers (2022-07-19T16:32:55Z)
- MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
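The distillation losses themselves are not described in this summary; as a hedged sketch of self-distillation, an auxiliary shallow head can be trained to match the softened output of the network's own final head, e.g. with a temperature-scaled KL term (PyTorch; the names and temperature are assumptions).

```python
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Illustrative self-distillation: an auxiliary (shallow) head mimics the
    softened output of the network's own final (deep) head."""
    t = temperature
    teacher_prob = F.softmax(teacher_logits.detach() / t, dim=1)   # stop gradient to the teacher
    student_logp = F.log_softmax(student_logits / t, dim=1)
    # KL divergence between the softened distributions, scaled by T^2 as usual.
    return F.kl_div(student_logp, teacher_prob, reduction="batchmean") * t * t
```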
arXiv Detail & Related papers (2022-06-02T07:38:53Z)