HCMA-UNet: A Hybrid CNN-Mamba UNet with Inter-Slice Self-Attention for Efficient Breast Cancer Segmentation
- URL: http://arxiv.org/abs/2501.00751v1
- Date: Wed, 01 Jan 2025 06:42:57 GMT
- Title: HCMA-UNet: A Hybrid CNN-Mamba UNet with Inter-Slice Self-Attention for Efficient Breast Cancer Segmentation
- Authors: Haoxuan Li, Wei Song, Peiwu Qin, Xi Yuan, Zhenglin Chen
- Abstract summary: This study proposes a novel hybrid segmentation network, HCMA-UNet, for lesion segmentation of breast cancer. Our network consists of a lightweight CNN backbone and a Multi-view Inter-Slice Self-Attention Mamba (MISM) module. Our lightweight model achieves superior performance with 2.87M parameters and 126.44 GFLOPs.
- Score: 7.807738181550226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Breast cancer lesion segmentation in DCE-MRI remains challenging due to heterogeneous tumor morphology and indistinct boundaries. To address these challenges, this study proposes a novel hybrid segmentation network, HCMA-UNet, for lesion segmentation of breast cancer. Our network consists of a lightweight CNN backbone and a Multi-view Inter-Slice Self-Attention Mamba (MISM) module. The MISM module integrates the Visual State Space Block (VSSB) and the Inter-Slice Self-Attention (ISSA) mechanism, effectively reducing parameters through an Asymmetric Split Channel (ASC) strategy to achieve efficient tri-directional feature extraction. Our lightweight model achieves superior performance with 2.87M parameters and 126.44 GFLOPs. A Feature-guided Region-aware loss function (FRLoss) is proposed to enhance segmentation accuracy. Extensive experiments on one private and two public DCE-MRI breast cancer datasets demonstrate that our approach achieves state-of-the-art performance while maintaining computational efficiency. FRLoss also exhibits good cross-architecture generalization capabilities. The source code and dataset are available at this link.
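The abstract names the building blocks (VSSB, ISSA, ASC) but not their internals. Purely as illustration, the following is a minimal PyTorch sketch of the general pattern an asymmetric channel split can follow: a small fraction of channels goes through inter-slice self-attention while the remainder passes through a cheap local branch. Module names, the split ratio, and the depthwise convolution standing in for the VSSB are assumptions, not the authors' implementation.

```python
# Hypothetical sketch only: ASCBlock, InterSliceSelfAttention, the 0.25 split
# ratio, and the conv stand-in for the VSSB are assumptions based on the abstract.
import torch
import torch.nn as nn


class InterSliceSelfAttention(nn.Module):
    """Self-attention across the slice (depth) axis for each spatial location."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) -> treat each (h, w) column of slices as a sequence
        b, c, d, h, w = x.shape
        seq = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, d, c)  # (B*H*W, D, C)
        out, _ = self.attn(seq, seq, seq)
        seq = self.norm(seq + out)                               # residual + norm
        return seq.reshape(b, h, w, d, c).permute(0, 4, 3, 1, 2)


class ASCBlock(nn.Module):
    """Asymmetric channel split: cheap local branch + inter-slice attention branch."""

    def __init__(self, channels: int, attn_ratio: float = 0.25):
        super().__init__()
        self.c_attn = max(4, int(channels * attn_ratio))  # small, expensive branch
        self.c_local = channels - self.c_attn             # large, cheap branch
        # Placeholder for the paper's Visual State Space Block (VSSB): a
        # depthwise-separable 3D conv keeps the sketch runnable and lightweight.
        self.local = nn.Sequential(
            nn.Conv3d(self.c_local, self.c_local, 3, padding=1, groups=self.c_local),
            nn.Conv3d(self.c_local, self.c_local, 1),
            nn.GELU(),
        )
        self.issa = InterSliceSelfAttention(self.c_attn)
        self.fuse = nn.Conv3d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_local, x_attn = torch.split(x, [self.c_local, self.c_attn], dim=1)
        return self.fuse(torch.cat([self.local(x_local), self.issa(x_attn)], dim=1))


if __name__ == "__main__":
    feats = torch.randn(1, 32, 8, 16, 16)   # (B, C, slices, H, W)
    print(ASCBlock(32)(feats).shape)         # torch.Size([1, 32, 8, 16, 16])
```

The appeal of such an asymmetric split is that the attention branch, whose cost grows with the number of slices, only sees a fraction of the channels; this is one plausible way a strategy like ASC keeps the parameter and FLOP budget low, though the paper's actual design may differ.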
Related papers
- XLSTM-HVED: Cross-Modal Brain Tumor Segmentation and MRI Reconstruction Method Using Vision XLSTM and Heteromodal Variational Encoder-Decoder [9.141615533517719]
We introduce the XLSTM-HVED model, which integrates a heteromodal encoder-decoder framework with the Vision XLSTM module to reconstruct missing MRI modalities.
A key innovation of our approach is the Self-Attention Variational (SAVE) module, which improves the integration of modal features.
Our experiments using the BraTS 2024 dataset demonstrate that our model significantly outperforms existing advanced methods in handling cases where modalities are missing.
arXiv Detail & Related papers (2024-12-09T09:04:02Z) - Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
The proposed CFA module integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the incorporation of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z) - Rotate to Scan: UNet-like Mamba with Triplet SSM Module for Medical Image Segmentation [8.686237221268584]
We propose Triplet Mamba-UNet as a new type of image segmentation network.
Our model achieves a one-third reduction in parameters compared to the previous VM-UNet.
arXiv Detail & Related papers (2024-03-26T13:40:18Z) - Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z) - SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation [18.867207134086193]
Intracranial hemorrhage segmentation is a crucial and challenging step in stroke diagnosis and surgical planning.
We propose a SAM-based parameter-efficient fine-tuning method, called SAMIHS, for intracranial hemorrhage segmentation.
Our experimental results on two public datasets demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2023-11-14T14:23:09Z) - ARHNet: Adaptive Region Harmonization for Lesion-aware Augmentation to Improve Segmentation Performance [61.04246102067351]
We propose a foreground harmonization framework (ARHNet) to tackle intensity disparities and make synthetic images look more realistic.
We demonstrate the efficacy of our method in improving the segmentation performance using real and synthetic images.
arXiv Detail & Related papers (2023-07-02T10:39:29Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Two-stage MR Image Segmentation Method for Brain Tumors based on Attention Mechanism [27.08977505280394]
A coordinate-spatial attention generative adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed.
The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
The ability to extract structural information and fine detail from the original medical image helps generate the desired image with higher quality.
arXiv Detail & Related papers (2023-04-17T08:34:41Z) - Hard Exudate Segmentation Supplemented by Super-Resolution with Multi-scale Attention Fusion Module [14.021944194533644]
Hard exudates (HE) are the most specific biomarker for retinal edema.
This paper proposes a novel hard exudates segmentation method named SS-MAF with an auxiliary super-resolution task.
We evaluate our method on two public lesion datasets, IDRiD and E-Ophtha.
arXiv Detail & Related papers (2022-11-17T08:25:04Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained with hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, Tanimoto coefficient, and the area under the curve (AUC) of the receiver operating characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z) - Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification [63.44396343014749]
We propose a new margin-based surrogate loss function for the AUC score.
It is more robust than the commonly used square loss while enjoying the same advantage in terms of large-scale optimization.
To the best of our knowledge, this is the first work that makes DAM succeed on large-scale medical image datasets.
arXiv Detail & Related papers (2020-12-06T03:41:51Z)
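For context on the last entry: margin-based AUC surrogates of this kind are usually posed as a min-max problem over the score model and a few auxiliary scalars. The following is a sketch in our own notation (m is a margin, a, b, and α are auxiliary variables, h_w is the scoring model); the paper's exact formulation may differ.

```latex
\min_{w,\,a,\,b}\;\max_{\alpha \ge 0}\;
\mathbb{E}\!\left[(h_w(x)-a)^2 \mid y=1\right]
+ \mathbb{E}\!\left[(h_w(x)-b)^2 \mid y=-1\right]
+ 2\alpha\!\left(m - \mathbb{E}\!\left[h_w(x)\mid y=1\right] + \mathbb{E}\!\left[h_w(x)\mid y=-1\right]\right)
- \alpha^2
```

Replacing the squared pairwise term with this margin term is what makes the surrogate less sensitive to easy, well-classified pairs while keeping the decomposable structure that large-scale stochastic optimization relies on.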
This list is automatically generated from the titles and abstracts of the papers in this site.