SCPMan: Shape Context and Prior Constrained Multi-scale Attention
Network for Pancreatic Segmentation
- URL: http://arxiv.org/abs/2312.15859v1
- Date: Tue, 26 Dec 2023 03:00:25 GMT
- Title: SCPMan: Shape Context and Prior Constrained Multi-scale Attention
Network for Pancreatic Segmentation
- Authors: Leilei Zeng, Xuechen Li, Xinquan Yang, Linlin Shen, Song Wu
- Abstract summary: We propose a multiscale attention network with shape context and prior constraint for robust pancreas segmentation.
Our architecture provides robust segmentation performance against blurry boundaries and variations in the scale and shape of the pancreas.
- Score: 39.70422146937986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the poor prognosis of pancreatic cancer, accurate early detection and
segmentation are critical for improving treatment outcomes. However, pancreatic
segmentation is challenged by blurred boundaries, high shape variability, and
class imbalance. To tackle these problems, we propose a multiscale attention
network with shape context and prior constraint for robust pancreas
segmentation. Specifically, we propose a Multi-scale Feature Extraction Module
(MFE) and a Mixed-scale Attention Integration Module (MAI) to address unclear
pancreas boundaries. Furthermore, a Shape Context Memory (SCM) module is
introduced to jointly model semantics across scales and pancreatic shape.
An Active Shape Model (ASM) is further used to model shape priors. Experiments
on NIH and MSD datasets demonstrate the efficacy of our model, which improves
the state-of-the-art Dice Score by 1.01% and 1.03%, respectively. Our
architecture provides robust segmentation performance against blurry boundaries
and variations in the scale and shape of the pancreas.
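The abstract reports improvements in Dice Score, and the related M3BUNet entry below also cites Dice (DSC) and IoU. As a point of reference, a minimal sketch of how these two overlap metrics are computed on binary segmentation masks (standard definitions; not code from any of the listed papers):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union (IoU / Jaccard index) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 1-D example: 3 of 4 predicted foreground pixels overlap the target.
pred = np.array([1, 1, 1, 1, 0, 0])
target = np.array([0, 1, 1, 1, 1, 0])
print(round(dice_score(pred, target), 4))  # → 0.75
print(round(iou_score(pred, target), 4))   # → 0.6
```

Note that Dice is always at least as large as IoU on the same masks, which is why the two scores reported for M3BUNet differ in magnitude.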
Related papers
- SAM-EG: Segment Anything Model with Edge Guidance framework for efficient Polyp Segmentation [6.709243857842895]
We propose a framework that guides small segmentation models for polyp segmentation to address the cost challenge.
In this study, we introduce the Edge Guiding module, which integrates edge information into image features.
Our small models showcase their efficacy by achieving competitive results with state-of-the-art methods.
arXiv Detail & Related papers (2024-06-21T01:42:20Z) - M3BUNet: Mobile Mean Max UNet for Pancreas Segmentation on CT-Scans [25.636974007788986]
We propose M3BUNet, a fusion of MobileNet and U-Net neural networks, equipped with a novel Mean-Max (MM) attention that operates in two stages to gradually segment pancreas CT images.
For the fine segmentation stage, we found that applying a wavelet decomposition filter to create multi-input images enhances pancreas segmentation performance.
Our approach demonstrates a considerable performance improvement, achieving an average Dice Similarity Coefficient (DSC) value of up to 89.53% and an Intersection over Union (IoU) score of up to 81.16% for the NIH pancreas dataset.
arXiv Detail & Related papers (2024-01-18T23:10:08Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for
Semi-supervised Polyp Segmentation [52.06525450636897]
Automatic polyp segmentation plays a crucial role in the early diagnosis and treatment of colorectal cancer.
Existing methods rely heavily on fully supervised training, which requires a large amount of labeled data with time-consuming pixel-wise annotations.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised polyp segmentation (DEC-Seg) from colonoscopy images.
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark
Model for Rectal Cancer Segmentation [8.728236864462302]
Rectal cancer segmentation in CT images plays a crucial role in timely clinical diagnosis, radiotherapy treatment, and follow-up.
The task remains difficult: obstacles arise from the intricate anatomical structures of the rectum and the difficulty of performing differential diagnosis of rectal cancer.
To address these issues, this work introduces a novel large-scale rectal cancer CT image dataset, CARE, with pixel-level annotations for both normal and cancerous rectum.
We also propose a novel medical cancer lesion segmentation benchmark model named U-SAM.
The model is specifically designed to tackle the challenges posed by the intricate anatomical structures of abdominal organs by incorporating prompt information.
arXiv Detail & Related papers (2023-08-16T10:51:27Z) - Diffusion Models for Counterfactual Generation and Anomaly Detection in
Brain Images [59.85702949046042]
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine Denoising Diffusion Probabilistic Model (DDPM) and Denoising Diffusion Implicit Model (DDIM) at each step of the sampling process.
We verify that when our method is applied to healthy samples, the input images are reconstructed without significant modifications.
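A hedged sketch of the final step described above: once a model has produced a healthy counterfactual of a diseased input, a pixel-wise anomaly map can be taken as the normalized absolute difference between the two images. This is one common choice, not necessarily the paper's exact formulation, and the function name `anomaly_map` is illustrative:

```python
import numpy as np

def anomaly_map(diseased: np.ndarray, healthy: np.ndarray) -> np.ndarray:
    """Pixel-wise anomaly map as the absolute intensity difference between
    the input image and its generated healthy counterfactual (a common
    choice; the paper's exact map may differ)."""
    diff = np.abs(diseased.astype(np.float64) - healthy.astype(np.float64))
    # Normalize to [0, 1] for visualization; a healthy input yields all zeros.
    return diff / diff.max() if diff.max() > 0 else diff

# Toy example: a "lesion" raises intensity at one pixel.
healthy = np.zeros((4, 4))
diseased = healthy.copy()
diseased[1, 2] = 0.8
print(anomaly_map(diseased, healthy)[1, 2])  # → 1.0
```

Consistent with the sentence above, applying the map to a healthy input reconstructed without modification yields an all-zero anomaly map.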
arXiv Detail & Related papers (2023-08-03T21:56:50Z) - Abdominal organ segmentation via deep diffeomorphic mesh deformations [5.4173776411667935]
Abdominal organ segmentation from CT and MRI is an essential prerequisite for surgical planning and computer-aided navigation systems.
We employ template-based mesh reconstruction methods for joint liver, kidney, pancreas, and spleen segmentation.
The resulting method, UNetFlow, generalizes well to all four organs and can be easily fine-tuned on new data.
arXiv Detail & Related papers (2023-06-27T14:41:18Z) - Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - A multi-organ point cloud registration algorithm for abdominal CT
registration [5.0338371688780965]
In this work, we focus on accurately registering a subset of organs of interest.
We introduce MO-BCPD, a multi-organ version of the BCPD algorithm.
The target registration error on anatomical landmarks is almost twice as small for MO-BCPD compared to standard BCPD.
arXiv Detail & Related papers (2022-03-15T16:27:29Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.