CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark
Model for Rectal Cancer Segmentation
- URL: http://arxiv.org/abs/2308.08283v1
- Date: Wed, 16 Aug 2023 10:51:27 GMT
- Title: CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark
Model for Rectal Cancer Segmentation
- Authors: Hantao Zhang, Weidong Guo, Chenyang Qiu, Shouhong Wan, Bingbing Zou,
Wanqin Wang, Peiquan Jin
- Abstract summary: Rectal cancer segmentation in CT images plays a crucial role in timely clinical diagnosis, radiotherapy treatment, and follow-up.
Achieving high segmentation precision remains difficult due to the intricate anatomical structures of the rectum and the challenges of differential diagnosis of rectal cancer.
To address these issues, this work introduces CARE, a novel large-scale rectal cancer CT image dataset with pixel-level annotations for both normal and cancerous rectum.
We also propose a novel medical cancer lesion segmentation benchmark model named U-SAM.
The model is specifically designed to tackle the challenges posed by the intricate anatomical structures of abdominal organs by incorporating prompt information.
- Score: 8.728236864462302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rectal cancer segmentation in CT images plays a crucial role in timely
clinical diagnosis, radiotherapy treatment, and follow-up. Although current
segmentation methods have shown promise in delineating cancerous tissues, they
still encounter challenges in achieving high segmentation precision. These
obstacles arise from the intricate anatomical structures of the rectum and the
difficulties in performing differential diagnosis of rectal cancer.
Additionally, a major obstacle is the lack of a large-scale, finely annotated
CT image dataset for rectal cancer segmentation. To address these issues, this
work introduces a novel large-scale rectal cancer CT image dataset, CARE, with
pixel-level annotations for both normal and cancerous rectum, which serves as a
valuable resource for algorithm research and clinical application development.
Moreover, we propose a novel medical cancer lesion segmentation benchmark model
named U-SAM. The model is specifically designed to tackle the challenges posed
by the intricate anatomical structures of abdominal organs by incorporating
prompt information. U-SAM contains three key components: promptable information
(e.g., points) to aid in target area localization, a convolution module for
capturing low-level lesion details, and skip-connections to preserve and
recover spatial information during the encoding-decoding process. To evaluate
the effectiveness of U-SAM, we systematically compare its performance with
several popular segmentation methods on the CARE dataset. The generalization of
the model is further verified on the WORD dataset. Extensive experiments
demonstrate that the proposed U-SAM outperforms state-of-the-art methods on
these two datasets. These experiments can serve as the baseline for future
research and clinical application development.
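The abstract names three components of U-SAM: a point prompt to aid target localization, a convolution module for low-level lesion detail, and skip-connections that preserve spatial information across encoding and decoding. The following is a minimal, hypothetical NumPy sketch of that data flow only; the function names, the Gaussian point-prompt encoding, and the toy encoder-decoder are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conv2d(x, k):
    """Naive 2D convolution with zero padding ('same' output size)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def point_prompt_map(shape, point, sigma=4.0):
    """Encode a point prompt as a Gaussian heatmap (assumed encoding)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - point[0]) ** 2 + (xx - point[1]) ** 2)
                  / (2 * sigma ** 2))

def usam_like_forward(image, point):
    """Toy forward pass combining the three components named in the abstract."""
    smooth = np.ones((3, 3)) / 9.0
    enc1 = conv2d(image, smooth)              # convolution: low-level detail
    enc2 = conv2d(enc1[::2, ::2], smooth)     # downsample (encoder stage)
    dec = np.kron(enc2, np.ones((2, 2)))      # upsample (decoder stage)
    dec = dec[:enc1.shape[0], :enc1.shape[1]] + enc1  # skip-connection
    logits = dec * point_prompt_map(image.shape, point)  # prompt gates location
    return (logits > logits.mean()).astype(np.uint8)
```

The skip-connection line is the part the abstract emphasizes: the decoder output is summed with the matching encoder feature map so that spatial detail lost to downsampling is recovered before the prompt-gated mask is thresholded.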
Related papers
- EP-SAM: Weakly Supervised Histopathology Segmentation via Enhanced Prompt with Segment Anything [3.760646312664378]
Pathological diagnosis of diseases like cancer has conventionally relied on the evaluation of morphological features by physicians and pathologists.
Recent advancements in computer-aided diagnosis (CAD) systems are gaining significant attention as diagnostic support tools.
We present a weakly supervised semantic segmentation (WSSS) model by combining class activation map and Segment Anything Model (SAM)-based pseudo-labeling.
arXiv Detail & Related papers (2024-10-17T14:55:09Z) - MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans.
Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss.
We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
arXiv Detail & Related papers (2024-09-28T23:10:37Z) - Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z) - A Lung Nodule Dataset with Histopathology-based Cancer Type Annotation [12.617587827105496]
This research aims to bridge the gap by providing publicly accessible datasets and reliable tools for medical diagnosis.
We curated a diverse dataset of lung Computed Tomography (CT) images, comprising 330 annotated nodules (nodules are labeled as bounding boxes) from 95 distinct patients.
These promising results demonstrate that the dataset is practically applicable and can further facilitate intelligent auxiliary diagnosis.
arXiv Detail & Related papers (2024-06-26T06:39:11Z) - Meply: A Large-scale Dataset and Baseline Evaluations for Metastatic Perirectal Lymph Node Detection and Segmentation [10.250943622693429]
We present the first large-scale perirectal metastatic lymph node CT image dataset called Meply.
We introduce a novel lymph-node segmentation model named CoSAM.
The CoSAM utilizes sequence-based detection to guide the segmentation of metastatic lymph nodes in rectal cancer.
arXiv Detail & Related papers (2024-04-13T07:30:16Z) - MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
By extensively testing three diverse segmentation tasks and medical image modalities, our proposed framework has demonstrated excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z) - Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z) - Revisiting Computer-Aided Tuberculosis Diagnosis [56.80999479735375]
Tuberculosis (TB) is a major global health threat, causing millions of deaths annually.
Computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data.
We establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas.
This dataset enables the training of sophisticated detectors for high-quality CTD.
arXiv Detail & Related papers (2023-07-06T08:27:48Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - Weakly supervised multiple instance learning histopathological tumor
segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors with different size.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.