CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark
Model for Rectal Cancer Segmentation
- URL: http://arxiv.org/abs/2308.08283v1
- Date: Wed, 16 Aug 2023 10:51:27 GMT
- Authors: Hantao Zhang, Weidong Guo, Chenyang Qiu, Shouhong Wan, Bingbing Zou,
Wanqin Wang, Peiquan Jin
- Abstract summary: Rectal cancer segmentation of CT images plays a crucial role in timely clinical diagnosis, radiotherapy treatment, and follow-up.
Existing methods still struggle to achieve high segmentation precision, owing to the intricate anatomical structures of the rectum and the difficulty of performing differential diagnosis of rectal cancer.
To address these issues, this work introduces CARE, a novel large-scale rectal cancer CT image dataset with pixel-level annotations for both normal and cancerous rectums.
We also propose a novel medical cancer lesion segmentation benchmark model named U-SAM.
The model is specifically designed to tackle the challenges posed by the intricate anatomical structures of abdominal organs by incorporating prompt information.
- Score: 8.728236864462302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rectal cancer segmentation of CT images plays a crucial role in timely
clinical diagnosis, radiotherapy treatment, and follow-up. Although current
segmentation methods have shown promise in delineating cancerous tissues, they
still encounter challenges in achieving high segmentation precision. These
obstacles arise from the intricate anatomical structures of the rectum and the
difficulties in performing differential diagnosis of rectal cancer.
Additionally, a major obstacle is the lack of a large-scale, finely annotated
CT image dataset for rectal cancer segmentation. To address these issues, this
work introduces CARE, a novel large-scale rectal cancer CT image dataset with
pixel-level annotations for both normal and cancerous rectums, which serves as a
valuable resource for algorithm research and clinical application development.
Moreover, we propose a novel medical cancer lesion segmentation benchmark model
named U-SAM. The model is specifically designed to tackle the challenges posed
by the intricate anatomical structures of abdominal organs by incorporating
prompt information. U-SAM contains three key components: promptable information
(e.g., points) to aid in target area localization, a convolution module for
capturing low-level lesion details, and skip-connections to preserve and
recover spatial information during the encoding-decoding process. To evaluate
the effectiveness of U-SAM, we systematically compare its performance with
several popular segmentation methods on the CARE dataset. The generalization of
the model is further verified on the WORD dataset. Extensive experiments
demonstrate that the proposed U-SAM outperforms state-of-the-art methods on
these two datasets. These results can serve as baselines for future
research and clinical application development.
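The point-prompt localization that U-SAM relies on can be sketched as follows. This is an illustrative Gaussian-heatmap encoding of a click point, stacked with the CT slice as an extra input channel; it is an assumption for demonstration, not the paper's actual prompt encoder, and all names here are hypothetical.

```python
import math

def point_prompt_heatmap(height, width, point, sigma=4.0):
    """Encode a user click as a Gaussian heatmap of the same size as the
    image, so it can be stacked with the CT slice as an extra channel.
    (Illustrative sketch only; U-SAM's real prompt encoder may differ.)"""
    py, px = point
    return [
        [math.exp(-((y - py) ** 2 + (x - px) ** 2) / (2 * sigma ** 2))
         for x in range(width)]
        for y in range(height)
    ]

# A toy 64x64 CT slice (all zeros here) plus a point prompt at (32, 40).
ct_slice = [[0.0] * 64 for _ in range(64)]
prompt = point_prompt_heatmap(64, 64, point=(32, 40))
model_input = [ct_slice, prompt]  # two input channels: image + prompt
```

The heatmap peaks at 1.0 at the clicked pixel and decays with distance, giving the segmentation network a soft spatial cue for target-area localization.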
Related papers
- A Lung Nodule Dataset with Histopathology-based Cancer Type Annotation [12.617587827105496]
This research aims to bridge the gap by providing publicly accessible datasets and reliable tools for medical diagnosis.
We curated a diverse dataset of lung Computed Tomography (CT) images, comprising 330 annotated nodules (nodules are labeled as bounding boxes) from 95 distinct patients.
These promising results demonstrate the dataset's practical applicability and its potential to facilitate intelligent auxiliary diagnosis.
arXiv Detail & Related papers (2024-06-26T06:39:11Z)
- Meply: A Large-scale Dataset and Baseline Evaluations for Metastatic Perirectal Lymph Node Detection and Segmentation [10.250943622693429]
We present the first large-scale perirectal metastatic lymph node CT image dataset called Meply.
We introduce a novel lymph-node segmentation model named CoSAM.
CoSAM utilizes sequence-based detection to guide the segmentation of metastatic lymph nodes in rectal cancer.
arXiv Detail & Related papers (2024-04-13T07:30:16Z) - MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
By extensively testing three diverse segmentation tasks and medical image modalities, our proposed framework has demonstrated excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z) - Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- Intelligent Breast Cancer Diagnosis with Heuristic-assisted Trans-Res-U-Net and Multiscale DenseNet using Mammogram Images [0.0]
Breast cancer (BC) significantly contributes to cancer-related mortality in women, and accurately distinguishing malignant mass lesions remains challenging.
We propose a novel deep learning approach for BC screening utilizing mammography images.
arXiv Detail & Related papers (2023-10-30T10:22:14Z)
- AG-CRC: Anatomy-Guided Colorectal Cancer Segmentation in CT with Imperfect Anatomical Knowledge [9.961742312147674]
We develop a novel Anatomy-Guided segmentation framework to exploit the auto-generated organ masks.
We extensively evaluate the proposed method on two CRC segmentation datasets.
arXiv Detail & Related papers (2023-10-07T03:22:06Z)
- Revisiting Computer-Aided Tuberculosis Diagnosis [56.80999479735375]
Tuberculosis (TB) is a major global health threat, causing millions of deaths annually.
Computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data.
We establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas.
This dataset enables the training of sophisticated detectors for high-quality CTD.
arXiv Detail & Related papers (2023-07-06T08:27:48Z)
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
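The cascaded design described in this entry, a segmentation module that localizes the lung region followed by a classifier applied to that region, can be sketched roughly as below. The stub functions stand in for XLSor and the MoCo-pretrained classifier; their names, the toy threshold, and the scoring rule are hypothetical illustrations, not the cited paper's implementation.

```python
def lung_bbox(mask):
    """Bounding box (y0, x0, y1, x1) of the nonzero region of a binary
    lung mask, as a segmentation module like XLSor might produce."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    xs = [x for x in range(len(mask[0])) if any(row[x] for row in mask)]
    return ys[0], xs[0], ys[-1] + 1, xs[-1] + 1

def crop(image, bbox):
    """Crop the CXR to the segmented lung region."""
    y0, x0, y1, x1 = bbox
    return [row[x0:x1] for row in image[y0:y1]]

def classify(region):
    """Stand-in for the MoCo-pretrained CXR classifier (hypothetical:
    thresholds the mean intensity of the cropped region)."""
    mean = sum(map(sum, region)) / (len(region) * len(region[0]))
    return "abnormal" if mean > 0.5 else "normal"

# Cascade: segment -> crop to lung region -> classify.
mask = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
image = [[0.0, 0.0, 0.0, 0.0],
         [0.0, 0.9, 0.8, 0.0],
         [0.0, 0.7, 0.9, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
label = classify(crop(image, lung_bbox(mask)))
```

The point of the cascade is that the classifier only ever sees the anatomically relevant region, which is what the cited approach credits for the improved abnormality identification.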
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
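The multiple-instance-learning scheme mentioned here, training from slide-level labels by aggregating patch (instance) predictions, can be sketched as a max-aggregation bag classifier. The max rule, the threshold, and the toy scores below are illustrative assumptions, not the cited framework's exact formulation.

```python
def bag_prediction(instance_scores):
    """MIL max-aggregation: a whole slide (bag) is scored by its most
    suspicious patch (instance). Illustrative sketch only."""
    return max(instance_scores)

def bag_label(instance_scores, threshold=0.5):
    """A bag is positive if any instance exceeds the threshold."""
    return int(bag_prediction(instance_scores) >= threshold)

# Slide-level supervision: only the bag label is known during training,
# yet the per-patch scores implicitly localize the tumor.
tumor_slide = [0.10, 0.05, 0.92, 0.30]   # one strongly tumor-like patch
normal_slide = [0.10, 0.20, 0.15, 0.05]
```

This is why only whole-slide labels are needed: the loss on the aggregated bag prediction still pushes the model to score individual tumor patches highly.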
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors with different size.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at vertebra level, scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.