Rethinking Abdominal Organ Segmentation (RAOS) in the clinical scenario: A robustness evaluation benchmark with challenging cases
- URL: http://arxiv.org/abs/2406.13674v1
- Date: Wed, 19 Jun 2024 16:23:42 GMT
- Title: Rethinking Abdominal Organ Segmentation (RAOS) in the clinical scenario: A robustness evaluation benchmark with challenging cases
- Authors: Xiangde Luo, Zihan Li, Shaoting Zhang, Wenjun Liao, Guotai Wang
- Abstract summary: RAOS dataset comprises 413 CT scans from 413 patients with 17 (female) or 19 (male) labelled organs, manually delineated by oncologists.
We grouped scans based on clinical information into 1) diagnosis/radiotherapy (317 volumes), 2) partial excision without the whole organ missing (22 volumes), and 3) excision with the whole organ missing (74 volumes).
RAOS provides a potential benchmark for evaluating model robustness including organ hallucination.
- Score: 18.908677670131276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has enabled great strides in abdominal multi-organ segmentation, even surpassing junior oncologists on common cases or organs. However, robustness on corner cases and complex organs remains a challenging open problem for clinical adoption. To investigate model robustness, we collected and annotated the RAOS dataset comprising 413 CT scans ($\sim$80k 2D images, $\sim$8k 3D organ annotations) from 413 patients each with 17 (female) or 19 (male) labelled organs, manually delineated by oncologists. We grouped scans based on clinical information into 1) diagnosis/radiotherapy (317 volumes), 2) partial excision without the whole organ missing (22 volumes), and 3) excision with the whole organ missing (74 volumes). RAOS provides a potential benchmark for evaluating model robustness including organ hallucination. It also includes some organs that can be very hard to access on public datasets like the rectum, colon, intestine, prostate and seminal vesicles. We benchmarked several state-of-the-art methods in these three clinical groups to evaluate performance and robustness. We also assessed cross-generalization between RAOS and three public datasets. This dataset and comprehensive analysis establish a potential baseline for future robustness research: \url{https://github.com/Luoxd1996/RAOS}.
Related papers
- PanTS: The Pancreatic Tumor Segmentation Dataset [49.32814895560867]
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures.
arXiv Detail & Related papers (2025-07-02T02:10:46Z) - Rethinking Whole-Body CT Image Interpretation: An Abnormality-Centric Approach [57.86418347491272]
We propose a comprehensive hierarchical classification system, with 404 representative abnormal findings across all body regions. We contribute a dataset containing over 14.5K CT images from multiple planes and all human body regions, and meticulously provide grounding annotations for over 19K abnormalities. We propose OminiAbnorm-CT, which can automatically ground and describe abnormal findings on multi-plane and whole-body CT images based on text queries.
arXiv Detail & Related papers (2025-06-03T17:57:34Z) - A Continual Learning-driven Model for Accurate and Generalizable Segmentation of Clinically Comprehensive and Fine-grained Whole-body Anatomies in CT [67.34586036959793]
There is no fully annotated CT dataset with all anatomies delineated for training.
We propose a novel continual learning-driven CT model that can segment complete anatomies.
Our single unified CT segmentation model, CL-Net, can highly accurately segment a clinically comprehensive set of 235 fine-grained whole-body anatomies.
arXiv Detail & Related papers (2025-03-16T23:55:02Z) - The ULS23 Challenge: a Baseline Model and Benchmark Dataset for 3D Universal Lesion Segmentation in Computed Tomography [0.0]
We introduce the ULS23 benchmark for 3D universal lesion segmentation in chest-abdomen-pelvis CT examinations.
The ULS23 training dataset contains 38,693 lesions across this region, including challenging pancreatic, colon and bone lesions.
arXiv Detail & Related papers (2024-06-07T19:37:59Z) - The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset [1.234134271688463]
The RSNA Abdominal Traumatic Injury CT (RATIC) dataset is the largest publicly available collection of adult abdominal studies annotated for traumatic injuries.
This dataset includes 4,274 studies from 23 institutions across 14 countries.
The dataset is freely available for non-commercial use via Kaggle at https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection.
arXiv Detail & Related papers (2024-05-30T01:18:50Z) - Advances in Kidney Biopsy Lesion Assessment through Dense Instance Segmentation [0.3926357402982764]
Lesion scores made by renal pathologists are semi-quantitative and exhibit high inter-observer variability.
DiffRegFormer is a computationally friendly framework that can efficiently recognize over 500 objects across three anatomical classes.
Our approach outperforms previous methods, achieving an Average Precision of 52.1% (detection) and 46.8% (segmentation).
arXiv Detail & Related papers (2023-09-29T11:59:57Z) - Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via
Volumetric Pseudo-Labeling [66.75096111651062]
We created a large-scale dataset of 10,021 thoracic CTs with 157 labels.
We applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels.
Our resulting segmentation models demonstrated remarkable performance on CXR.
arXiv Detail & Related papers (2023-06-06T18:01:08Z) - Med-Query: Steerable Parsing of 9-DoF Medical Anatomies with Query
Embedding [15.98677736544302]
We propose a steerable, robust, and efficient computing framework for detection, identification, and segmentation of anatomies in 3D medical data.
Considering complicated shapes, sizes and orientations of anatomies, we present the nine degrees-of-freedom (9-DoF) pose estimation solution in full 3D space.
We have validated the proposed method on three medical imaging parsing tasks of ribs, spine, and abdominal organs.
arXiv Detail & Related papers (2022-12-05T04:04:21Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT
images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
Current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - FocusNetv2: Imbalanced Large and Small Organ Segmentation with
Adversarial Shape Constraint for Head and Neck CT Images [82.48587399026319]
Delineation of organs-at-risk (OARs) is a vital step in radiotherapy treatment planning to avoid damage to healthy organs.
We propose a novel two-stage deep neural network, FocusNetv2, to solve this challenging problem by automatically locating, ROI-pooling, and segmenting small organs.
In addition to our original FocusNet, we employ a novel adversarial shape constraint on small organs to ensure the consistency between estimated small-organ shapes and organ shape prior knowledge.
arXiv Detail & Related papers (2021-04-05T04:45:31Z) - RAP-Net: Coarse-to-Fine Multi-Organ Segmentation with Single Random
Anatomical Prior [4.177877537413942]
Coarse-to-fine abdominal multi-organ segmentation facilitates the extraction of high-resolution segmentations.
We propose a single refined model to segment all abdominal organs instead of multiple organ-specific models.
Our proposed method outperforms the state-of-the-art on 13 models with an average Dice score of 84.58% versus 81.69% (p<0.0001).
arXiv Detail & Related papers (2020-12-23T00:22:05Z) - Fully Automated and Standardized Segmentation of Adipose Tissue
Compartments by Deep Learning in Three-dimensional Whole-body MRI of
Epidemiological Cohort Studies [11.706960468832301]
Quantification and localization of different adipose tissue compartments from whole-body MR images are of high interest for examining metabolic conditions.
We propose a 3D convolutional neural network (DCNet) to provide a robust and objective segmentation.
Fast (5-7 seconds) and reliable adipose tissue segmentation can be obtained with high Dice overlap.
arXiv Detail & Related papers (2020-08-05T17:30:14Z) - Deep Reinforcement Learning for Organ Localization in CT [59.23083161858951]
We propose a deep reinforcement learning approach for organ localization in CT.
In this work, an artificial agent is actively self-taught to localize organs in CT by learning from its assertions and mistakes.
Our method can be used as a plug-and-play module for localizing any organ of interest.
arXiv Detail & Related papers (2020-05-11T10:06:13Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for
Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.