WORD: Revisiting Organs Segmentation in the Whole Abdominal Region
- URL: http://arxiv.org/abs/2111.02403v1
- Date: Wed, 3 Nov 2021 02:26:14 GMT
- Title: WORD: Revisiting Organs Segmentation in the Whole Abdominal Region
- Authors: Xiangde Luo, Wenjun Liao, Jianghong Xiao, Tao Song, Xiaofan Zhang,
Kang Li, Guotai Wang, and Shaoting Zhang
- Abstract summary: Whole abdominal organs segmentation plays an important role in abdomen lesion diagnosis, radiotherapy planning, and follow-up.
Deep learning-based medical image segmentation has shown the potential to reduce manual delineation efforts, but it still requires a large-scale fine annotated dataset for training.
In this work, we establish a large-scale Whole abdominal ORgans Dataset (WORD) for algorithm research and clinical application development.
- Score: 14.752924082744814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Whole abdominal organs segmentation plays an important role in abdomen lesion
diagnosis, radiotherapy planning, and follow-up. However, delineating all
abdominal organs by oncologists manually is time-consuming and very expensive.
Recently, deep learning-based medical image segmentation has shown the
potential to reduce manual delineation efforts, but it still requires a
large-scale, finely annotated dataset for training. Despite many efforts on
this task, there are still few large image datasets covering the whole
abdominal region with accurate and detailed annotations for whole abdominal
organ segmentation. In this work, we establish a large-scale \textit{W}hole abdominal
\textit{OR}gans \textit{D}ataset (\textit{WORD}) for algorithm research and
clinical application development. This dataset contains 150 abdominal CT
volumes (30,495 slices); each volume has 16 organs with fine pixel-level
annotations and scribble-based sparse annotations, which may make it the
largest dataset with whole abdominal organ annotations. Several state-of-the-art
segmentation methods are evaluated on this dataset. We also invited clinical
oncologists to revise the model predictions to measure the gap between the
deep learning methods and real oncologists. We further introduce and evaluate
a new scribble-based weakly supervised segmentation method on this dataset.
This work provides a new benchmark for the abdominal multi-organ segmentation
task, and these experiments can serve as baselines for future research and
clinical application development. The codebase and dataset will be released at:
https://github.com/HiLab-git/WORD
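Benchmarks of this kind typically report per-organ overlap metrics such as the Dice similarity coefficient. The summary above does not spell the metric out, so the following is a minimal illustrative sketch on synthetic label volumes (the `dice_score` helper and the 16-class dummy data are assumptions for illustration, not code from the WORD repository):

```python
import numpy as np

def dice_score(pred, gt, label):
    """Dice similarity coefficient for one organ label: 2|A∩B| / (|A| + |B|)."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # label absent from both volumes: treat as perfect agreement
    return 2.0 * np.logical_and(p, g).sum() / denom

# Synthetic 3D label volumes with 16 organ classes (background = 0),
# standing in for a CT annotation like those in WORD.
rng = np.random.default_rng(0)
gt = rng.integers(0, 17, size=(8, 64, 64))
pred = gt.copy()
pred[:, :8, :] = 0  # corrupt a slab to simulate prediction errors

# Per-organ Dice and the mean over the 16 foreground labels.
scores = {label: dice_score(pred, gt, label) for label in range(1, 17)}
mean_dice = sum(scores.values()) / len(scores)
```

Evaluating each state-of-the-art method this way, organ by organ, is what allows the per-structure comparison against oncologist-revised contours that the abstract describes.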
Related papers
- AbdomenAtlas: A Large-Scale, Detailed-Annotated, & Multi-Center Dataset for Efficient Transfer Learning and Open Algorithmic Benchmarking [16.524596737411006]
We introduce the largest abdominal CT dataset (termed AbdomenAtlas) of 20,460 three-dimensional CT volumes from 112 hospitals across diverse populations, geographies, and facilities.
AbdomenAtlas provides 673K high-quality masks of anatomical structures in the abdominal region, annotated by a team of 10 radiologists with the help of AI algorithms.
arXiv Detail & Related papers (2024-07-23T17:59:44Z) - Pelvic floor MRI segmentation based on semi-supervised deep learning [3.764963091541598]
Deep learning-enabled semantic segmentation has facilitated the three-dimensional geometric reconstruction of pelvic floor organs.
Labeling pelvic floor MRI segmentations is labor-intensive and costly, leading to a scarcity of labels.
Insufficient segmentation labels limit the precise segmentation and reconstruction of pelvic floor organs.
arXiv Detail & Related papers (2023-11-06T13:54:52Z) - Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z) - HALOS: Hallucination-free Organ Segmentation after Organ Resection Surgery [3.079885946230076]
State-of-the-art segmentation models often lead to organ hallucinations, i.e., false-positive predictions of organs.
We propose HALOS for abdominal organ segmentation in MR images that handles cases after organ resection surgery.
arXiv Detail & Related papers (2023-03-14T09:05:19Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Boundary-Aware Network for Abdominal Multi-Organ Segmentation [21.079667938055668]
We propose a boundary-aware network (BA-Net) to segment abdominal organs on CT scans and MRI scans.
The results demonstrate that BA-Net is superior to nnUNet on both segmentation tasks.
arXiv Detail & Related papers (2022-08-29T02:24:02Z) - Every Annotation Counts: Multi-label Deep Supervision for Medical Image Segmentation [85.0078917060652]
We propose a semi-weakly supervised segmentation algorithm to overcome this barrier.
Our approach is based on a new formulation of deep supervision and student-teacher model.
With our novel training regime for segmentation that flexibly makes use of images that are either fully labeled, marked with bounding boxes, just global labels, or not at all, we are able to cut the requirement for expensive labels by 94.22%.
arXiv Detail & Related papers (2021-04-27T14:51:19Z) - Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - AbdomenCT-1K: Is Abdominal Organ Segmentation A Solved Problem? [30.338209680140913]
This paper presents a large and diverse abdominal CT organ segmentation dataset, AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers.
We conduct a large-scale study for liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of the SOTA methods.
To advance the unsolved problems, we build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning.
arXiv Detail & Related papers (2020-10-28T08:15:27Z) - 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for
Multi-detector CT Images [121.31355003451152]
Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.