Unleashing the Strengths of Unlabeled Data in Pan-cancer Abdominal Organ
Quantification: the FLARE22 Challenge
- URL: http://arxiv.org/abs/2308.05862v1
- Date: Thu, 10 Aug 2023 21:51:48 GMT
- Title: Unleashing the Strengths of Unlabeled Data in Pan-cancer Abdominal Organ
Quantification: the FLARE22 Challenge
- Authors: Jun Ma, Yao Zhang, Song Gu, Cheng Ge, Shihao Ma, Adamo Young, Cheng
Zhu, Kangkang Meng, Xin Yang, Ziyan Huang, Fan Zhang, Wentao Liu, YuanKe Pan,
Shoujin Huang, Jiacheng Wang, Mingze Sun, Weixin Xu, Dengqiang Jia, Jae Won
Choi, Natália Alves, Bram de Wilde, Gregor Koehler, Yajun Wu, Manuel
Wiesenfarth, Qiongjie Zhu, Guoqiang Dong, Jian He, the FLARE Challenge
Consortium, and Bo Wang
- Abstract summary: We organized the FLARE 2022 Challenge, the largest abdominal organ analysis challenge to date, to benchmark fast, low-resource, accurate, annotation-efficient, and generalized AI algorithms.
We constructed an intercontinental and multinational dataset from more than 50 medical groups, including Computed Tomography (CT) scans with different races, diseases, phases, and manufacturers.
Best-performing algorithms successfully generalized to holdout external validation sets, achieving a median DSC of 89.5%, 90.9%, and 88.3% on North American, European, and Asian cohorts, respectively.
- Score: 18.48059187629883
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Quantitative organ assessment is an essential step in automated abdominal
disease diagnosis and treatment planning. Artificial intelligence (AI) has
shown great potential to automatize this process. However, most existing AI
algorithms rely on many expert annotations and lack a comprehensive evaluation
of accuracy and efficiency in real-world multinational settings. To overcome
these limitations, we organized the FLARE 2022 Challenge, the largest abdominal
organ analysis challenge to date, to benchmark fast, low-resource, accurate,
annotation-efficient, and generalized AI algorithms. We constructed an
intercontinental and multinational dataset from more than 50 medical groups,
including Computed Tomography (CT) scans with different races, diseases,
phases, and manufacturers. We independently validated that a set of AI
algorithms achieved a median Dice Similarity Coefficient (DSC) of 90.0% by
using 50 labeled scans and 2000 unlabeled scans, which can significantly reduce
annotation requirements. The best-performing algorithms successfully
generalized to holdout external validation sets, achieving a median DSC of
89.5%, 90.9%, and 88.3% on North American, European, and Asian cohorts,
respectively. They also enabled automatic extraction of key organ biology
features, which was labor-intensive with traditional manual measurements. This
opens the potential to use unlabeled data to boost performance and alleviate
annotation shortages for modern AI models.
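For context, the headline numbers above are Dice Similarity Coefficients (DSC), which measure the voxel-wise overlap between a predicted segmentation A and the reference annotation B as 2|A ∩ B| / (|A| + |B|). The following is a minimal NumPy sketch of that overlap term only, not the challenge's official evaluation code; the function name and the binary-mask convention are our own assumptions.

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks.

    pred, ref: arrays of identical shape (e.g. 3D organ masks) that are
    boolean or 0/1-valued; any nonzero voxel counts as foreground.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        # Both masks empty: conventionally scored as perfect agreement.
        return 1.0
    return 2.0 * intersection / denom

# Interpretation: a reported median DSC of 0.90 means that, for the median
# case, the overlap term 2|A ∩ B| equals 90% of |A| + |B|.
```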
Related papers
- Artificial Intelligence to Assess Dental Findings from Panoramic Radiographs -- A Multinational Study [3.8184255731311287]
We analyzed 6,669 dental panoramic radiographs (DPRs) from three data sets.
Performance metrics included sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC-ROC).
The AI system demonstrated comparable or superior performance to human readers.
arXiv Detail & Related papers (2025-02-14T16:34:21Z) - Multi-Class Segmentation of Aortic Branches and Zones in Computed Tomography Angiography: The AortaSeg24 Challenge [55.252714550918824]
The AortaSeg24 MICCAI Challenge introduced the first dataset of 100 CTA volumes annotated for 23 clinically relevant aortic branches and zones.
This paper presents the challenge design, dataset details, evaluation metrics, and an in-depth analysis of the top-performing algorithms.
arXiv Detail & Related papers (2025-02-07T21:09:05Z) - ScaleMAI: Accelerating the Development of Trusted Datasets and AI Models [46.80682547774335]
We propose ScaleMAI, an agent of AI-integrated data curation and annotation.
First, ScaleMAI creates a dataset of 25,362 CT scans, including per-voxel annotations for benign/malignant tumors and 24 anatomical structures.
Second, through progressive human-in-the-loop iterations, ScaleMAI provides a Flagship AI Model that can approach the proficiency of expert annotators in detecting pancreatic tumors.
arXiv Detail & Related papers (2025-01-06T22:12:00Z) - Leveraging AI for Automatic Classification of PCOS Using Ultrasound Imaging [0.0]
The AUTO-PCOS Classification Challenge seeks to advance the diagnostic capabilities of artificial intelligence (AI) in identifying Polycystic Ovary Syndrome (PCOS).
This report outlines our methodology for building a robust AI pipeline utilizing transfer learning with the InceptionV3 architecture to achieve high accuracy in binary classification.
arXiv Detail & Related papers (2024-12-30T11:56:11Z) - AbdomenAtlas: A Large-Scale, Detailed-Annotated, & Multi-Center Dataset for Efficient Transfer Learning and Open Algorithmic Benchmarking [16.524596737411006]
We introduce the largest abdominal CT dataset (termed AbdomenAtlas) of 20,460 three-dimensional CT volumes from 112 hospitals across diverse populations, geographies, and facilities.
AbdomenAtlas provides 673K high-quality masks of anatomical structures in the abdominal region, annotated by a team of 10 radiologists with the help of AI algorithms.
arXiv Detail & Related papers (2024-07-23T17:59:44Z) - A Robust Ensemble Algorithm for Ischemic Stroke Lesion Segmentation: Generalizability and Clinical Utility Beyond the ISLES Challenge [30.611482996378683]
Image and disease variability hinder the development of generalizable AI algorithms with clinical value.
We present a novel ensemble algorithm derived from the 2022 Ischemic Stroke Lesion (ISLES) challenge.
We combined top-performing algorithms into an ensemble model that overcomes the limitations of individual solutions.
arXiv Detail & Related papers (2024-03-28T13:56:26Z) - Towards Unifying Anatomy Segmentation: Automated Generation of a
Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z) - WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic
Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z) - Advancing COVID-19 Diagnosis with Privacy-Preserving Collaboration in
Artificial Intelligence [79.038671794961]
We launch the Unified CT-COVID AI Diagnostic Initiative (UCADI), in which the AI model can be trained in a distributed manner and executed independently at each host institution.
Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK.
arXiv Detail & Related papers (2021-11-18T00:43:41Z) - Weak labels and anatomical knowledge: making deep learning practical for
intracranial aneurysm detection in TOF-MRA [0.0]
We develop a fully automated, deep neural network that is trained utilizing oversized weak labels.
Our network achieves an average sensitivity of 77% on our in-house data, with a mean False Positive (FP) rate of 0.72 per patient.
arXiv Detail & Related papers (2021-03-10T16:31:54Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for
Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)