AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile
Medical Image Segmentation
- URL: http://arxiv.org/abs/2206.08023v1
- Date: Thu, 16 Jun 2022 09:27:56 GMT
- Authors: Yuanfeng Ji, Haotian Bai, Jie Yang, Chongjian Ge, Ye Zhu, Ruimao
Zhang, Zhen Li, Lingyan Zhang, Wanling Ma, Xiang Wan, Ping Luo
- Abstract summary: AMOS is a large-scale, diverse, clinical dataset for abdominal organ segmentation.
It provides challenging examples and a test-bed for studying robust segmentation algorithms under diverse targets and scenarios.
We benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset.
- Score: 32.938687630678096
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Despite the considerable progress in automatic abdominal multi-organ
segmentation from CT/MRI scans in recent years, a comprehensive evaluation of
the models' capabilities is hampered by the lack of a large-scale benchmark
from diverse clinical scenarios. Constrained by the high cost of collecting and
labeling 3D medical data, most of the deep learning models to date are driven
by datasets with a limited number of organs of interest or samples, which still
limits the power of modern deep models and makes it difficult to provide a
fully comprehensive and fair estimate of various methods. To mitigate the
limitations, we present AMOS, a large-scale, diverse, clinical dataset for
abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected
from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease
patients, each with voxel-level annotations of 15 abdominal organs, providing
challenging examples and a test-bed for studying robust segmentation algorithms
under diverse targets and scenarios. We further benchmark several
state-of-the-art medical segmentation models to evaluate the status of the
existing methods on this new challenging dataset. We have made our datasets,
benchmark servers, and baselines publicly available, and hope to inspire future
research. Information can be found at https://amos22.grand-challenge.org.
Related papers
- SegHeD: Segmentation of Heterogeneous Data for Multiple Sclerosis Lesions with Anatomical Constraints [1.498084483844508]
Machine learning models have demonstrated a great potential for automated MS lesion segmentation.
SegHeD is a novel multi-dataset multi-task segmentation model that can incorporate heterogeneous data as input.
SegHeD is assessed on five MS datasets and achieves a high performance in all, new, and vanishing-lesion segmentation.
arXiv Detail & Related papers (2024-10-02T17:21:43Z)
- MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans.
Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss.
We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
arXiv Detail & Related papers (2024-09-28T23:10:37Z)
- MOSMOS: Multi-organ segmentation facilitated by medical report supervision [10.396987980136602]
We propose a novel pre-training & fine-tuning framework for Multi-Organ Supervision (MOS).
Specifically, we first introduce global contrastive learning to align medical image-report pairs in the pre-training stage.
To remedy the discrepancy between image-level alignment and pixel-level segmentation, we further leverage multi-label recognition to implicitly learn the semantic correspondence between image pixels and organ tags.
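The global contrastive alignment of image-report pairs mentioned above is typically a symmetric InfoNCE-style objective, as popularized by CLIP. A minimal sketch follows, assuming precomputed embedding vectors; the function name, embeddings, and temperature are illustrative, not taken from the paper.

```python
import math

# Hedged sketch of global image-report contrastive alignment
# (symmetric InfoNCE loss over cosine similarities, CLIP-style).
# Matched image/report pairs share the same index in each list.
def info_nce(image_emb, report_emb, temperature=0.07):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    imgs = [normalize(v) for v in image_emb]
    reps = [normalize(v) for v in report_emb]
    n = len(imgs)
    loss = 0.0
    for i in range(n):
        # image -> report direction: pull the matched report, push the rest
        logits = [dot(imgs[i], reps[j]) / temperature for j in range(n)]
        loss += math.log(sum(math.exp(l) for l in logits)) - logits[i]
        # report -> image direction (symmetric term)
        logits_t = [dot(reps[i], imgs[j]) / temperature for j in range(n)]
        loss += math.log(sum(math.exp(l) for l in logits_t)) - logits_t[i]
    return loss / (2 * n)

imgs = [[1.0, 0.0], [0.0, 1.0]]
reps = [[1.0, 0.0], [0.0, 1.0]]
print(info_nce(imgs, reps))  # near zero for perfectly matched pairs
```

Minimizing this loss pulls each image embedding toward its paired report and away from the other reports in the batch, which is what makes the learned representation transferable to downstream segmentation fine-tuning.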
arXiv Detail & Related papers (2024-09-04T03:46:17Z)
- Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation [113.5002649181103]
We train open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology.
For training, we assemble a large dataset of over 697 thousand radiology image-text pairs.
For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation.
Inference with LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
arXiv Detail & Related papers (2024-03-12T18:12:02Z)
- Multi-Modality Abdominal Multi-Organ Segmentation with Deep Supervised 3D Segmentation Model [0.12183405753834559]
We present our solution for the AMOS 2022 challenge.
We employ a residual U-Net with deep supervision as our base model.
The experimental results show that the mean Dice similarity coefficient and normalized surface dice scores are 0.8504 for the CT-only task and 0.8476 for the CT/MRI task, respectively.
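The Dice similarity coefficient reported here is the standard overlap measure for segmentation, Dice = 2|A ∩ B| / (|A| + |B|) over binary voxel masks. A minimal sketch, with illustrative flattened masks rather than real scan data:

```python
# Hedged sketch of the Dice similarity coefficient over binary
# voxel masks (flattened to 1-D lists for illustration).
def dice_coefficient(pred, target):
    """Dice = 2|A intersect B| / (|A| + |B|); 1.0 for two empty masks."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, target))  # 2*2/(3+3) ≈ 0.667
```

The normalized surface dice also reported above is a boundary-distance variant (agreement of the two mask surfaces within a tolerance) and needs voxel spacing information, so it is omitted from this sketch.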
arXiv Detail & Related papers (2022-08-24T03:37:54Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- AbdomenCT-1K: Is Abdominal Organ Segmentation A Solved Problem? [30.338209680140913]
This paper presents a large and diverse abdominal CT organ segmentation dataset, AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers.
We conduct a large-scale study for liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of the SOTA methods.
To advance the unsolved problems, we build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning.
arXiv Detail & Related papers (2020-10-28T08:15:27Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by 4.2% to 9.4%.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.