Multi-Modality Abdominal Multi-Organ Segmentation with Deep Supervised 3D Segmentation Model
- URL: http://arxiv.org/abs/2208.12041v1
- Date: Wed, 24 Aug 2022 03:37:54 GMT
- Title: Multi-Modality Abdominal Multi-Organ Segmentation with Deep Supervised 3D Segmentation Model
- Authors: Satoshi Kondo, Satoshi Kasai
- Abstract summary: We present our solution for the AMOS 2022 challenge.
We employ a residual U-Net with deep supervision as our base model.
The experimental results show that the mean scores of the Dice similarity coefficient and normalized surface dice are 0.8504 and 0.8476 for the CT-only task and the CT/MRI task, respectively.
- Score: 0.12183405753834559
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: To promote the development of medical image segmentation technology, AMOS, a
large-scale abdominal multi-organ dataset for versatile medical image
segmentation, has been released, and the AMOS 2022 challenge is held using this dataset.
In this report, we present our solution for the AMOS 2022 challenge. We employ
a residual U-Net with deep supervision as our base model. The experimental
results show that the mean scores of the Dice similarity coefficient and
normalized surface dice are 0.8504 and 0.8476 for the CT-only task and the
CT/MRI task, respectively.
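The report does not include code; as a rough illustration of the kind of base model it describes, the sketch below shows a 3D residual convolution block and a deep-supervision loss that sums weighted cross-entropy terms over several decoder scales. This is a minimal PyTorch sketch with hypothetical module and parameter names, not the authors' implementation.
```python
# Minimal sketch (not the authors' code): a 3D residual block and a
# deep-supervision loss over auxiliary decoder outputs at several scales.
import torch.nn as nn
import torch.nn.functional as F

class ResBlock3D(nn.Module):
    """Two 3x3x3 convs with instance norm and a residual connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1)
        self.norm1 = nn.InstanceNorm3d(out_ch)
        self.norm2 = nn.InstanceNorm3d(out_ch)
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        h = F.leaky_relu(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return F.leaky_relu(h + self.skip(x))

def deep_supervision_loss(aux_logits, target, weights=(1.0, 0.5, 0.25)):
    """Cross-entropy summed over decoder scales; lower-resolution auxiliary
    heads are upsampled to the label size and weighted less."""
    loss = 0.0
    for w, logits in zip(weights, aux_logits):
        logits = F.interpolate(logits, size=target.shape[-3:],
                               mode="trilinear", align_corners=False)
        loss = loss + w * F.cross_entropy(logits, target)
    return loss
```
In practice the auxiliary heads are dropped at inference time and only the full-resolution output is kept.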
Related papers
- MOSMOS: Multi-organ segmentation facilitated by medical report supervision [10.396987980136602]
We propose a novel pre-training & fine-tuning framework for Multi-Organ Supervision (MOS).
Specifically, we first introduce global contrastive learning to align medical image-report pairs in the pre-training stage.
To remedy the discrepancy between this image-level alignment and pixel-level segmentation, we further leverage multi-label recognition to implicitly learn the semantic correspondence between image pixels and organ tags.
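The global contrastive learning step mentioned above is, in spirit, a CLIP-style alignment of paired image and report embeddings. Below is a minimal sketch of the standard symmetric InfoNCE formulation such a step typically builds on; all names are hypothetical and this is not the MOSMOS code.
```python
# Toy sketch of image-report contrastive alignment (symmetric InfoNCE);
# not the MOSMOS implementation, just the standard formulation it builds on.
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """img_emb, txt_emb: (B, D) embeddings of paired images and reports."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Matching pairs sit on the diagonal; contrast each image against all
    # reports in the batch and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```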
arXiv Detail & Related papers (2024-09-04T03:46:17Z)
- SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation [36.95030121663565]
Supervised Finetuning (SFT) serves as an effective way to adapt foundation models to specific downstream tasks.
We propose SAM-Med3D-MoE, a novel framework that seamlessly integrates task-specific finetuned models with the foundational model.
Our experiments demonstrate the efficacy of SAM-Med3D-MoE, with an average Dice performance increase from 53 to 56.4 on 15 specific classes.
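No routing details are given in this summary; the sketch below only illustrates the generic mixture-of-experts pattern the title refers to, in which a learned gate blends the logits of a frozen foundation model and task-specific finetuned experts. It is a hypothetical sketch, not the SAM-Med3D-MoE implementation.
```python
# Generic mixture-of-experts blending sketch (hypothetical): a learned gate
# weights the logits of a frozen foundation model and task-specific experts.
import torch
import torch.nn as nn

class LogitMoE(nn.Module):
    def __init__(self, experts, feat_dim):
        super().__init__()
        self.experts = nn.ModuleList(experts)          # includes the foundation model
        self.gate = nn.Linear(feat_dim, len(experts))  # per-sample routing weights

    def forward(self, x, gate_features):
        weights = torch.softmax(self.gate(gate_features), dim=-1)  # (B, E)
        logits = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, ...)
        while weights.dim() < logits.dim():             # broadcast over spatial dims
            weights = weights.unsqueeze(-1)
        return (weights * logits).sum(dim=1)            # weighted blend of experts
```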
arXiv Detail & Related papers (2024-07-06T03:03:45Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p < 0.001; and 0.762 versus 0.542, p < 0.001).
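Both the AMOS submission above and TotalSegmentator MRI report Dice similarity coefficients. For reference, a per-class Dice over label volumes can be computed as in the minimal NumPy sketch below; challenge evaluations typically also compute a normalized surface dice with a tolerance, which is omitted here.
```python
# Per-class Dice similarity coefficient between two integer label volumes
# (illustrative only; not the evaluation code of either challenge).
import numpy as np

def dice_per_class(pred, gt, num_classes):
    """pred, gt: integer label volumes of identical shape."""
    scores = {}
    for c in range(1, num_classes):          # skip background class 0
        p, g = (pred == c), (gt == c)
        denom = p.sum() + g.sum()
        if denom == 0:                       # class absent in both volumes
            scores[c] = np.nan
            continue
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom
    return scores
```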
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- RadGenome-Chest CT: A Grounded Vision-Language Dataset for Chest CT Analysis [56.57177181778517]
RadGenome-Chest CT is a large-scale, region-guided 3D chest CT interpretation dataset based on CT-RATE.
We leverage the latest powerful universal segmentation and large language models to extend the original datasets.
arXiv Detail & Related papers (2024-04-25T17:11:37Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Image-level supervision and self-training for transformer-based cross-modality tumor segmentation [2.29206349318258]
We propose a new semi-supervised training strategy called MoDATTS.
MoDATTS is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets.
We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data are annotated.
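MoDATTS combines image-level supervision, modality translation, and self-training; the sketch below illustrates only the generic pseudo-labeling step of self-training, in which confident teacher predictions on unannotated target scans become training targets for the student. Names and thresholds are hypothetical; this is not the MoDATTS pipeline.
```python
# Generic self-training / pseudo-labeling step (hypothetical sketch):
# confident teacher predictions on unlabeled volumes supervise the student.
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(teacher, unlabeled_batch, threshold=0.9):
    probs = torch.softmax(teacher(unlabeled_batch), dim=1)   # (B, C, D, H, W)
    confidence, labels = probs.max(dim=1)                    # per-voxel argmax
    mask = confidence >= threshold                           # keep confident voxels only
    return labels, mask

def self_training_loss(student_logits, pseudo_labels, mask):
    loss = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)
```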
arXiv Detail & Related papers (2023-09-17T11:50:12Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
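The adapter injection described above follows the general bottleneck-adapter pattern of parameter-efficient fine-tuning. The sketch below shows one hypothetical way such a 3D adapter could look: a small down-project / slice-axis 3D convolution / up-project branch added residually to the token stream while the pre-trained 2D weights stay frozen. It is not the MA-SAM code.
```python
# Generic bottleneck adapter sketch (hypothetical, not the MA-SAM code):
# a small residual branch mixing information along the third dimension
# while the pre-trained 2D transformer weights remain frozen.
import torch.nn as nn

class Adapter3D(nn.Module):
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.depth_conv = nn.Conv3d(bottleneck, bottleneck,
                                    kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, tokens, depth, height, width):
        # tokens: (B, depth*height*width, dim) from a transformer block
        b, n, d = tokens.shape
        h = self.act(self.down(tokens))                        # (B, N, bottleneck)
        h = h.transpose(1, 2).reshape(b, -1, depth, height, width)
        h = self.act(self.depth_conv(h))                       # mix along slice axis
        h = h.reshape(b, -1, n).transpose(1, 2)                # back to token layout
        return tokens + self.up(h)                             # residual; backbone frozen
```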
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation [5.547422331445511]
We report quantitative and qualitative zero-shot segmentation results on nine medical image segmentation benchmarks.
Our study indicates the versatility of generalist vision foundation models on medical imaging.
arXiv Detail & Related papers (2023-04-25T08:07:59Z)
- AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation [32.938687630678096]
AMOS is a large-scale, diverse, clinical dataset for abdominal organ segmentation.
It provides challenging examples and test-bed for studying robust segmentation algorithms under diverse targets and scenarios.
We benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset.
arXiv Detail & Related papers (2022-06-16T09:27:56Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and across sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.