AnatoMix: Anatomy-aware Data Augmentation for Multi-organ Segmentation
- URL: http://arxiv.org/abs/2403.03326v1
- Date: Tue, 5 Mar 2024 21:07:50 GMT
- Title: AnatoMix: Anatomy-aware Data Augmentation for Multi-organ Segmentation
- Authors: Chang Liu, Fuxin Fan, Annette Schwarz, Andreas Maier
- Abstract summary: We propose a novel data augmentation strategy for increasing the generalizability of multi-organ segmentation datasets.
By object-level matching and manipulation, our method is able to generate new images with correct anatomy.
Our augmentation method leads to a mean Dice score of 76.1, compared with 74.8 for the baseline method.
- Score: 6.471203541258319
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-organ segmentation in medical images is a widely researched task
and can save clinicians much manual effort in daily routines. Automating the
organ segmentation process using deep learning (DL) is a promising solution, and
state-of-the-art segmentation models are achieving promising accuracy. In this
work, we propose AnatoMix, a novel data augmentation strategy for increasing the
generalizability of multi-organ segmentation datasets. By object-level matching
and manipulation, our method is able to generate new images with correct
anatomy, i.e., organ segmentation masks, exponentially increasing the size of
the segmentation dataset. Initial experiments on a public CT dataset investigate
how our method influences segmentation performance. Our augmentation method
achieves a mean Dice score of 76.1, compared with 74.8 for the baseline method.
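
As a rough illustration of the object-level matching and manipulation described in the abstract, the sketch below copies one organ (image intensities plus mask voxels) from a donor CT volume into a host volume at the location of the host's corresponding organ. The function names, the centroid-based matching heuristic, and the hard-paste blending are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def organ_centroid(mask: np.ndarray, label: int) -> np.ndarray:
    """Voxel-space centroid of one organ label in a segmentation mask."""
    return np.argwhere(mask == label).mean(axis=0)

def mix_organ(host_img, host_mask, donor_img, donor_mask, label):
    """Paste the donor's organ `label` into the host at the host organ's position.

    Hypothetical sketch: a real pipeline would also need registration and
    intensity harmonization; here the organs are only centroid-aligned.
    """
    shift = np.round(organ_centroid(host_mask, label)
                     - organ_centroid(donor_mask, label)).astype(int)
    src = np.argwhere(donor_mask == label)           # donor voxels of this organ
    dst = src + shift                                # translated target voxels
    inside = np.all((dst >= 0) & (dst < host_img.shape), axis=1)
    src, dst = src[inside], dst[inside]
    new_img, new_mask = host_img.copy(), host_mask.copy()
    new_img[tuple(dst.T)] = donor_img[tuple(src.T)]  # overwrite intensities
    new_mask[tuple(dst.T)] = label                   # keep the mask consistent with the new image
    return new_img, new_mask
```

Applied per organ across all donor-host pairs, such mixing yields combinatorially many new labelled volumes from a fixed set of scans, which is the sense in which the dataset size can grow exponentially.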
Related papers
- A Novel Momentum-Based Deep Learning Techniques for Medical Image Classification and Segmentation [3.268679466097746]
Accurately segmenting different organs from medical images is a critical prerequisite for computer-assisted diagnosis and intervention planning.
This study proposes a deep learning-based approach for segmenting various organs from CT and MRI scans and classifying diseases.
arXiv Detail & Related papers (2024-08-11T04:12:35Z)
- Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble [22.82094545786408]
Multi-organ segmentation is a fundamental task in medical image analysis.
Due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited.
We propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage.
arXiv Detail & Related papers (2023-04-14T13:39:39Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
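
The dynamic head mentioned in the entry above can be pictured as a small controller that turns a task code into the weights of the final convolution. The sketch below is a hedged approximation of that idea; the layer sizes, the one-hot task encoding, and the single-channel output are assumptions, not the TransDoDNet architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSegHead(nn.Module):
    """Toy dynamic head: a controller generates a per-task 1x1x1 conv on the fly."""

    def __init__(self, feat_ch: int = 32, num_tasks: int = 7):
        super().__init__()
        # Predicts one kernel (feat_ch weights) plus a bias for each input sample.
        self.controller = nn.Linear(num_tasks, feat_ch + 1)

    def forward(self, feats: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W) decoder features; task_id: (B,) integer task indices.
        task_code = F.one_hot(task_id, self.controller.in_features).float()
        params = self.controller(task_code)                 # (B, C + 1)
        weight, bias = params[:, :-1], params[:, -1]
        # Apply the dynamically generated 1x1x1 kernel as a weighted channel sum.
        logits = torch.einsum("bcdhw,bc->bdhw", feats, weight) + bias.view(-1, 1, 1, 1)
        return logits.unsqueeze(1)                          # (B, 1, D, H, W) foreground logits
```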
- Data variation-aware medical image segmentation [0.0]
We propose an approach that improves on our previous work in this area.
In experiments with a real clinical dataset of CT scans with prostate segmentations, our approach provides an improvement of several percentage points in terms of Dice and surface Dice coefficients.
arXiv Detail & Related papers (2022-02-24T13:35:34Z)
- Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z)
- Co-Generation and Segmentation for Generalized Surgical Instrument Segmentation on Unlabelled Data [49.419268399590045]
Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays.
Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data.
In this paper, we demonstrate the limited generalizability of these methods on different datasets, including human robot-assisted surgeries.
arXiv Detail & Related papers (2021-03-16T18:41:18Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
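
A minimal sketch of the mutual knowledge distillation idea from the entry above: two modality-specific segmentation networks supervise each other through softened predictions, in addition to their own labels. It assumes the two sets of logits come from corresponding (e.g. paired or translated) inputs; the temperature, weighting, and symmetric KL form are illustrative choices, not the paper's exact scheme.

```python
import torch.nn.functional as F

def mutual_kd_loss(logits_ct, logits_mr, labels_ct, labels_mr, temp=2.0, alpha=0.5):
    """Per-modality cross-entropy plus a symmetric KL term between softened outputs."""
    ce = F.cross_entropy(logits_ct, labels_ct) + F.cross_entropy(logits_mr, labels_mr)
    log_p_ct = F.log_softmax(logits_ct / temp, dim=1)
    log_p_mr = F.log_softmax(logits_mr / temp, dim=1)
    kd = (F.kl_div(log_p_ct, log_p_mr.exp(), reduction="batchmean")
          + F.kl_div(log_p_mr, log_p_ct.exp(), reduction="batchmean")) * temp ** 2
    return ce + alpha * kd
```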
- Bayesian Generative Models for Knowledge Transfer in MRI Semantic Segmentation Problems [15.24006130659201]
We propose a knowledge transfer method between diseases via the Generative Bayesian Prior network.
Our approach is compared to a pre-training approach and random initialization, and obtains the best results in terms of the Dice Similarity Coefficient metric for small subsets of the Brain Tumor 2018 database.
arXiv Detail & Related papers (2020-05-26T11:42:17Z)
- Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation, which validated its promising performance.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
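
To make the end-to-end coupling of augmentation and segmentation in the entry above concrete, here is a simplified, fully differentiable stand-in for the paper's reinforcement-learning-driven augmentation: a learnable augmentation module and the segmentation network are updated together, with a consistency term between predictions on the original and augmented views. The module names, the consistency formulation, and the weighting are assumptions.

```python
import torch.nn.functional as F

def joint_step(augmenter, segmenter, optimizer, image, mask, lam=0.1):
    """One end-to-end update of an augmentation module and a segmentation network.

    `optimizer` is assumed to cover the parameters of both modules.
    """
    aug_image = augmenter(image)                     # learnable augmentation of the input
    logits_orig = segmenter(image)
    logits_aug = segmenter(aug_image)
    seg_loss = F.cross_entropy(logits_orig, mask) + F.cross_entropy(logits_aug, mask)
    # Consistency between the two views regularizes both modules.
    consistency = F.mse_loss(logits_aug.softmax(dim=1), logits_orig.softmax(dim=1))
    loss = seg_loss + lam * consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```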
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.