MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation
- URL: http://arxiv.org/abs/2303.14444v2
- Date: Tue, 19 Sep 2023 14:03:10 GMT
- Title: MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation
- Authors: Constantin Ulrich, Fabian Isensee, Tassilo Wald, Maximilian Zenk,
Michael Baumgartner and Klaus H. Maier-Hein
- Abstract summary: Current practices limit model training and supervised pre-training to one or a few similar datasets.
We propose MultiTalent, a method that leverages multiple CT datasets with diverse and conflicting class definitions.
- Score: 1.146419670457951
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The medical imaging community generates a wealth of datasets, many of which
are openly accessible and annotated for specific diseases and tasks such as
multi-organ or lesion segmentation. Current practices continue to limit model
training and supervised pre-training to one or a few similar datasets,
neglecting the synergistic potential of other available annotated data. We
propose MultiTalent, a method that leverages multiple CT datasets with diverse
and conflicting class definitions to train a single model for a comprehensive
structure segmentation. Our results demonstrate systematically improved
segmentation performance compared to previous related approaches, as well as to
single-dataset training with state-of-the-art methods, especially for lesion
segmentation and other challenging structures. We show that
MultiTalent also represents a powerful foundation model that offers a superior
pre-training for various segmentation tasks compared to commonly used
supervised or unsupervised pre-training baselines. Our findings offer a new
direction for the medical imaging community to effectively utilize the wealth
of available data for improved segmentation performance. The code and model
weights will be published here: [tba]
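The abstract describes training one model on multiple datasets whose class definitions differ and partly conflict. A common way to make this work is to compute the loss only over the classes each dataset actually annotates; the sketch below illustrates that idea in plain numpy. All names (`GLOBAL_CLASSES`, `DATASET_CLASSES`, `masked_bce_loss`) and the masking scheme are illustrative assumptions, not the MultiTalent implementation.

```python
import numpy as np

# Global label set spanning all datasets (hypothetical classes).
GLOBAL_CLASSES = ["liver", "spleen", "kidney", "lesion"]

# Per-dataset class maps: which global classes each dataset annotates.
# A class absent here is simply unlabeled in that dataset, not background.
DATASET_CLASSES = {
    "dataset_A": ["liver", "spleen"],   # e.g. a multi-organ dataset
    "dataset_B": ["liver", "lesion"],   # e.g. a lesion dataset
}

def masked_bce_loss(pred, target, dataset_name):
    """Binary cross-entropy averaged only over the classes annotated in
    the given dataset; unannotated classes contribute no gradient."""
    mask = np.array(
        [c in DATASET_CLASSES[dataset_name] for c in GLOBAL_CLASSES],
        dtype=float,
    )
    eps = 1e-7  # numerical stability for log
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((bce * mask).sum() / mask.sum())

# Example: one voxel's per-class sigmoid outputs vs. dataset_B's labels.
pred = np.array([0.9, 0.8, 0.1, 0.7])    # liver, spleen, kidney, lesion
target = np.array([1.0, 0.0, 0.0, 1.0])  # spleen/kidney unlabeled in dataset_B
loss = masked_bce_loss(pred, target, "dataset_B")  # spleen/kidney masked out
```

With this masking, the confident but unlabeled spleen prediction (0.8) incurs no penalty, so datasets with conflicting definitions do not punish each other's predictions.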
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation [9.068045557591612]
We propose a cost-effective alternative that harnesses multi-source data with only partial or sparse segmentation labels for training.
We devise strategies for model self-disambiguation, prior knowledge incorporation, and imbalance mitigation to tackle challenges associated with inconsistently labeled multi-source data.
arXiv Detail & Related papers (2023-11-17T18:28:32Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble [22.82094545786408]
Multi-organ segmentation is a fundamental task in medical image analysis.
Due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited.
We propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage.
arXiv Detail & Related papers (2023-04-14T13:39:39Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-sourced a strong MedISeg repository, where each component has the advantage of plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Generalized Multi-Task Learning from Substantially Unlabeled Multi-Source Medical Image Data [11.061381376559053]
MultiMix is a new multi-task learning model that jointly learns disease classification and anatomical segmentation in a semi-supervised manner.
Our experiments with varying quantities of multi-source labeled data in the training sets confirm the effectiveness of MultiMix.
arXiv Detail & Related papers (2021-10-25T18:09:19Z)
- Multi-task Semi-supervised Learning for Pulmonary Lobe Segmentation [2.8016091833446617]
Pulmonary lobe segmentation is an important preprocessing task for the analysis of lung diseases.
Deep learning based methods can outperform traditional approaches.
Deep multi-task learning is expected to utilize labels of multiple different structures.
arXiv Detail & Related papers (2021-04-22T12:33:30Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
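The last entry above describes sharing all convolutional kernels across CT and MRI. A minimal toy sketch of that kind of cross-modality parameter sharing, with modality-specific normalization as the only unshared part, is shown below; the architecture, the normalization statistics, and all names (`conv2d_valid`, `forward`, `norm_params`) are assumptions for illustration, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single 3x3 kernel shared by both modalities (toy stand-in for
# sharing all convolutional kernels across CT and MRI).
shared_kernel = rng.standard_normal((3, 3))

# Modality-specific normalization statistics (hypothetical values):
# the only parameters that are NOT shared in this sketch.
norm_params = {
    "CT":  {"mean": 0.0, "std": 1.0},
    "MRI": {"mean": 0.5, "std": 2.0},
}

def conv2d_valid(image, kernel):
    """Naive 'valid'-padding 2-D cross-correlation."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def forward(image, modality):
    """Normalize with modality-specific stats, then apply the shared kernel."""
    p = norm_params[modality]
    normed = (image - p["mean"]) / p["std"]
    return conv2d_valid(normed, shared_kernel)

# The same kernel processes both modalities; only normalization differs.
ct_feat = forward(rng.standard_normal((8, 8)), "CT")
mri_feat = forward(rng.standard_normal((8, 8)), "MRI")
```

The design choice this illustrates: keeping the feature extractor identical across modalities forces it to learn modality-agnostic structure, while cheap per-modality statistics absorb intensity differences between CT and MRI.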
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.