Multi-organ Segmentation via Co-training Weight-averaged Models from
Few-organ Datasets
- URL: http://arxiv.org/abs/2008.07149v1
- Date: Mon, 17 Aug 2020 08:39:16 GMT
- Title: Multi-organ Segmentation via Co-training Weight-averaged Models from
Few-organ Datasets
- Authors: Rui Huang, Yuanjie Zheng, Zhiqiang Hu, Shaoting Zhang, Hongsheng Li
- Abstract summary: We propose to co-train weight-averaged models for learning a unified multi-organ segmentation network from few-organ datasets.
To alleviate the noisy teaching supervisions between the networks, the weighted-averaged models are adopted to produce more reliable soft labels.
- Score: 45.14004510709325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-organ segmentation has extensive applications in clinical
practice. To segment multiple organs of interest, it is generally quite
difficult to collect full annotations of all the organs on the same images, as
some medical centers might only annotate a portion of the organs due to their
own clinical practice. In most scenarios, one might obtain annotations of a
single or a few organs from one training set, and obtain annotations of the
other organs from another set of training images. Existing approaches mostly
train and deploy a separate model for each subset of organs, which is both
memory-intensive and time-inefficient. In this paper, we propose to co-train
weight-averaged models for learning a unified multi-organ segmentation network
from few-organ datasets. We collaboratively train two networks and let the
coupled networks teach each other on un-annotated organs. To alleviate the
noisy teaching supervision between the networks, the weight-averaged models
are adopted to produce more reliable soft labels. In addition, a novel region
mask is utilized to selectively apply the consistency constraint on the
un-annotated organ regions that require collaborative teaching, which further
boosts the performance. Extensive experiments on three publicly available
single-organ datasets (LiTS, KiTS, and Pancreas) and manually constructed
single-organ datasets from MOBA show that our method better utilizes the
few-organ datasets and achieves superior performance with lower inference
computational cost.
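The two core ingredients the abstract describes, a weight-averaged (EMA) teacher that produces soft labels and a region mask that restricts the consistency constraint to un-annotated organ regions, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names, the EMA decay, and the mean-squared-error form of the consistency loss are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Weight-averaged teacher update: theta_t <- alpha*theta_t + (1-alpha)*theta_s.

    teacher_w / student_w: dicts mapping parameter names to arrays.
    The slowly moving teacher yields more reliable soft labels than the
    raw student at any single training step.
    """
    return {k: alpha * teacher_w[k] + (1.0 - alpha) * student_w[k]
            for k in teacher_w}

def masked_consistency_loss(student_probs, teacher_probs, region_mask):
    """Consistency loss between student predictions and teacher soft labels,
    applied only where region_mask == 1 (the un-annotated organ regions
    that require collaborative teaching).

    student_probs / teacher_probs: (batch, pixels, classes) probabilities.
    region_mask: (batch, pixels) binary mask.
    """
    sq_err = (student_probs - teacher_probs) ** 2
    masked = sq_err * region_mask[..., None]      # broadcast over classes
    denom = region_mask.sum() * student_probs.shape[-1] + 1e-8
    return masked.sum() / denom                   # mean over masked entries
```

On annotated organ regions the usual supervised loss would apply instead, so the mask keeps the two supervision signals from conflicting.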
Related papers
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble [22.82094545786408]
Multi-organ segmentation is a fundamental task in medical image analysis.
Due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited.
We propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage.
arXiv Detail & Related papers (2023-04-14T13:39:39Z)
- Multi-site Organ Segmentation with Federated Partial Supervision and Site Adaptation [14.039141830423182]
The paper aims to tackle these challenges via a two-phase aggregation-then-adaptation approach.
The first phase of aggregation learns a single multi-organ segmentation model by leveraging the strength of 'bigger data'.
The second phase of site adaptation is to transfer the federated multi-organ segmentation model to site-specific organ segmentation models, one model per site, in order to further improve the performance of each site's organ segmentation task.
arXiv Detail & Related papers (2023-02-08T07:07:43Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- MS-KD: Multi-Organ Segmentation with Multiple Binary-Labeled Datasets [20.60290563940572]
This paper investigates how to learn a multi-organ segmentation model leveraging a set of binary-labeled datasets.
A novel Multi-teacher Single-student Knowledge Distillation (MS-KD) framework is proposed.
arXiv Detail & Related papers (2021-08-05T12:29:26Z)
- Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results in one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z)
- Incremental Learning for Multi-organ Segmentation with Partially Labeled Datasets [8.370590211748087]
We learn a multi-organ segmentation model through incremental learning (IL).
In each IL stage, we lose access to the previous annotations, whose knowledge is assumed to be captured by the current model.
We learn to update the organ segmentation model to include the new organs.
arXiv Detail & Related papers (2021-03-08T03:15:59Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.