Continual Segment: Towards a Single, Unified and Accessible Continual
Segmentation Model of 143 Whole-body Organs in CT Scans
- URL: http://arxiv.org/abs/2302.00162v4
- Date: Sun, 3 Sep 2023 20:25:27 GMT
- Title: Continual Segment: Towards a Single, Unified and Accessible Continual
Segmentation Model of 143 Whole-body Organs in CT Scans
- Authors: Zhanghexuan Ji, Dazhou Guo, Puyang Wang, Ke Yan, Le Lu, Minfeng Xu,
Jingren Zhou, Qifeng Wang, Jia Ge, Mingchen Gao, Xianghua Ye, Dakai Jin
- Abstract summary: We propose a new architectural CSS learning framework to learn a single deep segmentation model for segmenting a total of 143 whole-body organs.
Trained and validated on 3D CT scans of 2,500+ patients from four datasets, our single network segments a total of 143 whole-body organs with very high accuracy.
- Score: 31.388497540849297
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning empowers mainstream medical image segmentation methods.
Nevertheless, current deep segmentation approaches cannot efficiently and
effectively adapt and update a trained model when new segmentation classes
(with or without new training datasets) need to be added. In real clinical
environments, it is often preferable for segmentation models to be dynamically
extended to segment new organs/tumors without (re-)accessing previous training
datasets, owing to obstacles of patient privacy and data storage. This process
can be viewed as a continual semantic segmentation (CSS) problem, which remains
understudied for multi-organ segmentation. In this work, we propose a new
architectural CSS learning framework that learns a single deep segmentation
model for segmenting a total of 143 whole-body organs. Using an encoder/decoder
network structure, we demonstrate that a continually trained then frozen
encoder coupled with incrementally added decoders can extract and preserve
image features that are sufficiently representative for new classes to be
segmented subsequently and validly. To keep the complexity of the single
network model bounded, we progressively trim each decoder using neural
architecture search and teacher-student knowledge distillation. To accommodate
both healthy and pathological organs appearing in different datasets, we
propose a novel anomaly-aware and confidence learning module that merges
overlapping organ predictions originating from different decoders. Trained and
validated on 3D CT scans of 2,500+ patients from four datasets, our single
network segments a total of 143 whole-body organs with very high accuracy,
closely reaching the upper-bound performance obtained by training four
separate segmentation models (i.e., one model per dataset/task).
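To make the architectural idea above concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code): a shared encoder is trained on the first task and frozen afterwards, one small decoder is added per new task, and per-voxel predictions from all decoders are merged. The class names (Encoder3D, Decoder3D, ContinualSegmenter) and the max-confidence merging rule are illustrative assumptions; the paper's actual anomaly-aware and confidence learning module, decoder trimming via neural architecture search, and knowledge distillation are not reproduced here.

```python
# Minimal sketch of the frozen-encoder / incrementally-added-decoder idea.
# Simplifying assumption: overlapping predictions are merged by per-voxel
# maximum foreground confidence, standing in for the paper's anomaly-aware
# and confidence learning module.
import torch
import torch.nn as nn


class Encoder3D(nn.Module):
    """Tiny stand-in for the shared 3D CT encoder."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class Decoder3D(nn.Module):
    """One decoder per task; predicts that task's organ classes + background."""
    def __init__(self, feat, num_classes):
        super().__init__()
        self.head = nn.Conv3d(feat, num_classes + 1, 1)  # channel 0 = background

    def forward(self, f):
        return self.head(f)


class ContinualSegmenter(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = Encoder3D(feat=feat)
        self.decoders = nn.ModuleList()
        self.feat = feat

    def add_task(self, num_classes):
        """Freeze the encoder once the first task exists, then add a decoder."""
        if len(self.decoders) > 0:
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.decoders.append(Decoder3D(self.feat, num_classes))

    def forward(self, x):
        f = self.encoder(x)
        # Each decoder yields softmax probabilities over its own label space.
        return [torch.softmax(dec(f), dim=1) for dec in self.decoders]

    @torch.no_grad()
    def merge(self, x):
        """Label each voxel with the organ of highest foreground confidence
        across all decoders; voxels claimed by no decoder stay background."""
        probs = self.forward(x)
        best_conf = torch.zeros_like(probs[0][:, 0])
        label = torch.zeros_like(best_conf, dtype=torch.long)
        offset = 1  # global organ ids start at 1; 0 = background
        for p in probs:
            fg_conf, fg_idx = p[:, 1:].max(dim=1)            # best organ per decoder
            take = (fg_conf > p[:, 0]) & (fg_conf > best_conf)
            label[take] = fg_idx[take] + offset
            best_conf[take] = fg_conf[take]
            offset += p.shape[1] - 1
        return label


# Usage: first task with 10 organs, then a new task adding 5 more organs.
model = ContinualSegmenter()
model.add_task(num_classes=10)
model.add_task(num_classes=5)        # encoder is frozen from now on
ct = torch.randn(1, 1, 16, 64, 64)   # toy CT volume
print(model.merge(ct).shape)         # -> torch.Size([1, 16, 64, 64])
```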
Related papers
- Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation [10.746776960260297]
We propose a new continual whole-body organ segmentation model with lightweight low-rank adaptation (LoRA).
We first train and freeze a pyramid vision transformer (PVT) base segmentation model on the initial task, then continually add lightweight trainable LoRA parameters to the frozen model for each new learning task.
Our proposed model continually segments new organs without catastrophic forgetting while maintaining a low rate of parameter growth (a minimal LoRA sketch is given after this related-papers list).
arXiv Detail & Related papers (2024-10-07T02:00:13Z)
- Universal and Extensible Language-Vision Models for Organ Segmentation and Tumor Detection from Abdominal Computed Tomography [50.08496922659307]
We propose a universal framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes.
Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models.
Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors.
arXiv Detail & Related papers (2024-05-28T16:55:15Z)
- Teaching AI the Anatomy Behind the Scan: Addressing Anatomical Flaws in Medical Image Segmentation with Learnable Prior [34.54360931760496]
Key anatomical features, such as the number of organs, their shapes and relative positions, are crucial for building a robust multi-organ segmentation model.
We introduce a novel architecture called the Anatomy-Informed Network (AIC-Net)
AIC-Net incorporates a learnable input termed "Anatomical Prior", which can be adapted to patient-specific anatomy.
arXiv Detail & Related papers (2024-03-27T10:46:24Z)
- One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts [62.55349777609194]
We aim to build a model that can Segment Anything in radiology scans, driven by Text prompts, termed SAT.
We build the largest and most comprehensive segmentation dataset for training by collecting over 22K 3D medical image scans.
We have trained SAT-Nano (110M parameters) and SAT-Pro (447M parameters), demonstrating performance comparable to 72 specialist nnU-Nets trained on individual datasets/subsets.
arXiv Detail & Related papers (2023-12-28T18:16:00Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble [22.82094545786408]
Multi-organ segmentation is a fundamental task in medical image analysis.
Due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited.
We propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage.
arXiv Detail & Related papers (2023-04-14T13:39:39Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification [3.8725005247905386]
We present a multi-task learning approach for segmentation and classification of tissue regions.
We enable simultaneous prediction with a single network.
As a result of feature sharing, we also show that the learned representation can be used to improve downstream tasks.
arXiv Detail & Related papers (2022-02-28T20:22:39Z)
- Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z)
- DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets [102.55303521877933]
We propose a dynamic on-demand network (DoDNet) that learns to segment multiple organs and tumors on partially labelled datasets.
DoDNet consists of a shared encoder-decoder architecture, a task encoding module, a controller for generating dynamic convolution filters, and a single but dynamic segmentation head.
arXiv Detail & Related papers (2020-11-20T04:56:39Z)
- 3D Segmentation Networks for Excessive Numbers of Classes: Distinct Bone Segmentation in Upper Bodies [1.2023648183416153]
This paper discusses the intricacies of training a 3D segmentation network in a many-label setting.
We show necessary modifications in network architecture, loss function, and data augmentation.
As a result, we demonstrate the robustness of our method by automatically segmenting over one hundred distinct bones simultaneously, in an end-to-end learned fashion, from a CT scan.
arXiv Detail & Related papers (2020-10-14T12:54:15Z)
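As referenced in the first related paper above ("Low-Rank Continual Pyramid Vision Transformer"), here is a minimal, hypothetical PyTorch sketch of LoRA-style continual adaptation: the base weights stay frozen and each new task only adds a small pair of trainable low-rank matrices. The LoRALinear class, rank, and initialization are illustrative assumptions, not the cited paper's implementation.

```python
# Minimal sketch of per-task LoRA adaptation on a frozen base layer
# (assumed illustration, not the cited paper's code).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer plus one trainable low-rank update per task."""
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad = False
        self.base.bias.requires_grad = False
        self.rank = rank
        self.lora_A = nn.ParameterList()  # one (rank, in) matrix per task
        self.lora_B = nn.ParameterList()  # one (out, rank) matrix per task

    def add_task(self):
        in_f, out_f = self.base.in_features, self.base.out_features
        self.lora_A.append(nn.Parameter(torch.randn(self.rank, in_f) * 0.01))
        self.lora_B.append(nn.Parameter(torch.zeros(out_f, self.rank)))

    def forward(self, x, task_id):
        # Frozen base projection + the task-specific low-rank correction.
        delta_w = self.lora_B[task_id] @ self.lora_A[task_id]
        return self.base(x) + x @ delta_w.t()


layer = LoRALinear(64, 64, rank=4)
layer.add_task()                       # task 0
layer.add_task()                       # task 1: only ~2*rank*64 new parameters
tokens = torch.randn(2, 16, 64)        # toy transformer token batch
print(layer(tokens, task_id=1).shape)  # -> torch.Size([2, 16, 64])
```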