Continual Learning for Abdominal Multi-Organ and Tumor Segmentation
- URL: http://arxiv.org/abs/2306.00988v2
- Date: Fri, 21 Jul 2023 11:27:10 GMT
- Title: Continual Learning for Abdominal Multi-Organ and Tumor Segmentation
- Authors: Yixiao Zhang, Xinyi Li, Huimiao Chen, Alan Yuille, Yaoyao Liu, Zongwei
Zhou
- Abstract summary: We propose an innovative architecture designed specifically for continual organ and tumor segmentation.
Our proposed design involves replacing the conventional output layer with a suite of lightweight, class-specific heads.
These heads enable independent predictions for newly introduced and previously learned classes, effectively minimizing the impact of new classes on old ones.
- Score: 15.983529525062938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to dynamically extend a model to new data and classes is critical
for multiple organ and tumor segmentation. However, due to privacy regulations,
accessing previous data and annotations can be problematic in the medical
domain. This poses a significant barrier to preserving the high segmentation
accuracy of the old classes when learning from new classes because of the
catastrophic forgetting problem. In this paper, we first empirically
demonstrate that simply using high-quality pseudo labels can fairly mitigate
this problem in the setting of organ segmentation. Furthermore, we put forward
an innovative architecture designed specifically for continual organ and tumor
segmentation, which incurs minimal computational overhead. Our proposed design
involves replacing the conventional output layer with a suite of lightweight,
class-specific heads, thereby offering the flexibility to accommodate newly
emerging classes. These heads enable independent predictions for newly
introduced and previously learned classes, effectively minimizing the impact of
new classes on old ones during the course of continual learning. We further
propose incorporating Contrastive Language-Image Pretraining (CLIP) embeddings
into the organ-specific heads. These embeddings encapsulate the semantic
information of each class, informed by extensive image-text co-training. The
proposed method is evaluated on both in-house and public abdominal CT datasets
under organ and tumor segmentation tasks. Empirical results suggest that the
proposed design improves the segmentation performance of a baseline neural
network on newly-introduced and previously-learned classes along the learning
trajectory.
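To make the described architecture concrete, the sketch below shows one way the class-specific heads and CLIP embeddings could fit together. It is a minimal illustration under stated assumptions, not the authors' implementation: the placeholder backbone, the 1x1x1-convolution heads, the feature-modulation fusion of the CLIP text embedding, and all names and sizes are chosen only for exposition.

```python
import torch
import torch.nn as nn


class ClassSpecificHead(nn.Module):
    """Lightweight binary-segmentation head for one organ or tumor class."""

    def __init__(self, feat_dim: int, clip_dim: int = 512):
        super().__init__()
        # Project the class's CLIP text embedding into the feature space and
        # use it to modulate the shared features (one simple fusion choice).
        self.clip_proj = nn.Linear(clip_dim, feat_dim)
        # A 1x1x1 convolution keeps each head cheap to add and store.
        self.classifier = nn.Conv3d(feat_dim, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, clip_emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W); clip_emb: (clip_dim,)
        scale = torch.sigmoid(self.clip_proj(clip_emb)).view(1, -1, 1, 1, 1)
        return self.classifier(feats * scale)  # per-class logits, (B, 1, D, H, W)


class ContinualSegmenter(nn.Module):
    """Shared backbone plus independently predicting class-specific heads."""

    def __init__(self, backbone: nn.Module, feat_dim: int, clip_dim: int = 512):
        super().__init__()
        self.backbone = backbone
        self.feat_dim, self.clip_dim = feat_dim, clip_dim
        self.heads = nn.ModuleDict()          # class name -> head
        self.clip_embs = nn.ParameterDict()   # class name -> frozen text embedding

    def add_class(self, name: str, clip_emb: torch.Tensor) -> None:
        # Extending to a new class only attaches a new head; old heads and the
        # backbone are left untouched, limiting interference with old classes.
        self.heads[name] = ClassSpecificHead(self.feat_dim, self.clip_dim)
        self.clip_embs[name] = nn.Parameter(clip_emb, requires_grad=False)

    def forward(self, volume: torch.Tensor) -> dict:
        feats = self.backbone(volume)
        # Each head predicts its class independently (sigmoid per head), so new
        # and old classes do not compete in a single softmax output layer.
        return {name: head(feats, self.clip_embs[name])
                for name, head in self.heads.items()}


# Toy usage with a stand-in backbone and random "CLIP" embeddings; in practice
# the embeddings would come from CLIP's text encoder applied to a prompt
# describing each class.
backbone = nn.Conv3d(1, 32, kernel_size=3, padding=1)    # placeholder for a 3D segmentation backbone
model = ContinualSegmenter(backbone, feat_dim=32)
model.add_class("liver", torch.randn(512))
model.add_class("liver_tumor", torch.randn(512))          # added in a later continual step
logits = model(torch.randn(1, 1, 16, 64, 64))             # dict of per-class logit volumes
```

Keeping one sigmoid head per class also allows overlapping structures, such as a tumor inside an organ, to be predicted without forcing mutually exclusive labels.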
Related papers
- Leveraging Textual Anatomical Knowledge for Class-Imbalanced Semi-Supervised Multi-Organ Segmentation [29.70206595766246]
Annotating 3D medical images demands substantial time and expertise.
The complex anatomical structures of organs often lead to significant class imbalances.
We propose a novel approach that integrates textual anatomical knowledge (TAK) into the segmentation model.
arXiv Detail & Related papers (2025-01-23T08:40:54Z) - Universal and Extensible Language-Vision Models for Organ Segmentation and Tumor Detection from Abdominal Computed Tomography [50.08496922659307]
We propose a universal framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes.
Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models.
Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors.
arXiv Detail & Related papers (2024-05-28T16:55:15Z) - Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose a tendency-driven mutual-exclusivity relationship tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z) - Continual Segment: Towards a Single, Unified and Accessible Continual
Segmentation Model of 143 Whole-body Organs in CT Scans [31.388497540849297]
We propose a new architectural continual semantic segmentation (CSS) learning framework to learn a single deep segmentation model for segmenting a total of 143 whole-body organs.
Trained and validated on 3D CT scans of 2500+ patients from four datasets, our single network can segment a total of 143 whole-body organs with very high accuracy.
arXiv Detail & Related papers (2023-02-01T00:49:21Z) - Continual Learning with Bayesian Model based on a Fixed Pre-trained
Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z) - Learning Incrementally to Segment Multiple Organs in a CT Image [11.082692639365982]
We propose to incrementally learn a multi-organ segmentation model.
In each incremental learning stage, we lose access to previous data and annotations.
We experimentally discover that this weakness (catastrophic forgetting) mostly disappears for CT multi-organ segmentation.
arXiv Detail & Related papers (2022-03-04T02:32:04Z) - Modeling the Background for Incremental and Weakly-Supervised Semantic
Segmentation [39.025848280224785]
We introduce a novel incremental class learning approach for semantic segmentation.
Since each training step provides annotation only for a subset of all possible classes, pixels of the background class exhibit a semantic shift.
We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC, ADE20K, and Cityscapes datasets.
arXiv Detail & Related papers (2022-01-31T16:33:21Z) - Generalized Organ Segmentation by Imitating One-shot Reasoning using
Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z) - Incremental Learning for Multi-organ Segmentation with Partially Labeled
Datasets [8.370590211748087]
We learn a multi-organ segmentation model through incremental learning (IL).
In each IL stage, we lose access to the previous annotations, whose knowledge is presumably captured by the current model.
We learn to update the organ segmentation model to include the new organs.
arXiv Detail & Related papers (2021-03-08T03:15:59Z) - Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - A Teacher-Student Framework for Semi-supervised Medical Image
Segmentation From Mixed Supervision [62.4773770041279]
We develop a semi-supervised learning framework for organ and lesion segmentation based on a teacher-student paradigm.
We show that our model is robust to bounding-box quality and achieves performance comparable to fully supervised learning methods.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)