Learning Incrementally to Segment Multiple Organs in a CT Image
- URL: http://arxiv.org/abs/2203.02100v1
- Date: Fri, 4 Mar 2022 02:32:04 GMT
- Title: Learning Incrementally to Segment Multiple Organs in a CT Image
- Authors: Pengbo Liu, Xia Wang, Mengsi Fan, Hongli Pan, Minmin Yin, Xiaohong
Zhu, Dandan Du, Xiaoying Zhao, Li Xiao, Lian Ding, Xingwang Wu, and S. Kevin
Zhou
- Abstract summary: We propose to incrementally learn a multi-organ segmentation model.
In each incremental learning stage, we lose access to previous data and annotations.
We experimentally discover that catastrophic forgetting, a notorious weakness of incremental learning, mostly disappears for CT multi-organ segmentation.
- Score: 11.082692639365982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large number of datasets exist for organ segmentation, which are
partially annotated and sequentially constructed. A typical dataset is
constructed at a certain time by curating medical images and annotating the
organs of interest. In other words, new datasets with annotations of new organ
categories are built over time. To unleash the potential behind these partially
labeled, sequentially-constructed datasets, we propose to incrementally learn a
multi-organ segmentation model. In each incremental learning (IL) stage, we
lose access to previous data and annotations, whose knowledge is assumed to be
captured by the current model, and gain access to a new dataset with
annotations of new organ categories, from which we learn to update the organ
segmentation model to include the new organs. While IL is notorious for its
`catastrophic forgetting' weakness in the context of natural image analysis, we
experimentally discover that such a weakness mostly disappears for CT
multi-organ segmentation. To further stabilize the model performance across the
IL stages, we introduce a light memory module and some loss functions to
restrain the representation of different categories in feature space,
aggregating feature representation of the same class and separating feature
representation of different classes. Extensive experiments on five open-sourced
datasets are conducted to illustrate the effectiveness of our method.
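The abstract's feature-space constraints (a light memory module plus losses that pull same-class features together and push different-class features apart) can be illustrated with a small sketch. This is not the paper's exact formulation; the prototype memory, the squared-distance aggregation term, the hinge-based separation term, and the momentum update are all illustrative assumptions.

```python
import numpy as np

def prototype_losses(features, labels, prototypes, margin=1.0):
    """Illustrative sketch, not the paper's exact losses.
    Aggregation pulls each feature toward its class prototype;
    separation pushes distinct prototypes at least `margin` apart."""
    # Intra-class aggregation: squared distance to the feature's own prototype.
    agg = float(np.mean([np.sum((f - prototypes[c]) ** 2)
                         for f, c in zip(features, labels)]))
    # Inter-class separation: hinge penalty on pairwise prototype distances.
    classes = sorted(prototypes)
    sep, n_pairs = 0.0, 0
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            d = np.linalg.norm(prototypes[a] - prototypes[b])
            sep += max(0.0, margin - d) ** 2
            n_pairs += 1
    return agg, sep / max(n_pairs, 1)

def update_prototypes(prototypes, features, labels, momentum=0.9):
    """Hypothetical 'light memory module': one running-mean prototype per class."""
    for f, c in zip(features, labels):
        if c not in prototypes:
            prototypes[c] = f.copy()
        else:
            prototypes[c] = momentum * prototypes[c] + (1 - momentum) * f
    return prototypes
```

In an IL stage, prototypes of previously learned organs can be kept in memory while only the new dataset is visible, so both loss terms remain computable without the old annotations.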
Related papers
- Data Augmentation for Surgical Scene Segmentation with Anatomy-Aware Diffusion Models [1.9085155846692308]
We introduce a multi-stage approach to generate multi-class surgical datasets with annotations.
Our framework improves anatomy awareness by training organ-specific models with an inpainting objective guided by binary segmentation masks.
This versatile approach allows the generation of multi-class datasets from real binary datasets and simulated surgical masks.
arXiv Detail & Related papers (2024-10-10T09:29:23Z)
- Universal and Extensible Language-Vision Models for Organ Segmentation and Tumor Detection from Abdominal Computed Tomography [50.08496922659307]
We propose a universal framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes.
Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models.
Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors.
arXiv Detail & Related papers (2024-05-28T16:55:15Z)
- UniCell: Universal Cell Nucleus Classification via Prompt Learning [76.11864242047074]
We propose a universal cell nucleus classification framework (UniCell).
It employs a novel prompt learning mechanism to uniformly predict the corresponding categories of pathological images from different dataset domains.
In particular, our framework adopts an end-to-end architecture for nuclei detection and classification, and utilizes flexible prediction heads for adapting various datasets.
arXiv Detail & Related papers (2024-02-20T11:50:27Z)
- Continual Learning for Abdominal Multi-Organ and Tumor Segmentation [15.983529525062938]
We propose an innovative architecture designed specifically for continuous organ and tumor segmentation.
Our proposed design involves replacing the conventional output layer with a suite of lightweight, class-specific heads.
These heads enable independent predictions for newly introduced and previously learned classes, effectively minimizing the impact of new classes on old ones.
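The class-specific-heads idea described above can be sketched in a few lines: a shared feature extractor feeds one tiny, independent head per organ class, so adding a class never touches the parameters of earlier heads. The class names, head shape (a single weight vector plus bias), and per-class sigmoid output are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class ClassSpecificHeads:
    """Illustrative sketch: one lightweight linear head per class
    on top of shared features; old heads are untouched by new classes."""

    def __init__(self, feat_dim, seed=0):
        self.feat_dim = feat_dim
        self.heads = {}  # class name -> (weights, bias)
        self.rng = np.random.default_rng(seed)

    def add_class(self, name):
        # A new class only adds a small head; existing heads are frozen.
        w = self.rng.standard_normal(self.feat_dim) * 0.01
        self.heads[name] = (w, 0.0)

    def predict(self, feats):
        # feats: (n_voxels, feat_dim) shared features.
        # Each head produces an independent per-voxel probability.
        out = {}
        for name, (w, b) in self.heads.items():
            logits = feats @ w + b
            out[name] = 1.0 / (1.0 + np.exp(-logits))
        return out
```

Because each head is scored independently (sigmoid rather than a shared softmax), predictions for newly introduced classes cannot redistribute probability mass away from previously learned ones.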
arXiv Detail & Related papers (2023-06-01T17:59:57Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
During training, action labels are available only on the source dataset and unavailable on the target dataset.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
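The segment-and-permute pretext task mentioned above can be sketched as follows: split a sequence into temporal segments, shuffle them according to a randomly chosen permutation, and use the permutation's index as a self-supervised classification label. The function name, segment count, and use of the permutation index as the label are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from itertools import permutations

def make_permutation_task(sequence, n_segments=3, rng=None):
    """Illustrative pretext-task sketch: shuffle temporal segments of a
    (frames, features) sequence; the permutation index is the label."""
    rng = rng or np.random.default_rng(0)
    segs = np.array_split(sequence, n_segments)       # temporal segments
    perms = list(permutations(range(n_segments)))     # all n_segments! orders
    label = int(rng.integers(len(perms)))             # self-supervised target
    shuffled = np.concatenate([segs[i] for i in perms[label]])
    return shuffled, label
```

An analogous task permutes human body parts along the feature axis instead of time; classifying the applied permutation forces the model to learn domain-invariant structure without action labels.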
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z)
- Incremental Learning for Multi-organ Segmentation with Partially Labeled Datasets [8.370590211748087]
We learn a multi-organ segmentation model through incremental learning (IL).
In each IL stage, we lose access to the previous annotations, whose knowledge is assumed to be captured by the current model.
We learn to update the organ segmentation model to include the new organs.
arXiv Detail & Related papers (2021-03-08T03:15:59Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Multi-organ Segmentation via Co-training Weight-averaged Models from Few-organ Datasets [45.14004510709325]
We propose to co-train weight-averaged models for learning a unified multi-organ segmentation network from few-organ datasets.
To alleviate the noisy teaching supervisions between the networks, the weighted-averaged models are adopted to produce more reliable soft labels.
arXiv Detail & Related papers (2020-08-17T08:39:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.