A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation
- URL: http://arxiv.org/abs/2405.16328v1
- Date: Sat, 25 May 2024 19:05:07 GMT
- Title: A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation
- Authors: Xiaoyang Chen, Hao Zheng, Yifang Xie, Yuncong Ma, Tengfei Li,
- Abstract summary: We introduce a novel segmentation paradigm enabling the segmentation of a variable number of classes within a single classifier-free network.
This network is trained using contrastive learning and produces discriminative feature representations that facilitate straightforward interpretation.
We demonstrate the flexibility of our method in handling varying class numbers within a unified network and its capacity for incremental learning.
- Score: 6.591403935303867
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current methods for developing foundation models in medical image segmentation rely on two primary assumptions: a fixed set of classes and the immediate availability of a substantial and diverse training dataset. However, this can be impractical due to the evolving nature of imaging technology and patient demographics, as well as labor-intensive data curation, limiting their practical applicability and scalability. To address these challenges, we introduce a novel segmentation paradigm enabling the segmentation of a variable number of classes within a single classifier-free network, featuring an architecture independent of class number. This network is trained using contrastive learning and produces discriminative feature representations that facilitate straightforward interpretation. Additionally, we integrate this strategy into a knowledge distillation-based incremental learning framework, facilitating the gradual assimilation of new information from non-stationary data streams while avoiding catastrophic forgetting. Our approach provides a unified solution for tackling both class- and domain-incremental learning scenarios. We demonstrate the flexibility of our method in handling varying class numbers within a unified network and its capacity for incremental learning. Experimental results on an incompletely annotated, multi-modal, multi-source dataset for medical image segmentation underscore its superiority over state-of-the-art alternative approaches.
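As a rough illustration of the paradigm described in the abstract (per-pixel embeddings matched against class prototypes under a contrastive objective, combined with a knowledge-distillation term against the previous-step model), a minimal PyTorch sketch is given below. The function names, the prototype-based readout, and the MSE form of the distillation term are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a classifier-free segmentation head emits per-pixel
# embeddings, classes are represented by prototype vectors, and training uses
# a pixel-prototype contrastive loss plus a feature-level distillation term
# against the frozen previous-step model (anti-forgetting).
import torch
import torch.nn.functional as F

def contrastive_seg_loss(embeddings, labels, prototypes, temperature=0.1):
    """Pixel-prototype contrastive loss.

    embeddings: (B, D, H, W) pixel features
    labels:     (B, H, W) integer class maps for the classes seen so far
    prototypes: (C, D) one embedding per known class
    """
    D = embeddings.shape[1]
    feats = F.normalize(embeddings, dim=1).permute(0, 2, 3, 1).reshape(-1, D)  # (BHW, D)
    protos = F.normalize(prototypes, dim=1)                                     # (C, D)
    logits = feats @ protos.t() / temperature                                   # (BHW, C)
    return F.cross_entropy(logits, labels.reshape(-1))

def distillation_loss(new_embeddings, old_embeddings):
    """Keep new features close to the frozen old model's features."""
    return F.mse_loss(F.normalize(new_embeddings, dim=1),
                      F.normalize(old_embeddings, dim=1).detach())

def incremental_step(model, old_model, prototypes, images, labels, kd_weight=1.0):
    """One training step on the current (non-stationary) data stream."""
    emb = model(images)                      # (B, D, H, W)
    with torch.no_grad():
        old_emb = old_model(images)          # frozen previous-step network
    loss = contrastive_seg_loss(emb, labels, prototypes)
    return loss + kd_weight * distillation_loss(emb, old_emb)
```

At inference, each pixel would be assigned to its nearest prototype, so adding a class only appends a prototype row rather than changing the network architecture, which is what makes the design independent of the number of classes.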
Related papers
- Exploiting Minority Pseudo-Labels for Semi-Supervised Semantic Segmentation in Autonomous Driving [2.638145329894673]
We propose a professional training module to enhance minority class learning and a general training module to learn more comprehensive semantic information.
In experiments, our framework demonstrates superior performance compared to state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2024-09-19T11:47:25Z) - Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z) - Explainable Semantic Medical Image Segmentation with Style [7.074258860680265]
We propose a fully supervised generative framework that can achieve generalisable segmentation with only limited labelled data.
The proposed approach pairs medical image style generation with a segmentation-task-driven discriminator and incorporates end-to-end adversarial training.
Experiments on a fully semantic, publicly available pelvis dataset demonstrated that our method is more generalisable to distribution shifts than other state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T04:34:51Z) - Voxel-wise Adversarial Semi-supervised Learning for Medical Image Segmentation [4.489713477369384]
We introduce a novel adversarial learning-based semi-supervised segmentation method for medical image segmentation.
Our method embeds both local and global features from multiple hidden layers and learns context relations between multiple classes.
Our method outperforms state-of-the-art semi-supervised learning approaches on left atrium (single-class) and multi-organ (multi-class) image segmentation.
arXiv Detail & Related papers (2022-05-14T06:57:19Z) - LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z) - SATS: Self-Attention Transfer for Continual Semantic Segmentation [50.51525791240729]
Continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning.
This study proposes to transfer a new type of knowledge-relevant information, i.e., the relationships between elements within each image.
The relationship information can be effectively obtained from the self-attention maps of a Transformer-style segmentation model (see the sketch after this list).
arXiv Detail & Related papers (2022-03-15T06:09:28Z) - Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion Segmentation (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z) - Continual Active Learning Using Pseudo-Domains for Limited Labelling Resources and Changing Acquisition Characteristics [2.6105699925188257]
Machine learning in medical imaging during clinical routine is impaired by changes in scanner protocols, hardware, or policies.
We propose a method for continual active learning operating on a stream of medical images in a multi-scanner setting.
arXiv Detail & Related papers (2021-11-25T13:11:49Z) - Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z) - Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z) - Semi-supervised few-shot learning for medical image segmentation [21.349705243254423]
Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm.
We propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode.
We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations.
arXiv Detail & Related papers (2020-03-18T20:37:18Z)
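The SATS entry above describes transferring within-image relationships captured by self-attention maps from the previous model to the current one. A minimal sketch of how such a relationship-distillation term might look in PyTorch follows; the choice of KL divergence, the layer selection, and the tensor shapes are assumptions for illustration, not the SATS implementation.

```python
# Hypothetical sketch of the attention-transfer idea from the SATS entry:
# distil the self-attention maps of the frozen previous-step model into the
# current model so that within-image relationships are preserved across steps.
import torch
import torch.nn.functional as F

def self_attention_transfer_loss(new_attn, old_attn, eps=1e-8):
    """KL divergence between the attention distributions of the two models.

    new_attn, old_attn: (B, heads, N, N) row-stochastic self-attention maps
    taken from corresponding layers of a Transformer-style segmentation model.
    """
    new_log = torch.log(new_attn.clamp_min(eps))
    return F.kl_div(new_log, old_attn.detach(), reduction="batchmean")
```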