Continual Active Learning Using Pseudo-Domains for Limited Labelling Resources and Changing Acquisition Characteristics
- URL: http://arxiv.org/abs/2111.13069v1
- Date: Thu, 25 Nov 2021 13:11:49 GMT
- Title: Continual Active Learning Using Pseudo-Domains for Limited Labelling Resources and Changing Acquisition Characteristics
- Authors: Matthias Perkonigg, Johannes Hofmanninger, Christian Herold, Helmut Prosch, Georg Langs
- Abstract summary: Machine learning in medical imaging during clinical routine is impaired by changes in scanner protocols, hardware, or policies.
We propose a method for continual active learning operating on a stream of medical images in a multi-scanner setting.
- Score: 2.6105699925188257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning in medical imaging during clinical routine is impaired by
changes in scanner protocols, hardware, or policies, resulting in a
heterogeneous set of acquisition settings. When a deep learning model is trained
on an initial static training set, model performance and reliability suffer
from changes in acquisition characteristics, as data and targets may become
inconsistent. Continual learning can help to adapt models to the changing
environment by training on a continuous data stream. However, continual manual
expert labelling of medical imaging requires substantial effort. Thus, using
labelling resources efficiently on a well-chosen subset of new examples is
necessary to render this strategy feasible.
Here, we propose a method for continual active learning operating on a stream
of medical images in a multi-scanner setting. The approach automatically
recognizes shifts in image acquisition characteristics (new domains), selects
optimal examples for labelling, and adapts training accordingly. Labelling is
subject to a limited budget, resembling typical real-world scenarios. To
demonstrate generalizability, we evaluate the effectiveness of our method on
three tasks: cardiac segmentation, lung nodule detection, and brain age
estimation. Results show that the proposed approach outperforms other active
learning methods while effectively counteracting catastrophic forgetting.
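To make the mechanics of the abstract concrete, the following is a minimal Python sketch of such a continual active learning loop. It is an illustration under assumptions, not the authors' implementation: the style-embedding `encoder`, the `oracle` labelling function, the distance threshold, and the running-mean prototype update are all hypothetical stand-ins for the paper's pseudo-domain module.

```python
import numpy as np

class PseudoDomainDetector:
    """Tracks one running prototype embedding per pseudo-domain; a sample
    far from every known prototype opens a new pseudo-domain."""

    def __init__(self, threshold=2.0):
        self.prototypes = []          # one mean embedding per pseudo-domain
        self.threshold = threshold    # illustrative distance cutoff

    def assign(self, z):
        if self.prototypes:
            dists = [np.linalg.norm(z - p) for p in self.prototypes]
            k = int(np.argmin(dists))
            if dists[k] < self.threshold:
                # refine the matched prototype with a running mean
                self.prototypes[k] = 0.9 * self.prototypes[k] + 0.1 * z
                return k, False
        self.prototypes.append(z.copy())
        return len(self.prototypes) - 1, True   # new pseudo-domain found


def continual_active_learning(stream, encoder, oracle, budget=50):
    """Spend the limited labelling budget on samples that open new
    pseudo-domains, i.e. on genuine acquisition shifts."""
    detector = PseudoDomainDetector()
    labelled = []
    for x in stream:
        domain, is_new = detector.assign(encoder(x))
        if is_new and budget > 0:
            labelled.append((x, oracle(x), domain))  # expert annotates x
            budget -= 1
            # fine-tune the task model on `labelled` here, e.g. with
            # rehearsal over earlier domains to counteract forgetting
    return labelled
```

The design point this sketch captures is that annotation effort is spent only when a sample opens a previously unseen pseudo-domain, so the fixed budget concentrates on actual changes in acquisition characteristics.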
Related papers
- A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation [6.591403935303867]
We introduce a novel segmentation paradigm that enables a variable number of classes to be segmented within a single classifier-free network.
This network is trained using contrastive learning and produces discriminative feature representations that facilitate straightforward interpretation (a contrastive-loss sketch follows this entry).
We demonstrate the flexibility of our method in handling varying class numbers within a unified network and its capacity for incremental learning.
arXiv Detail & Related papers (2024-05-25T19:05:07Z)
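As a rough illustration of the contrastive training such a classifier-free network relies on, here is a generic InfoNCE-style loss in PyTorch. This is a sketch, not the paper's code; the temperature and the two-view setup are standard assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic contrastive (InfoNCE) loss: matching rows of z1 and z2
    are positive pairs, all other pairs in the batch are negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)

# usage sketch: two augmented views of the same batch through one encoder
# loss = info_nce_loss(encoder(view_a), encoder(view_b))
```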
- Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification [8.975676404678374]
We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.
The proposed method starts with a pre-training phase in which features learned in a self-supervised setting are disentangled to improve the robustness of the representations for downstream tasks.
We then introduce a meta-fine-tuning step that leverages classes related between the meta-training and meta-testing phases, but at varying levels of granularity.
arXiv Detail & Related papers (2024-03-26T09:36:20Z)
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has emerged as a promising approach to alleviate the burden of labelled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z)
- Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation [0.16490701092527607]
We propose an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans.
Our pretraining approach first randomly applies different distortions to random areas of unlabeled images, and then predicts the type of distortion and the loss of information (a pretext-task sketch follows this entry).
The effectiveness of the proposed method for segmentation tasks in different pre-training and fine-tuning scenarios is evaluated.
arXiv Detail & Related papers (2022-07-17T13:28:52Z)
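A minimal sketch of this kind of distortion-prediction pretext task, assuming float-valued 2-D numpy images and a fixed 16-pixel patch size; the specific corruption set and patch logic are illustrative choices, not the paper's.

```python
import numpy as np

DISTORTIONS = ["none", "blur", "noise", "zero"]   # illustrative corruption set

def corrupt_patch(img, rng):
    """Apply a random distortion to a random patch of a float image;
    return the corrupted image and the distortion type as pretext label."""
    out = img.copy()
    label = int(rng.integers(len(DISTORTIONS)))
    h, w = img.shape
    y, x = int(rng.integers(h - 16)), int(rng.integers(w - 16))
    patch = out[y:y + 16, x:x + 16]                # view into `out`
    if DISTORTIONS[label] == "blur":
        patch[:] = (patch + np.roll(patch, 1, 0) + np.roll(patch, 1, 1)) / 3
    elif DISTORTIONS[label] == "noise":
        patch[:] = patch + rng.normal(0, 0.1, patch.shape)
    elif DISTORTIONS[label] == "zero":
        patch[:] = 0.0                             # simulate information loss
    return out, label

# usage sketch: the encoder learns anatomy-aware features by predicting
# `label` from `corrupted`, without any expert annotation.
# corrupted, label = corrupt_patch(scan, np.random.default_rng(0))
```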
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task- and class-incremental learning of diseases addresses the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple yet efficient post-processing method that infuses old knowledge into the new model (a weight-averaging sketch follows this entry).
We show that applying WEAVER sequentially yields word embedding distributions similar to those obtained by training on all data at once.
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
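A toy sketch of the weight-averaging idea in PyTorch, assuming two models with identical architectures; the fixed 0.5 blend and the plain state-dict walk are generic assumptions, not WEAVER's exact scheme.

```python
import torch

def average_weights(old_model, new_model, alpha=0.5):
    """Blend parameters of the previously trained and the newly trained
    model in place, so knowledge from earlier rounds is retained."""
    old_state = old_model.state_dict()
    new_state = new_model.state_dict()
    merged = {k: alpha * old_state[k] + (1 - alpha) * new_state[k]
              for k in new_state}
    new_model.load_state_dict(merged)
    return new_model

# usage sketch: after fine-tuning on the latest corpus chunk,
# model = average_weights(previous_model, model)
```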
- Continual Active Learning for Efficient Adaptation of Machine Learning Models to Changing Image Acquisition [3.205205037629335]
We propose a method for continual active learning on a data stream of medical images.
It recognizes shifts or additions of new imaging sources (domains) and adapts training accordingly.
Results demonstrate that the proposed method outperforms naive active learning while requiring less manual labelling.
arXiv Detail & Related papers (2021-06-07T05:39:06Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages that train accurate diagnosis models by learning knowledge from multiple source tasks and data from different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Dynamic memory to alleviate catastrophic forgetting in continuous learning settings [2.7259816320747627]
Technical progress or changes in diagnostic procedures lead to a continuous change in image appearance.
Such domain and task shifts limit the applicability of machine learning algorithms in the clinical routine.
We adapt a model to unseen variations in the source domain while counteracting catastrophic forgetting (a rehearsal-memory sketch follows this entry).
arXiv Detail & Related papers (2020-07-06T11:02:38Z)
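A minimal sketch of such a dynamic rehearsal memory, assuming embedding vectors as numpy arrays; the diversity rule used here (evict the stored item closest to the newcomer) is an illustrative stand-in for the paper's actual update criterion.

```python
import numpy as np

class DynamicMemory:
    """Fixed-size rehearsal buffer that favours diverse examples: when
    full, the stored item most similar to the newcomer is replaced, so
    the buffer keeps covering distinct appearance variations."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.items = []                 # (embedding, sample) pairs

    def add(self, embedding, sample):
        if len(self.items) < self.capacity:
            self.items.append((embedding, sample))
            return
        dists = [np.linalg.norm(embedding - e) for e, _ in self.items]
        self.items[int(np.argmin(dists))] = (embedding, sample)

    def rehearsal_batch(self, rng, size=8):
        # mix stored samples into each training step to counteract forgetting
        idx = rng.choice(len(self.items), size=min(size, len(self.items)),
                         replace=False)
        return [self.items[i][1] for i in idx]
```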
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution to select informative samples effectively (a selection sketch follows this entry).
Comparative experiments on two medical image analysis tasks show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
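To illustrate the flavour of uncertainty-plus-coverage selection such coreset methods use, here is a generic greedy sketch; the score that multiplies distance-to-selected-set by model uncertainty is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def confident_coreset_select(features, uncertainty, k):
    """Greedily pick k samples that are both far from already selected
    ones (coverage) and uncertain under the current model."""
    selected = [int(np.argmax(uncertainty))]          # start at most uncertain
    min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < k:
        score = min_dist * uncertainty                # combine both criteria
        score[selected] = -np.inf                     # never re-pick
        nxt = int(np.argmax(score))
        selected.append(nxt)
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(features - features[nxt], axis=1))
    return selected

# usage sketch: idx = confident_coreset_select(embeddings, entropies, k=20)
```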
This list is automatically generated from the titles and abstracts of the papers on this site.