Continual Active Learning for Efficient Adaptation of Machine Learning
Models to Changing Image Acquisition
- URL: http://arxiv.org/abs/2106.03351v1
- Date: Mon, 7 Jun 2021 05:39:06 GMT
- Title: Continual Active Learning for Efficient Adaptation of Machine Learning
Models to Changing Image Acquisition
- Authors: Matthias Perkonigg, Johannes Hofmanninger, Georg Langs
- Abstract summary: We propose a method for continual active learning on a data stream of medical images.
It recognizes shifts or additions of new imaging sources (domains) and adapts training accordingly.
Results demonstrate that the proposed method outperforms naive active learning while requiring less manual labelling.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imaging in clinical routine is subject to changing scanner protocols,
hardware, or policies in a typically heterogeneous set of acquisition hardware.
Accuracy and reliability of deep learning models suffer from those changes as
data and targets become inconsistent with their initial static training set.
Continual learning can adapt to a continuous data stream of a changing imaging
environment. Here, we propose a method for continual active learning on a data
stream of medical images. It recognizes shifts or additions of new imaging
sources (domains), adapts training accordingly, and selects optimal examples
for labelling. Model training has to cope with a limited labelling budget,
resembling typical real-world scenarios. We demonstrate our method on
T1-weighted magnetic resonance images from three different scanners with the
task of brain age estimation. Results demonstrate that the proposed method
outperforms naive active learning while requiring less manual labelling.
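The stream-based labelling loop described in the abstract can be sketched as follows. This is a minimal illustration only: the distance-based domain detector, the threshold, and all function and variable names are assumptions for exposition, not the authors' actual method.

```python
import numpy as np

def continual_active_learning(stream, budget, threshold=2.0):
    """Sketch: detect new acquisition domains on a feature stream and
    spend a limited labelling budget on samples from unseen domains.
    The nearest-prototype detector is an illustrative assumption."""
    prototypes = []  # one representative feature vector per detected domain
    labelled = []    # stream indices selected for manual labelling
    for i, features in enumerate(stream):
        # distance of the sample to the closest known domain prototype
        dists = [np.linalg.norm(features - p) for p in prototypes]
        if not prototypes or min(dists) > threshold:
            # unseen imaging source: register it as a new domain and,
            # while budget remains, select this example for labelling
            prototypes.append(features.copy())
            if len(labelled) < budget:
                labelled.append(i)
        # samples close to a known domain are skipped (no labelling cost)
    return prototypes, labelled
```

On a toy stream with two well-separated feature clusters, the loop registers two domains and labels only the first sample of each, which is the behaviour the abstract describes: label where the data distribution shifts, not uniformly.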
Related papers
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
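The masking-plus-perturbation corruption this entry describes can be sketched in 2D as follows. Patch size, noise form, and the function name are assumptions for illustration, not the paper's actual recipe (which operates on 3D radiology volumes).

```python
import numpy as np

def disrupt(image, rng, patch=4, noise_scale=0.1):
    """Sketch of an input disruption for reconstruction pre-training:
    mask one random local patch and add low-level Gaussian noise, so a
    model can be trained to recover the clean image."""
    out = image.astype(float).copy()
    h, w = out.shape
    # local masking: zero out one randomly placed square patch
    y = rng.integers(0, h - patch + 1)
    x = rng.integers(0, w - patch + 1)
    out[y:y + patch, x:x + patch] = 0.0
    # low-level perturbation: small additive Gaussian noise everywhere
    out += rng.normal(0.0, noise_scale, size=out.shape)
    return out
```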
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an effective deep learning model requires large datasets with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Continual Active Learning Using Pseudo-Domains for Limited Labelling Resources and Changing Acquisition Characteristics [2.6105699925188257]
Machine learning in medical imaging during clinical routine is impaired by changes in scanner protocols, hardware, or policies.
We propose a method for continual active learning operating on a stream of medical images in a multi-scanner setting.
arXiv Detail & Related papers (2021-11-25T13:11:49Z)
- Domain Generalization on Medical Imaging Classification using Episodic Training with Task Augmentation [62.49837463676111]
We propose a novel scheme of episodic training with task augmentation on medical imaging classification.
Motivated by the limited number of source domains in real-world medical deployment, we address the unique problem of task-level overfitting.
arXiv Detail & Related papers (2021-06-13T03:56:59Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces a variance error term into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
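The variance-aware objective this entry mentions can be sketched minimally: penalise the spread of per-sample errors in addition to their mean. The weighting `lam` and this exact form are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def variance_aware_loss(per_sample_losses, lam=0.1):
    """Sketch of a variance-aware objective: mean per-sample loss plus
    a weighted penalty on the variance of those losses."""
    losses = np.asarray(per_sample_losses, dtype=float)
    return float(losses.mean() + lam * losses.var())
```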
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
- Dynamic memory to alleviate catastrophic forgetting in continuous learning settings [2.7259816320747627]
Technical progress or changes in diagnostic procedures lead to a continuous change in image appearance.
Such domain and task shifts limit the applicability of machine learning algorithms in the clinical routine.
We adapt a model to unseen variations in the source domain while counteracting catastrophic forgetting effects.
arXiv Detail & Related papers (2020-07-06T11:02:38Z)
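A rehearsal memory of the kind the last entry describes can be sketched as a bounded buffer of past examples replayed into new training batches. Reservoir-style insertion and all names here are assumptions for illustration, not necessarily the paper's policy.

```python
import random

class DynamicMemory:
    """Sketch of a rehearsal buffer for continual learning: keep a
    bounded sample of past training examples and mix them into new
    batches to counteract catastrophic forgetting."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def insert(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # reservoir sampling: keep each seen example with equal
            # probability by replacing a stored one at random
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def rehearse(self, batch, k):
        # augment a new batch with up to k replayed past examples
        k = min(k, len(self.buffer))
        return batch + self.rng.sample(self.buffer, k)
```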
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.