Continual learning of longitudinal health records
- URL: http://arxiv.org/abs/2112.11944v1
- Date: Wed, 22 Dec 2021 15:08:45 GMT
- Title: Continual learning of longitudinal health records
- Authors: J. Armstrong, D. Clifton
- Abstract summary: We evaluate a variety of continual learning methods on longitudinal ICU data.
We find that while several methods mitigate short-term forgetting, domain shift remains a challenging problem over long series of tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning denotes machine learning methods which can adapt to new
environments while retaining and reusing knowledge gained from past
experiences. Such methods address two issues encountered by models in
non-stationary environments: a failure to generalise to new data, and
catastrophic forgetting of previous knowledge when retrained. This is a
pervasive problem in clinical settings where patient data exhibits covariate
shift not only between populations, but also continuously over time. However,
while continual learning methods have seen nascent success in the imaging
domain, they have seen little application to the multivariate sequential data
characteristic of critical-care patient recordings.
Here we evaluate a variety of continual learning methods on longitudinal ICU
data in a series of representative healthcare scenarios. We find that while
several methods mitigate short-term forgetting, domain shift remains a
challenging problem over long series of tasks, with only replay-based methods
achieving stable long-term performance.
Code for reproducing all experiments can be found at
https://github.com/iacobo/continual
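For intuition about the replay-based family the abstract singles out, a minimal sketch of experience replay with a reservoir-sampled buffer follows. The buffer capacity, sampling scheme, and training loop are assumptions of this illustration, not the authors' configuration (see the repository above for that):

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-sampled store of past (x, y) pairs. Capacity and
    sampling scheme are assumptions of this sketch."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []   # retained examples from earlier tasks
        self.seen = 0    # total examples offered so far

    def add(self, x, y):
        # Reservoir sampling keeps a uniform sample over the stream.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        xs, ys = zip(*random.sample(self.data, min(k, len(self.data))))
        return torch.stack(xs), torch.stack(ys)

def train_batch(model, opt, loss_fn, x, y, buffer, replay_k=32):
    # Mix the current minibatch with replayed examples from old tasks.
    loss = loss_fn(model(x), y)
    if buffer.data:
        rx, ry = buffer.sample(replay_k)
        loss = loss + loss_fn(model(rx), ry)
    opt.zero_grad()
    loss.backward()
    opt.step()
    for xi, yi in zip(x, y):
        buffer.add(xi, yi)
```

Mixing each update with stored examples from earlier tasks is what gives replay its reported stability over long task sequences.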
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes replaying data from previously experienced tasks when learning new ones.
However, storing raw data is often impractical given memory constraints and data-privacy concerns.
As a replacement, data-free replay methods synthesise samples by inverting the classification model.
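To make the inversion idea concrete, a toy sketch follows; the optimiser, step count, and input dimensionality are assumptions, and this is not the paper's exact procedure:

```python
import torch
from torch import nn

def invert_samples(model, target_class, n=16, input_dim=64, steps=200, lr=0.1):
    """Synthesise pseudo-samples for target_class from a frozen classifier.
    All hyperparameters here are illustrative assumptions."""
    for p in model.parameters():          # freeze the classifier
        p.requires_grad_(False)
    x = torch.randn(n, input_dim, requires_grad=True)
    labels = torch.full((n,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Drive the frozen model's prediction on x towards target_class.
        loss = nn.functional.cross_entropy(model(x), labels)
        loss.backward()
        opt.step()
    return x.detach(), labels
```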
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance among prior-based methods on challenging single-headed class-incremental experiments.
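BAdam's exact update rule is not reproduced in the summary; as a generic illustration of the prior-based family it extends, the sketch below applies an EWC-style quadratic penalty anchoring parameters to their post-task values (the precision estimates and penalty strength are assumptions):

```python
import torch

def snapshot(model):
    # Record parameter values after finishing a task.
    return {n: p.detach().clone() for n, p in model.named_parameters()}

def prior_penalty(model, prior_means, precisions, strength=1.0):
    # Quadratic penalty pulling each parameter towards its post-task
    # value, weighted by a per-parameter precision estimate.
    # Illustrates the prior-based family generically; not BAdam itself.
    penalty = sum(
        (precisions[n] * (p - prior_means[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )
    return strength * penalty
```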
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch?
Experience replay is a well-known continual learning method.
Our experiments show that replay achieves positive backward transfer and reduces catastrophic forgetting.
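The backward-transfer measure referenced above is conventionally computed from a matrix of per-task accuracies; the helper below illustrates that standard definition (an illustration, not this paper's evaluation code):

```python
import numpy as np

def backward_transfer(acc):
    """acc[i, j]: accuracy on task j after training on tasks 0..i.
    Positive BWT means later training improved earlier tasks."""
    T = acc.shape[0]
    return np.mean([acc[T - 1, j] - acc[j, j] for j in range(T - 1)])
```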
arXiv Detail & Related papers (2022-10-27T00:32:13Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
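The summary does not detail the generative model; one common instantiation of the idea is a class-conditional Gaussian fitted to features from the frozen extractor, sketched here as an assumption rather than the paper's exact model:

```python
import numpy as np

class GaussianClassHeads:
    """Class-conditional Gaussians over frozen features: new classes
    are added without revisiting old data. Illustrative only."""
    def __init__(self):
        self.means = {}

    def add_class(self, label, feats):
        # feats: (n, d) array of extractor outputs for one class.
        self.means[label] = feats.mean(axis=0)

    def predict(self, feats):
        labels = list(self.means)
        # Under isotropic Gaussians, the most likely class is the
        # one whose mean is nearest in squared distance.
        d = np.stack([((feats - self.means[c]) ** 2).sum(axis=1)
                      for c in labels], axis=1)
        return [labels[i] for i in d.argmin(axis=1)]
```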
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- LifeLonger: A Benchmark for Continual Disease Classification
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task- and class-incremental learning of diseases addresses the problem of classifying new samples without retraining models from scratch.
Cross-domain incremental learning addresses datasets originating from different institutions while retaining previously acquired knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases
FedCy is a federated semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
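FedCy's full training loop is not reproduced in the summary; the sketch below shows only the generic federated averaging step such methods build on (the client weighting is an assumption):

```python
import torch

def fedavg(client_states, weights):
    # One round of federated averaging: weight each client's parameters
    # (e.g. by local dataset size) and average. Generic FL step only;
    # FedCy's self-supervised objectives are not reproduced here.
    total = float(sum(weights))
    return {
        k: sum(w * sd[k] for sd, w in zip(client_states, weights)) / total
        for k in client_states[0]
    }
```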
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One long-standing difficulty in building adaptive agents is that neural systems struggle to retain previously acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Continual Active Learning Using Pseudo-Domains for Limited Labelling Resources and Changing Acquisition Characteristics
Machine learning in medical imaging during clinical routine is impaired by changes in scanner protocols, hardware, or policies.
We propose a method for continual active learning operating on a stream of medical images in a multi-scanner setting.
arXiv Detail & Related papers (2021-11-25T13:11:49Z)
- Adversarial Continual Learning for Multi-Domain Hippocampal Segmentation
Deep learning for medical imaging suffers from temporal and privacy-related restrictions on data availability.
We propose an architecture that leverages the simultaneous availability of two or more datasets to learn a disentanglement between the content and domain.
We showcase that our method reduces catastrophic forgetting and outperforms state-of-the-art continual learning methods.
arXiv Detail & Related papers (2021-07-19T10:55:21Z)
- What is Wrong with Continual Learning in Medical Image Segmentation?
Continual learning protocols are attracting increasing attention from the medical imaging community.
In a continual setup, data from different sources arrives sequentially and each batch is only available for a limited period.
We show that the benchmark outperforms two popular continual learning methods for the task of T2-weighted MR prostate segmentation.
arXiv Detail & Related papers (2020-10-21T13:48:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.