Leveraging Old Knowledge to Continually Learn New Classes in Medical
Images
- URL: http://arxiv.org/abs/2303.13752v1
- Date: Fri, 24 Mar 2023 02:10:53 GMT
- Title: Leveraging Old Knowledge to Continually Learn New Classes in Medical
Images
- Authors: Evelyn Chee, Mong Li Lee, Wynne Hsu
- Abstract summary: We focus on how old knowledge can be leveraged to learn new classes without catastrophic forgetting.
Our solution achieves superior performance over state-of-the-art baselines in terms of class accuracy and forgetting.
- Score: 16.730335437094592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-incremental continual learning is a core step towards developing
artificial intelligence systems that can continuously adapt to changes in the
environment by learning new concepts without forgetting those previously
learned. This is especially needed in the medical domain where continually
learning from new incoming data is required to classify an expanded set of
diseases. In this work, we focus on how old knowledge can be leveraged to learn
new classes without catastrophic forgetting. We propose a framework that
comprises two main components: (1) a dynamic architecture with expanding
representations to preserve previously learned features and accommodate new
features; and (2) a training procedure alternating between two objectives to
balance the learning of new features while maintaining the model's performance
on old classes. Experimental results on multiple medical datasets show that our
solution achieves superior performance over state-of-the-art baselines in terms
of class accuracy and forgetting.
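As a rough sketch of what such a design could look like in PyTorch (all module and function names below are hypothetical, not the authors' code): each task freezes the existing feature branches, appends a new trainable branch, and training alternates between a cross-entropy objective on new classes and a distillation objective that keeps old-class outputs stable.

```python
# Hypothetical sketch, not the authors' implementation: a feature extractor
# that grows one branch per task (old branches frozen) and a training step
# that alternates between learning new classes and preserving old outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpandingNet(nn.Module):
    def __init__(self, in_dim=512, feat_dim=128, num_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([nn.Linear(in_dim, feat_dim)])
        self.head = nn.Linear(feat_dim, num_classes)

    def add_task(self, num_new_classes):
        # preserve previously learned features: freeze every existing branch
        for branch in self.branches:
            for p in branch.parameters():
                p.requires_grad_(False)
        in_dim = self.branches[0].in_features
        feat_dim = self.branches[0].out_features
        self.branches.append(nn.Linear(in_dim, feat_dim))  # room for new features
        # widen the classifier over the concatenated representation
        # (a real system would also carry over the old head's weights)
        self.head = nn.Linear(feat_dim * len(self.branches),
                              self.head.out_features + num_new_classes)

    def forward(self, x):
        return self.head(torch.cat([b(x) for b in self.branches], dim=1))

def train_step(model, prev_model, x, y, step, T=2.0):
    logits = model(x)
    if prev_model is None or step % 2 == 0:
        return F.cross_entropy(logits, y)          # objective 1: new classes
    with torch.no_grad():                          # objective 2: old classes
        old_logits = prev_model(x)
    n_old = old_logits.size(1)
    return F.kl_div(F.log_softmax(logits[:, :n_old] / T, dim=1),
                    F.softmax(old_logits / T, dim=1),
                    reduction="batchmean") * T * T
```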
Related papers
- Evolving Knowledge Mining for Class Incremental Segmentation [113.59611699693092]
Class Incremental Semantic Segmentation (CISS) has recently attracted growing attention due to its significance in real-world applications.
We propose a novel method, Evolving kNowleDge minING, employing a frozen backbone.
We evaluate our method on two widely used benchmarks and consistently demonstrate new state-of-the-art performance.
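The frozen-backbone idea can be illustrated in a few lines of PyTorch (an assumed setup, not the paper's code): the pretrained encoder is never updated, so earlier knowledge cannot be overwritten, and only a lightweight head is trained at each incremental step.

```python
# Illustrative only: a fixed pretrained encoder plus a trainable 1x1-conv
# head; the class count and backbone choice are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet18

resnet = resnet18(weights="IMAGENET1K_V1")
encoder = nn.Sequential(*list(resnet.children())[:-2])  # keep the spatial map
for p in encoder.parameters():
    p.requires_grad_(False)   # the backbone stays frozen across all steps
encoder.eval()

head = nn.Conv2d(512, 21, kernel_size=1)    # per-pixel class scores
x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    feats = encoder(x)                      # shape (2, 512, 7, 7)
logits = head(feats)                        # coarse segmentation logits
```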
arXiv Detail & Related papers (2023-06-03T07:03:15Z)
- A Forward and Backward Compatible Framework for Few-shot Class-incremental Pill Recognition [24.17119669744624]
This paper introduces the first few-shot class-incremental pill recognition framework.
It encompasses forward-compatible and backward-compatible learning components.
Our experimental results demonstrate that our framework surpasses existing state-of-the-art (SOTA) methods.
arXiv Detail & Related papers (2023-04-24T09:53:21Z)
- Adapter Learning in Pretrained Feature Extractor for Continual Learning of Diseases [66.27889778566734]
Current intelligent diagnosis systems lack the ability to continually learn to diagnose new diseases once deployed.
In particular, updating an intelligent diagnosis system with training data of new diseases would cause catastrophic forgetting of old disease knowledge.
An adapter-based Continual Learning framework called ACL is proposed to help effectively learn a set of new diseases.
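A generic bottleneck adapter gives a feel for the approach (ACL's exact design may differ; this is a standard residual adapter with illustrative names): only these small modules and a task head receive gradients, while the shared pretrained weights stay fixed.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter inserted after a frozen feature block."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)   # start as an identity mapping,
        nn.init.zeros_(self.up.bias)     # so pretrained behaviour is kept

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))
```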
arXiv Detail & Related papers (2023-04-18T15:01:45Z)
- Class-Incremental Learning of Plant and Disease Detection: Growing Branches with Knowledge Distillation [0.0]
This paper investigates the problem of class-incremental object detection for agricultural applications.
We adapt two public datasets to include new categories over time, simulating a more realistic and dynamic scenario.
We compare three class-incremental learning methods that leverage different forms of knowledge distillation to mitigate catastrophic forgetting.
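Two common forms of distillation used in this setting are sketched below (illustrative recipes, not the paper's specific variants): matching the previous model's softened class scores, and matching its intermediate feature maps.

```python
import torch.nn.functional as F

def logit_distillation(new_logits, old_logits, T=2.0):
    # match the previous model's softened predictions on the old classes
    n_old = old_logits.size(1)
    return F.kl_div(F.log_softmax(new_logits[:, :n_old] / T, dim=1),
                    F.softmax(old_logits / T, dim=1),
                    reduction="batchmean") * T * T

def feature_distillation(new_feats, old_feats):
    # keep intermediate representations close to the previous model's
    return F.mse_loss(new_feats, old_feats.detach())
```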
arXiv Detail & Related papers (2023-04-13T15:40:41Z)
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity).
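One way such a trade-off can be made adaptive rather than fixed (a guess at the general idea, not the paper's formulation): weight the distillation term per sample by how confident the old model is, measured by its predictive entropy.

```python
# Speculative illustration: per-sample stability weights from the old
# model's prediction uncertainty.
import math
import torch
import torch.nn.functional as F

def stability_weight(old_logits):
    # low entropy (confident old prediction) -> weight near 1 -> stay stable
    p = F.softmax(old_logits, dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1)
    return 1.0 - entropy / math.log(p.size(1))   # per-sample weight in [0, 1]

def total_loss(ce_loss, kd_per_sample, old_logits):
    w = stability_weight(old_logits)
    return ce_loss + (w * kd_per_sample).mean()  # plasticity + weighted stability
```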
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
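A toy instance of the generative idea (much simpler than the paper's Bayesian model, and only an illustration): fit one Gaussian per class on frozen features; a new class just adds a new Gaussian, so earlier classes are untouched.

```python
import torch

class GaussianClassifier:
    """Class-conditional diagonal Gaussians over fixed extractor features."""
    def __init__(self):
        self.means, self.vars = {}, {}

    def add_class(self, label, feats):
        # feats: (n, d) features of this class from the frozen extractor
        self.means[label] = feats.mean(dim=0)
        self.vars[label] = feats.var(dim=0) + 1e-4

    def predict(self, f):
        best, best_score = None, float("-inf")
        for c in self.means:
            m, v = self.means[c], self.vars[c]
            # log-likelihood up to constants (uniform class prior assumed)
            score = -(((f - m) ** 2 / v) + v.log()).sum().item()
            if score > best_score:
                best, best_score = c, score
        return best
```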
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
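The class-incremental protocol behind such a benchmark is easy to sketch (generic code, not LifeLonger's own tooling): the label space is partitioned into groups of classes that arrive one step at a time.

```python
import numpy as np

def class_incremental_splits(labels, classes_per_task):
    classes = np.unique(labels)
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = classes[start:start + classes_per_task]
        idx = np.flatnonzero(np.isin(labels, task_classes))
        tasks.append((task_classes, idx))   # sample indices for this step
    return tasks

# e.g. 8 disease classes seen 2 at a time -> 4 incremental steps
labels = np.random.randint(0, 8, size=1000)
for t, (cls, idx) in enumerate(class_incremental_splits(labels, 2)):
    print(f"step {t}: classes {cls.tolist()}, {len(idx)} samples")
```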
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- The Role of Bio-Inspired Modularity in General Learning [0.0]
One goal of general intelligence is to learn novel information without overwriting prior learning.
Bootstrapping previous knowledge may allow for faster learning of a novel task.
Modularity may offer a solution for weight-update learning methods that satisfies both constraints: learning without catastrophic forgetting and bootstrapping previous knowledge.
arXiv Detail & Related papers (2021-09-23T18:45:34Z)
- Few-Shot Class-Incremental Learning [68.75462849428196]
We focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones.
We represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes.
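The textbook neural gas update conveys the core mechanism (the paper adds machinery for few-shot increments not shown here): every prototype moves toward the input, with a step size that decays with distance rank, letting the node set track the topology of the feature manifold.

```python
import numpy as np

def neural_gas_step(units, x, eps=0.1, lam=2.0):
    # rank every prototype by its distance to the input feature vector x
    dists = np.linalg.norm(units - x, axis=1)
    ranks = np.argsort(np.argsort(dists))         # 0 = closest prototype
    # closer-ranked prototypes move more strongly toward x
    return units + eps * np.exp(-ranks / lam)[:, None] * (x - units)

units = np.random.randn(20, 64)   # 20 prototypes in a 64-d feature space
x = np.random.randn(64)
units = neural_gas_step(units, x)
```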
arXiv Detail & Related papers (2020-04-23T03:38:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.