Adversarial Continual Learning for Multi-Domain Hippocampal Segmentation
- URL: http://arxiv.org/abs/2107.08751v3
- Date: Wed, 21 Jul 2021 07:10:28 GMT
- Title: Adversarial Continual Learning for Multi-Domain Hippocampal Segmentation
- Authors: Marius Memmel, Camila Gonzalez, Anirban Mukhopadhyay
- Abstract summary: Deep learning for medical imaging suffers from temporal and privacy-related restrictions on data availability.
We propose an architecture that leverages the simultaneous availability of two or more datasets to learn a disentanglement between the content and domain.
We showcase that our method reduces catastrophic forgetting and outperforms state-of-the-art continual learning methods.
- Score: 0.46023882211671957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning for medical imaging suffers from temporal and privacy-related
restrictions on data availability. To still obtain viable models, continual
learning aims to train in sequential order, as and when data is available. The
main challenge that continual learning methods face is to prevent catastrophic
forgetting, i.e., a decrease in performance on the data encountered earlier.
This issue makes continuous training of segmentation models for medical
applications extremely difficult. Yet, often, data from at least two different
domains is available which we can exploit to train the model in a way that it
disregards domain-specific information. We propose an architecture that
leverages the simultaneous availability of two or more datasets to learn a
disentanglement between the content and domain in an adversarial fashion. The
domain-invariant content representation then lays the base for continual
semantic segmentation. Our approach takes inspiration from domain adaptation
and combines it with continual learning for hippocampal segmentation in brain
MRI. We showcase that our method reduces catastrophic forgetting and
outperforms state-of-the-art continual learning methods.
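The core idea — adversarially disentangling content from domain — is commonly implemented with a gradient-reversal layer: a domain classifier is trained on the encoder's features, while reversed gradients push the encoder toward domain-invariant representations. Below is a minimal, hypothetical PyTorch sketch of that pattern; it is not the authors' architecture, and all module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class DisentangledSegmenter(nn.Module):
    """Toy encoder whose content features feed a segmentation head directly
    and a domain classifier through gradient reversal."""
    def __init__(self, n_domains=2, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(8, n_classes, 1)
        self.domain_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, n_domains))

    def forward(self, x, lamb=1.0):
        content = self.encoder(x)
        seg = self.seg_head(content)                              # segmentation logits
        dom = self.domain_head(GradReverse.apply(content, lamb))  # domain logits
        return seg, dom

model = DisentangledSegmenter()
x = torch.randn(4, 1, 32, 32)   # batch of 4 single-channel slices (stand-in for MRI)
seg_logits, dom_logits = model(x)
```

Training minimizes segmentation loss plus domain-classification loss; because gradients from the domain head are reversed at the encoder, the encoder learns features the domain classifier cannot separate.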
Related papers

- A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation [6.591403935303867]
We introduce a novel segmentation paradigm enabling the segmentation of a variable number of classes within a single classifier-free network.
This network is trained using contrastive learning and produces discriminative feature representations that facilitate straightforward interpretation.
We demonstrate the flexibility of our method in handling varying class numbers within a unified network and its capacity for incremental learning.
arXiv Detail & Related papers (2024-05-25T19:05:07Z)
- Explainable Semantic Medical Image Segmentation with Style [7.074258860680265]
We propose a fully supervised generative framework that can achieve generalisable segmentation with only limited labelled data.
The proposed approach creates medical image style paired with a segmentation task driven discriminator incorporating end-to-end adversarial training.
Experiments on a fully semantic, publicly available pelvis dataset demonstrated that our method is more generalisable to shifts than other state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T04:34:51Z)
- Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch? [8.691839346510116]
Experience replay is a well-known continual learning method.
We show that replay is able to achieve positive backward transfer and reduce catastrophic forgetting.
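Experience replay works by retaining a small buffer of past examples and mixing them into later training batches. The sketch below is an illustrative, hypothetical implementation using reservoir sampling (a common buffer policy, not necessarily the one used in this paper); all names are invented for the example.

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer using reservoir sampling, so every example
    seen so far has an equal chance of being retained."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = item  # replace with decreasing probability

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for task_id in range(3):            # e.g. data arriving from three hospitals
    for i in range(1000):
        buf.add((task_id, i))       # store (task, example) pairs
mixed = buf.sample(32)              # replayed examples mixed into new batches
```

Interleaving `mixed` with the current task's batches is what lets the model revisit earlier domains and achieve the backward transfer described above.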
arXiv Detail & Related papers (2022-10-27T00:32:13Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims to alleviate catastrophic forgetting and improve generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise introduced by differential privacy makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- Continual learning of longitudinal health records [0.0]
We evaluate a variety of continual learning methods on longitudinal ICU data.
We find that while several methods mitigate short-term forgetting, domain shift remains a challenging problem over large series of tasks.
arXiv Detail & Related papers (2021-12-22T15:08:45Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Adversarial Semantic Hallucination for Domain Generalized Semantic Segmentation [50.14933487082085]
We propose an adversarial hallucination approach, which combines a class-wise hallucination module and a semantic segmentation module.
Experiments against state-of-the-art domain adaptation methods demonstrate the efficacy of our proposed method when no target-domain data are available for training.
arXiv Detail & Related papers (2021-06-08T07:07:45Z)
- What is Wrong with Continual Learning in Medical Image Segmentation? [1.2020488155038649]
Continual learning protocols are attracting increasing attention from the medical imaging community.
In a continual setup, data from different sources arrives sequentially and each batch is only available for a limited period.
We show that the benchmark outperforms two popular continual learning methods for the task of T2-weighted MR prostate segmentation.
arXiv Detail & Related papers (2020-10-21T13:48:37Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
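Mutual knowledge distillation, as in the last entry above, typically trains two models to match each other's softened predictions. The following NumPy sketch shows one common form of that loss (symmetric KL between temperature-softened softmax outputs); it is an illustrative assumption, not the paper's exact formulation, and all function names are invented.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """Mean KL divergence KL(p || q) over a batch of distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

def mutual_distillation_loss(logits_a, logits_b, T=2.0):
    """Symmetric KL between two models' softened predictions,
    scaled by T^2 as is conventional in distillation."""
    pa, pb = softmax(logits_a, T), softmax(logits_b, T)
    return 0.5 * (kl_div(pa, pb) + kl_div(pb, pa)) * T * T
```

Each model adds this term to its own task loss, so modality-shared knowledge flows in both directions during joint training.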
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.