Domain-incremental Cardiac Image Segmentation with Style-oriented Replay
and Domain-sensitive Feature Whitening
- URL: http://arxiv.org/abs/2211.04862v1
- Date: Wed, 9 Nov 2022 13:07:36 GMT
- Title: Domain-incremental Cardiac Image Segmentation with Style-oriented Replay
and Domain-sensitive Feature Whitening
- Authors: Kang Li, Lequan Yu, and Pheng-Ann Heng
- Abstract summary: The desired model should incrementally learn from each incoming dataset and progressively update with improved functionality as time goes by.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
- Score: 67.6394526631557
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Contemporary methods have shown promising results on cardiac image
segmentation, but merely in static learning, i.e., optimizing the network once
and for all, ignoring potential needs for model updating. In real-world scenarios,
new data continues to be gathered from multiple institutions over time and new
demands keep growing to pursue more satisfying performance. The desired model
should incrementally learn from each incoming dataset and progressively update
with improved functionality as time goes by. As the datasets sequentially
delivered from multiple sites are normally heterogeneous with domain
discrepancy, each updated model should not catastrophically forget previously
learned domains while well generalizing to currently arrived domains or even
unseen domains. In medical scenarios, this is particularly challenging as
accessing or storing past data is commonly not allowed due to data privacy. To
this end, we propose a novel domain-incremental learning framework to recover
past domain inputs first and then regularly replay them during model
optimization. Particularly, we first present a style-oriented replay module to
enable structure-realistic and memory-efficient reproduction of past data, and
then incorporate the replayed past data to jointly optimize the model with
current data to alleviate catastrophic forgetting. During optimization, we
additionally perform domain-sensitive feature whitening to suppress the model's
dependency on features that are sensitive to domain changes (e.g.,
domain-distinctive style features) to assist domain-invariant feature
exploration and gradually improve the generalization performance of the
network. We have extensively evaluated our approach on the M&Ms Dataset in
single-domain and compound-domain incremental learning settings, achieving
improved performance over comparison approaches.
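The abstract names two concrete mechanisms: style-oriented replay of past domains and domain-sensitive feature whitening. The paper's code is not reproduced here, so the following is only a minimal PyTorch-style sketch under stated assumptions: past-domain style is summarized by per-channel feature statistics (an AdaIN-style convention), replay re-stylizes current features with stored past statistics, and whitening is approximated by pushing the channel covariance toward identity. All identifiers (StyleBank, restylize, whitening_penalty) are hypothetical, not from the paper.

```python
# Minimal sketch, NOT the authors' released code: it illustrates (a) style-
# oriented replay via stored per-channel feature statistics and (b) a
# whitening-style penalty on channel covariance. All names are assumptions.
import torch


def channel_stats(f, eps=1e-5):
    """Per-channel mean/std over spatial dims; f has shape (N, C, H, W)."""
    mu = f.mean(dim=(2, 3), keepdim=True)
    sigma = f.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mu, sigma


def restylize(f, mu_past, sigma_past):
    """AdaIN-style re-stylization: keep content (normalized activations),
    swap in a stored past-domain style (mean/std)."""
    mu, sigma = channel_stats(f)
    return (f - mu) / sigma * sigma_past + mu_past


class StyleBank:
    """Replay buffer that stores only O(C)-sized style statistics per past
    domain instead of raw images, keeping memory cost low and avoiding
    direct retention of patient data."""

    def __init__(self):
        self.styles = []  # list of (mu, sigma), each of shape (1, C, 1, 1)

    def add(self, f):
        mu, sigma = channel_stats(f)
        self.styles.append((mu.mean(0, keepdim=True).detach(),
                            sigma.mean(0, keepdim=True).detach()))


def whitening_penalty(f):
    """Push the channel covariance of the features toward identity; strongly
    correlated channels tend to carry style, so this is one simple proxy for
    suppressing domain-sensitive responses."""
    n, c, h, w = f.shape
    x = f.reshape(n, c, h * w)
    x = x - x.mean(dim=2, keepdim=True)
    cov = torch.bmm(x, x.transpose(1, 2)) / (h * w - 1)  # (N, C, C)
    eye = torch.eye(c, device=f.device).expand_as(cov)
    return ((cov - eye) ** 2).mean()
```

In a training loop, one would extract shallow features f for the current batch, form replayed variants with restylize(f, *bank.styles[k]) for each stored past style, and optimize the segmentation loss on both alongside lam * whitening_penalty(f). Note that the paper's actual replay module reconstructs structure-realistic past-domain images and its whitening acts only on domain-sensitive features, so this sketch simplifies both steps.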
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
When the target domain instead evolves, restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Memory-Efficient Prompt Tuning for Incremental Histopathology Classification [69.46798702300042]
We present a memory-efficient prompt tuning framework to cultivate model generalization potential at an economical memory cost.
We have extensively evaluated our framework with two histopathology tasks, i.e., breast cancer metastasis classification and epithelium-stroma tissue classification.
arXiv Detail & Related papers (2024-01-22T03:24:45Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims at alleviating catastrophic forgetting and improving generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- Multi-Domain Incremental Learning for Semantic Segmentation [42.30646442211311]
We propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features.
We demonstrate the effectiveness of our proposed solution on domain-incremental settings pertaining to real-world driving scenes from roads of Germany (Cityscapes), the United States (BDD100k), and India (IDD).
arXiv Detail & Related papers (2021-10-23T12:21:42Z)
- Cross-domain Time Series Forecasting with Attention Sharing [10.180248006928107]
We propose a novel domain adaptation framework, Domain Adaptation Forecaster (DAF), to cope with the issue of data scarcity.
In particular, we propose an attention-based shared module with a domain discriminator across domains as well as private modules for individual domains.
This allows us to jointly train the source and target domains by generating domain-invariant latent features while retaining domain-specific features; a minimal sketch of this shared/private pattern follows this entry.
arXiv Detail & Related papers (2021-02-13T00:26:35Z)
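The shared/private split with a domain discriminator described above is commonly realized with a gradient reversal layer; the sketch below is a hypothetical illustration of that pattern, not the DAF release, and every module name in it is an assumption.

```python
# Hypothetical sketch of a shared/private forecaster with an adversarial
# domain discriminator via gradient reversal; NOT the DAF code.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, sign-flipped gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None


class SharedPrivateForecaster(nn.Module):
    def __init__(self, d_in, d_model, horizon, n_domains, lam=1.0):
        super().__init__()
        self.lam = lam
        self.embed = nn.Linear(d_in, d_model)
        self.shared = nn.TransformerEncoder(  # attention module shared by all domains
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.private = nn.ModuleList(          # one private head per domain
            [nn.Linear(d_model, horizon) for _ in range(n_domains)])
        self.disc = nn.Linear(d_model, n_domains)  # domain discriminator

    def forward(self, x, domain_id):
        h = self.shared(self.embed(x))        # (N, T, d_model)
        z = h.mean(dim=1)                     # pooled latent representation
        y_hat = self.private[domain_id](z)    # domain-specific forecast
        d_logits = self.disc(GradReverse.apply(z, self.lam))
        return y_hat, d_logits
```

Training would minimize the forecasting loss on y_hat plus a domain-classification loss on d_logits; because gradients through GradReverse are sign-flipped, the shared encoder is pushed toward domain-invariant latents while each private head keeps its domain-specific mapping.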
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Latent Domain Learning with Dynamic Residual Adapters [26.018759356470767]
A practical shortcoming of deep neural networks is their specialization to a single task and domain.
Here we focus on a less explored, but more realistic case: learning from data from multiple domains, without access to domain annotations.
We address this limitation via dynamic residual adapters, an adaptive gating mechanism that helps account for latent domains; a minimal sketch of such a gated adapter follows below.
arXiv Detail & Related papers (2020-06-01T15:00:11Z)
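For the adaptive gating idea above, a minimal sketch might look as follows; the class name and the 1x1-conv adapter choice are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumption, not the paper's code): several small adapters
# share one backbone layer, and an input-conditioned gate mixes their
# residual corrections, so no domain labels are needed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicResidualAdapter(nn.Module):
    def __init__(self, channels, n_adapters=4):
        super().__init__()
        self.adapters = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1)
             for _ in range(n_adapters)])            # lightweight 1x1 convs
        self.gate = nn.Linear(channels, n_adapters)  # gate from pooled features

    def forward(self, x):                            # x: (N, C, H, W)
        g = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)    # (N, K)
        delta = torch.stack([a(x) for a in self.adapters], 1)  # (N, K, C, H, W)
        return x + (g[:, :, None, None, None] * delta).sum(1)  # gated residual
```

Because the gate is conditioned on the input's own pooled features rather than a domain label, the block can specialize to latent domains without annotations.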