Incremental Learning for Heterogeneous Structure Segmentation in Brain
Tumor MRI
- URL: http://arxiv.org/abs/2305.19404v1
- Date: Tue, 30 May 2023 20:39:03 GMT
- Title: Incremental Learning for Heterogeneous Structure Segmentation in Brain
Tumor MRI
- Authors: Xiaofeng Liu, Helen A. Shih, Fangxu Xing, Emiliano Santarnecchi,
Georges El Fakhri, Jonghye Woo
- Abstract summary: We propose a divergence-aware dual-flow module with balanced rigidity and plasticity branches to decouple old and new tasks.
We evaluate our framework on a brain tumor segmentation task with continually changing target domains.
- Score: 11.314017805825685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning (DL) models for segmenting various anatomical structures have
achieved great success when trained as static models in a single source
domain. Yet, a static DL model is likely to perform poorly in a continually
evolving environment, requiring appropriate model updates. In an incremental
learning setting, we would expect that well-trained static models are updated,
following continually evolving target domain data -- e.g., additional lesions
or structures of interest -- collected from different sites, without
catastrophic forgetting. This, however, poses challenges, due to distribution
shifts, additional structures not seen during the initial model training, and
the absence of training data in a source domain. To address these challenges,
in this work, we seek to progressively evolve an "off-the-shelf" trained
segmentation model to diverse datasets with additional anatomical categories in
a unified manner. Specifically, we first propose a divergence-aware dual-flow
module with balanced rigidity and plasticity branches to decouple old and new
tasks, which is guided by continuous batch renormalization. Then, a
complementary pseudo-label training scheme with self-entropy regularized
momentum MixUp decay is developed for adaptive network optimization. We
evaluated our framework on a brain tumor segmentation task with continually
changing target domains -- i.e., new MRI scanners/modalities with incremental
structures. Our framework was able to retain the discriminability of
previously learned structures well, hence enabling realistic life-long
extension of the segmentation model alongside the widespread accumulation of big
medical data.
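As a rough illustration of the pseudo-label training scheme described above, the sketch below implements a self-entropy regularizer and a MixUp with a decaying mixing coefficient. The function names, the linear decay schedule, and the initial coefficient are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def momentum_mixup(x_target, x_pseudo, step, total_steps, lam0=0.9):
    """Blend a target-domain sample with its pseudo-labeled counterpart.
    The mixing coefficient decays linearly over training, so later steps
    rely more on the raw target data (hypothetical schedule)."""
    lam = lam0 * (1.0 - step / total_steps)
    return lam * x_pseudo + (1.0 - lam) * x_target

def self_entropy(probs, eps=1e-12):
    """Mean per-voxel self-entropy of softmax outputs; minimizing it as a
    regularizer pushes the network toward confident predictions on
    unlabeled target data."""
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=-1)))
```

For instance, uniform class probabilities over two classes give the maximum self-entropy ln 2 ≈ 0.693, while one-hot predictions give entropy near zero.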
Related papers
- Advancing Brain Imaging Analysis Step-by-step via Progressive Self-paced Learning [0.5840945370755134]
We introduce the Progressive Self-Paced Distillation (PSPD) framework, employing an adaptive and progressive pacing and distillation mechanism.
We validate PSPD's efficacy and adaptability across various convolutional neural networks using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
arXiv Detail & Related papers (2024-07-23T02:26:04Z) - Curriculum-Based Augmented Fourier Domain Adaptation for Robust Medical
Image Segmentation [18.830738606514736]
This work proposes the Curriculum-based Augmented Fourier Domain Adaptation (Curri-AFDA) for robust medical image segmentation.
In particular, our curriculum learning strategy is based on the causal relationship of a model under different levels of data shift.
Experiments on two segmentation tasks of Retina and Nuclei collected from multiple sites and scanners suggest that our proposed method yields superior adaptation and generalization performance.
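For context, the core operation in Fourier domain adaptation is replacing the low-frequency amplitude spectrum of a source image with that of a target image while keeping the source phase. A minimal sketch follows; the curriculum scheduling of the swap size in Curri-AFDA is not reproduced, and `beta=0.1` is an assumed value:

```python
import numpy as np

def fourier_amplitude_mix(src, tgt, beta=0.1):
    """Swap the centered low-frequency amplitude band (a central square of
    relative size `beta`) of `src` with that of `tgt`, keeping src's phase."""
    fs, ft = np.fft.fft2(src), np.fft.fft2(tgt)
    pha_s = np.angle(fs)
    # shift so low frequencies sit at the center, then swap a central square
    amp_s = np.fft.fftshift(np.abs(fs))
    amp_t = np.fft.fftshift(np.abs(ft))
    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = \
        amp_t[ch - bh:ch + bh, cw - bw:cw + bw]
    mixed = np.fft.ifftshift(amp_s) * np.exp(1j * pha_s)
    return np.real(np.fft.ifft2(mixed))
```

Setting `beta=0.0` swaps nothing and recovers the source image, which makes the parameter a natural knob for a curriculum over increasing data shift.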
arXiv Detail & Related papers (2023-06-06T08:56:58Z) - TWINS: A Fine-Tuning Framework for Improved Transferability of
Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormaliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z) - Learning to Augment via Implicit Differentiation for Domain
Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the domain-shift problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z) - DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z) - Reducing Catastrophic Forgetting in Self Organizing Maps with
Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z) - Addressing catastrophic forgetting for medical domain expansion [9.720534481714953]
Model brittleness is a key concern when deploying deep learning models in real-world medical settings.
A model that has high performance at one institution may suffer a significant decline in performance when tested at other institutions.
We develop an approach to address catastrophic forgetting based on elastic weight consolidation combined with modulation of batch normalization statistics.
arXiv Detail & Related papers (2021-03-24T22:33:38Z) - The unreasonable effectiveness of Batch-Norm statistics in addressing
catastrophic forgetting across medical institutions [8.244654685687054]
We investigate the trade-off between model refinement and retention of previously learned knowledge.
We propose a simple yet effective approach, adapting elastic weight consolidation (EWC) using the global batch normalization statistics of the original dataset.
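A minimal sketch of the underlying idea: the diagonal-Fisher EWC penalty, plus normalizing features with the stored batch-norm statistics of the original dataset rather than per-batch statistics. All names and values here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Elastic weight consolidation: a quadratic anchor keeping the updated
    weights `theta` close to the old-task weights `theta_star`, weighted
    per-parameter by a diagonal Fisher information estimate."""
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_star) ** 2))

def normalize_with_global_stats(features, mean, var, eps=1e-5):
    """Normalize new-site features using the *stored* global batch-norm
    statistics of the original dataset instead of the current batch's."""
    return (features - mean) / np.sqrt(var + eps)
```

The Fisher weighting means parameters that were important for the original task are anchored strongly, while unimportant ones stay free to adapt to the new institution's data.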
arXiv Detail & Related papers (2020-11-16T16:57:05Z) - S2RMs: Spatially Structured Recurrent Modules [105.0377129434636]
We take a step towards models with dynamic structure that are capable of simultaneously exploiting both modular and temporal structures.
We find our models to be robust to the number of available views and better capable of generalization to novel tasks without additional training.
arXiv Detail & Related papers (2020-07-13T17:44:30Z) - Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to
Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z) - Multi-site fMRI Analysis Using Privacy-preserving Federated Learning and
Domain Adaptation: ABIDE Results [13.615292855384729]
To train a high-quality deep learning model, the aggregation of a significant amount of patient information is required.
Due to the need to protect the privacy of patient data, it is hard to assemble a central database from multiple institutions.
Federated learning allows for population-level models to be trained without centralizing entities' data.
arXiv Detail & Related papers (2020-01-16T04:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.