Learning towards Synchronous Network Memorizability and Generalizability
for Continual Segmentation across Multiple Sites
- URL: http://arxiv.org/abs/2206.06813v1
- Date: Tue, 14 Jun 2022 13:04:36 GMT
- Authors: Jingyang Zhang, Peng Xue, Ran Gu, Yuning Gu, Mianxin Liu, Yongsheng
Pan, Zhiming Cui, Jiawei Huang, Lei Ma, Dinggang Shen
- Abstract summary: In clinical practice, a segmentation network is often required to continually learn on a sequential data stream from multiple sites.
Existing methods are usually restricted in either network memorizability on previous sites or generalizability on unseen sites.
This paper aims to tackle the problem of Synchronous Memorizability and Generalizability with a novel SMG-learning framework.
- Score: 52.84959869494459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In clinical practice, a segmentation network is often required to continually
learn on a sequential data stream from multiple sites rather than a
consolidated set, due to the storage cost and privacy restriction. However,
during the continual learning process, existing methods are usually restricted
in either network memorizability on previous sites or generalizability on
unseen sites. This paper aims to tackle the challenging problem of Synchronous
Memorizability and Generalizability (SMG) and to simultaneously improve
performance on both previous and unseen sites, with a novel SMG-learning
framework. First, we propose a Synchronous Gradient Alignment
(SGA) objective, which not only promotes network memorizability by
enforcing coordinated optimization for a small exemplar set from previous sites
(called a replay buffer), but also enhances generalizability by
facilitating site-invariance under simulated domain shift. Second, to simplify
the optimization of SGA objective, we design a Dual-Meta algorithm that
approximates the SGA objective as dual meta-objectives and avoids expensive
computational overhead. Third, for efficient rehearsal, we configure the
replay buffer to additionally account for inter-site diversity, reducing
redundancy. Experiments on prostate MRI data sequentially acquired
from six institutes demonstrate that our method can simultaneously achieve
higher memorizability and generalizability over state-of-the-art methods. Code
is available at https://github.com/jingyzhang/SMG-Learning.
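Where the abstract sketches the SGA objective, a minimal numerical analogue can illustrate the core idea of aligning the gradient on the current site with the gradient on the replay buffer. The toy linear model, the loss, and the exact form of the alignment term below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean squared error 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def sga_step(w, current, replay, lr=0.01, lam=0.01):
    """One descent step on an SGA-style objective for a linear model:
    L_cur + L_buf - lam * <g_cur, g_buf>, where the inner-product term
    rewards update directions that help the current site and the replay
    buffer simultaneously (a hypothetical simplification, not the
    paper's exact objective).
    """
    Xc, yc = current
    Xb, yb = replay
    g_cur, g_buf = grad(w, Xc, yc), grad(w, Xb, yb)
    # For least squares the Hessians are constant, so the gradient of the
    # alignment term -lam * <g_cur, g_buf> is available in closed form:
    Hc, Hb = Xc.T @ Xc / len(yc), Xb.T @ Xb / len(yb)
    g_total = g_cur + g_buf - lam * (Hc @ g_buf + Hb @ g_cur)
    return w - lr * g_total, float(g_cur @ g_buf)
```

Iterating this step drives down the loss on both the current-site batch and the replay batch, while the returned inner product tracks how compatible the two gradients are; the paper's Dual-Meta algorithm plays an analogous role for deep networks, where these Hessian terms would otherwise be expensive.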
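The diversity-aware replay buffer described above can be approximated with a generic greedy farthest-point heuristic over per-sample features; the feature representation and this particular selection rule are assumptions for illustration, not the paper's actual buffer-construction criterion:

```python
import numpy as np

def diverse_exemplars(features, k, first=0):
    """Greedy farthest-point selection of k exemplar indices.

    Starting from index `first`, repeatedly pick the sample farthest (in
    Euclidean distance) from everything chosen so far, so the buffer
    covers the feature space rather than storing near-duplicates.
    """
    chosen = [first]
    dist = np.linalg.norm(features - features[first], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))  # farthest remaining sample
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return chosen
```

Running this on features pooled across sites naturally favors inter-site spread, since samples from an unrepresented site tend to be far from everything already selected.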
Related papers
- Towards Synchronous Memorizability and Generalizability with Site-Modulated Diffusion Replay for Cross-Site Continual Segmentation [50.70671908078593]
This paper proposes a novel training paradigm, learning towards Synchronous Memorizability and Generalizability (SMG-Learning).
We create the orientational gradient alignment to ensure memorizability on previous sites, and arbitrary gradient alignment to enhance generalizability on unseen sites.
Experimental results show that our method efficiently enhances both memorizability and generalizability better than other state-of-the-art methods.
arXiv Detail & Related papers (2024-06-26T03:10:57Z) - Unleashing Network Potentials for Semantic Scene Completion [50.95486458217653]
This paper proposes a novel SSC framework, the Adversarial Modality Modulation Network (AMMNet).
AMMNet introduces two core modules: a cross-modal modulation enabling the interdependence of gradient flows between modalities, and a customized adversarial training scheme leveraging dynamic gradient competition.
Extensive experimental results demonstrate that AMMNet outperforms state-of-the-art SSC methods by a large margin.
arXiv Detail & Related papers (2024-03-12T11:48:49Z) - Meta-Learning Adversarial Bandit Algorithms [55.72892209124227]
We study online meta-learning with bandit feedback.
We learn to tune online mirror descent (OMD) with self-concordant barrier regularizers.
arXiv Detail & Related papers (2023-07-05T13:52:10Z) - Generalized Few-Shot Continual Learning with Contrastive Mixture of
Adapters [59.82088750033897]
We set up a Generalized FSCL (GFSCL) protocol involving both class- and domain-incremental situations.
We find that common continual learning methods have poor generalization ability on unseen domains.
To this end, we propose a rehearsal-free framework based on the Vision Transformer (ViT), named Contrastive Mixture of Adapters (CMoA).
arXiv Detail & Related papers (2023-02-12T15:18:14Z) - Towards Lightweight Cross-domain Sequential Recommendation via External
Attention-enhanced Graph Convolution Network [7.1102362215550725]
Cross-domain Sequential Recommendation (CSR) depicts the evolution of behavior patterns for overlapped users by modeling their interactions from multiple domains.
We introduce a lightweight external attention-enhanced GCN-based framework to solve the above challenges, namely LEA-GCN.
To further simplify the framework structure and aggregate user-specific sequential patterns, we devise a novel dual-channel External Attention (EA) component.
arXiv Detail & Related papers (2023-02-07T03:06:29Z) - Incremental Learning Meets Transfer Learning: Application to Multi-site
Prostate MRI Segmentation [16.50535949349874]
We propose a novel multi-site segmentation framework called incremental-transfer learning (ITL).
ITL learns a model from multi-site datasets in an end-to-end sequential fashion.
We show for the first time that leveraging our ITL training scheme is able to alleviate the challenging catastrophic forgetting problem in incremental learning.
arXiv Detail & Related papers (2022-06-03T02:32:01Z) - Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection [50.29565896287595]
We apply transfer learning to exploit common datasets for sarcasm detection.
We propose a generalized latent optimization strategy that allows different losses to accommodate each other.
In particular, we achieve 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset.
arXiv Detail & Related papers (2021-04-19T13:07:52Z) - Spatial-Scale Aligned Network for Fine-Grained Recognition [42.71878867504503]
Existing approaches for fine-grained visual recognition focus on learning marginal region-based representations.
We propose the spatial-scale aligned network (SSANET) and implicitly address misalignments during the recognition process.
arXiv Detail & Related papers (2020-01-05T11:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.