RestNet: Boosting Cross-Domain Few-Shot Segmentation with Residual
Transformation Network
- URL: http://arxiv.org/abs/2308.13469v2
- Date: Thu, 14 Sep 2023 01:13:21 GMT
- Title: RestNet: Boosting Cross-Domain Few-Shot Segmentation with Residual
Transformation Network
- Authors: Xinyang Huang, Chuang Zhu, Wenkai Chen
- Abstract summary: Cross-domain few-shot segmentation (CD-FSS) aims to achieve semantic segmentation in previously unseen domains with a limited number of annotated samples.
We propose a novel residual transformation network (RestNet) that facilitates knowledge transfer while retaining the intra-domain support-query feature information.
- Score: 4.232614032390374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-domain few-shot segmentation (CD-FSS) aims to achieve semantic
segmentation in previously unseen domains with a limited number of annotated
samples. Although existing CD-FSS models focus on cross-domain feature
transformation, relying exclusively on inter-domain knowledge transfer may lead
to the loss of critical intra-domain information. To this end, we propose a
novel residual transformation network (RestNet) that facilitates knowledge
transfer while retaining the intra-domain support-query feature information.
Specifically, we propose a Semantic Enhanced Anchor Transform (SEAT) module
that maps features to a stable domain-agnostic space using advanced semantics.
Additionally, an Intra-domain Residual Enhancement (IRE) module is designed to
maintain the intra-domain representation of the original discriminant space in
the new space. We also propose a mask prediction strategy based on prototype
fusion to help the model gradually learn how to segment. Our RestNet can
transfer cross-domain knowledge from both inter-domain and intra-domain without
requiring additional fine-tuning. Extensive experiments on ISIC, Chest X-ray,
and FSS-1000 show that our RestNet achieves state-of-the-art performance. Our
code will be available soon.
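The paper's code is not yet released, so the exact prototype-fusion strategy cannot be reproduced here. As a rough illustration of the foundation it builds on, below is a minimal NumPy sketch of prototype-based mask prediction as commonly used in few-shot segmentation: a foreground prototype is extracted from the support features by masked average pooling, and the query mask is predicted by cosine similarity to that prototype. All function names and the threshold are hypothetical, not from the paper.

```python
import numpy as np

def masked_average_pooling(feat, mask):
    """Foreground prototype from support features.

    feat: (C, H, W) support feature map; mask: (H, W) binary foreground mask.
    Returns a (C,) prototype vector averaged over foreground pixels.
    """
    weights = mask / (mask.sum() + 1e-8)
    return (feat * weights[None]).reshape(feat.shape[0], -1).sum(axis=1)

def cosine_similarity_map(feat, proto):
    """Per-pixel cosine similarity between query features and a prototype.

    feat: (C, H, W) query feature map; proto: (C,). Returns an (H, W) map.
    """
    f = feat.reshape(feat.shape[0], -1)                            # (C, H*W)
    f_norm = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    p_norm = proto / (np.linalg.norm(proto) + 1e-8)
    sim = p_norm @ f_norm                                          # (H*W,)
    return sim.reshape(feat.shape[1:])

def predict_mask(support_feat, support_mask, query_feat, threshold=0.5):
    """Predict a binary query mask by thresholding prototype similarity."""
    proto = masked_average_pooling(support_feat, support_mask)
    sim = cosine_similarity_map(query_feat, proto)
    return (sim > threshold).astype(np.uint8)
```

RestNet's contribution, per the abstract, is what happens around this baseline: SEAT maps the features into a domain-agnostic space before matching, and IRE re-injects the intra-domain structure that the transformation would otherwise discard.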
Related papers
- APSeg: Auto-Prompt Network for Cross-Domain Few-Shot Semantic Segmentation [33.90244697752314]
We introduce APSeg, a novel auto-prompt network for cross-domain few-shot semantic segmentation (CD-FSS)
Our model outperforms the state-of-the-art CD-FSS method by 5.24% and 3.10% in average accuracy on 1-shot and 5-shot settings, respectively.
arXiv Detail & Related papers (2024-06-12T16:20:58Z)
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation [40.667166043101076]
We propose a small adapter for rectifying diverse target domain styles to the source domain.
The adapter is trained to rectify the image features from diverse synthesized target domains to align with the source domain.
Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
arXiv Detail & Related papers (2024-04-16T07:07:40Z)
- Joint Identifiability of Cross-Domain Recommendation via Hierarchical Subspace Disentanglement [19.29182848154183]
Cross-Domain Recommendation (CDR) seeks to enable effective knowledge transfer across domains.
While CDR describes user representations as a joint distribution over two domains, these methods fail to account for its joint identifiability.
We propose a Hierarchical subspace disentanglement approach to explore the Joint IDentifiability of cross-domain joint distribution.
arXiv Detail & Related papers (2024-04-06T03:11:31Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention, which transfers knowledge from a label-rich source domain to a related but unlabeled target domain.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Amplitude Spectrum Transformation for Open Compound Domain Adaptive Semantic Segmentation [62.68759523116924]
Open compound domain adaptation (OCDA) has emerged as a practical adaptation setting.
We propose a novel feature space Amplitude Spectrum Transformation (AST).
arXiv Detail & Related papers (2022-02-09T05:40:34Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques that adapt semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Spatial Attention Pyramid Network for Unsupervised Domain Adaptation [66.75008386980869]
Unsupervised domain adaptation is critical in various computer vision tasks.
We design a new spatial attention pyramid network for unsupervised domain adaptation.
Our method performs favorably against the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-29T09:03:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.