Few-Shot Domain Adaptation with Polymorphic Transformers
- URL: http://arxiv.org/abs/2107.04805v1
- Date: Sat, 10 Jul 2021 10:08:57 GMT
- Title: Few-Shot Domain Adaptation with Polymorphic Transformers
- Authors: Shaohua Li, Xiuchao Sui, Jie Fu, Huazhu Fu, Xiangde Luo, Yangqin Feng,
Xinxing Xu, Yong Liu, Daniel Ting, Rick Siow Mong Goh
- Abstract summary: Deep neural networks (DNNs) trained on one set of medical images often experience a severe performance drop on unseen test images.
Few-shot domain adaptation, i.e., adapting a trained model with a handful of annotations, is highly practical and useful in this case.
We propose a Polymorphic Transformer (Polyformer) which can be incorporated into any DNN backbone for few-shot domain adaptation.
- Score: 50.128636842853155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) trained on one set of medical images often
experience a severe performance drop on unseen test images, due to domain
discrepancies between the training images (source domain) and the test images
(target domain); this raises a domain adaptation problem. In clinical settings,
it is difficult to collect enough annotated target domain data in a short
period. Few-shot domain adaptation, i.e., adapting a trained model with a
handful of annotations, is highly practical and useful in this case. In this
paper, we propose a Polymorphic Transformer (Polyformer), which can be
incorporated into any DNN backbone for few-shot domain adaptation.
Specifically, after the polyformer layer is inserted into a model trained on
the source domain, it extracts a set of prototype embeddings, which can be
viewed as a "basis" of the source-domain features. On the target domain, the
polyformer layer adapts by only updating a projection layer which controls the
interactions between image features and the prototype embeddings. All other
model weights (except BatchNorm parameters) are frozen during adaptation. Thus,
the chance of overfitting the annotations is greatly reduced, and the model can
perform robustly on the target domain after being trained on a few annotated
images. We demonstrate the effectiveness of Polyformer on two medical
segmentation tasks (i.e., optic disc/cup segmentation, and polyp segmentation).
The source code of Polyformer is released at
https://github.com/askerlee/segtran.
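The adaptation mechanism described above can be sketched in a few lines. This is a hypothetical minimal illustration, not the authors' implementation: the names `polyformer_layer`, `prototypes`, and `W_proj` are assumptions. Image features attend to a small, frozen set of prototype embeddings (the "basis" of source-domain features), and on the target domain only the projection matrix `W_proj` would be updated.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def polyformer_layer(features, prototypes, W_proj):
    """Illustrative prototype attention.

    features:   (n, d) image features from the frozen backbone
    prototypes: (k, d) prototype embeddings, frozen after source training
    W_proj:     (d, d) projection -- the only weights updated on the target domain
    """
    queries = features @ W_proj             # project features before attention
    attn = softmax(queries @ prototypes.T)  # (n, k) feature-prototype affinities
    return attn @ prototypes                # re-express features in the prototype basis

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
protos = rng.normal(size=(4, 8))
W = np.eye(8)                               # adaptation would update only W
out = polyformer_layer(feats, protos, W)
print(out.shape)                            # (5, 8)
```

Because the output is always a convex combination of the frozen prototypes, updating only the small projection matrix constrains how far the adapted features can drift, which is consistent with the paper's motivation for reduced overfitting on a handful of annotated target images.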
Related papers
- Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z) - Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g. synthetic) and point clouds from a target domain (e.g. real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z) - Zero-shot Generative Model Adaptation via Image-specific Prompt Learning [41.344908073632986]
CLIP-guided image synthesis has shown appealing performance on adapting a pre-trained source-domain generator to an unseen target domain.
We propose an Image-specific Prompt Learning (IPL) method, which learns specific prompt vectors for each source-domain image.
IPL effectively improves the quality and diversity of synthesized images and alleviates the mode collapse.
arXiv Detail & Related papers (2023-04-06T14:48:13Z) - DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains [26.95350186287616]
Few-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images.
We propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains.
arXiv Detail & Related papers (2022-11-26T12:46:40Z) - Feather-Light Fourier Domain Adaptation in Magnetic Resonance Imaging [2.024988885579277]
Generalizability of deep learning models may be severely affected by the difference in the distributions of the train (source domain) and the test (target domain) sets.
We propose a very light and transparent approach to perform test-time domain adaptation.
arXiv Detail & Related papers (2022-07-31T17:28:42Z) - Polymorphic-GAN: Generating Aligned Samples across Multiple Domains with Learned Morph Maps [94.10535575563092]
We introduce a generative adversarial network that can simultaneously generate aligned image samples from multiple related domains.
We propose Polymorphic-GAN which learns shared features across all domains and a per-domain morph layer to morph shared features according to each domain.
arXiv Detail & Related papers (2022-06-06T21:03:02Z) - PIT: Position-Invariant Transform for Cross-FoV Domain Adaptation [53.428312630479816]
We observe that the Field of View (FoV) gap induces noticeable instance appearance differences between the source and target domains.
Motivated by the observations, we propose the Position-Invariant Transform (PIT) to better align images in different domains.
arXiv Detail & Related papers (2021-08-16T15:16:47Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Self domain adapted network [6.040230864736051]
Domain shift is a major problem for deploying deep networks in clinical practice.
We propose a novel self domain adapted network (SDA-Net) that can rapidly adapt itself to a single test subject.
arXiv Detail & Related papers (2020-07-07T01:41:34Z) - Source-Relaxed Domain Adaptation for Image Segmentation [22.28746775804126]
Domain adaptation (DA) has drawn high interests for its capacity to adapt a model trained on labeled source data to perform well on unlabeled or weakly labeled target data.
Most common DA techniques require concurrent access to the input images of both the source and target domains.
We propose a novel formulation for adapting segmentation networks, which relaxes such a constraint.
arXiv Detail & Related papers (2020-05-07T18:46:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.