Learning Instance-Specific Adaptation for Cross-Domain Segmentation
- URL: http://arxiv.org/abs/2203.16530v1
- Date: Wed, 30 Mar 2022 17:59:45 GMT
- Title: Learning Instance-Specific Adaptation for Cross-Domain Segmentation
- Authors: Yuliang Zou, Zizhao Zhang, Chun-Liang Li, Han Zhang, Tomas Pfister,
Jia-Bin Huang
- Abstract summary: We propose a test-time adaptation method for cross-domain image segmentation.
Given a new unseen instance at test time, we adapt a pre-trained model by conducting instance-specific BatchNorm calibration.
- Score: 79.61787982393238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a test-time adaptation method for cross-domain image segmentation.
Our method is simple: Given a new unseen instance at test time, we adapt a
pre-trained model by conducting instance-specific BatchNorm (statistics)
calibration. Our approach has two core components. First, we replace the
manually designed BatchNorm calibration rule with a learnable module. Second,
we leverage strong data augmentation to simulate random domain shifts for
learning the calibration rule. In contrast to existing domain adaptation
methods, our method does not require accessing the target domain data at
training time or conducting computationally expensive test-time model
training/optimization. Equipping our method with models trained by standard
recipes achieves significant improvement, comparing favorably with several
state-of-the-art domain generalization and one-shot unsupervised domain
adaptation approaches. Combining our method with the domain generalization
methods further improves performance, reaching a new state of the art.
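The core idea, calibrating BatchNorm statistics per test instance, can be illustrated with a minimal NumPy sketch. Note this shows only the classic hand-designed mixing rule with a fixed weight `alpha`; the paper's contribution is to replace this manual rule with a learnable module trained under simulated domain shifts. The function names and the `alpha` parameter are illustrative, not the authors' code.

```python
import numpy as np

def calibrate_bn_stats(train_mean, train_var, x, alpha=0.1):
    """Blend pre-trained BatchNorm running statistics with statistics
    computed from a single test instance x of shape (C, H, W).

    alpha is a hand-set mixing weight; the paper replaces this manual
    rule with a learned calibration module."""
    inst_mean = x.mean(axis=(1, 2))
    inst_var = x.var(axis=(1, 2))
    mean = (1.0 - alpha) * train_mean + alpha * inst_mean
    var = (1.0 - alpha) * train_var + alpha * inst_var
    return mean, var

def batchnorm_forward(x, mean, var, gamma, beta, eps=1e-5):
    """Normalize each channel of x with the calibrated statistics,
    then apply the usual affine transform (gamma, beta per channel)."""
    xn = (x - mean[:, None, None]) / np.sqrt(var[:, None, None] + eps)
    return gamma[:, None, None] * xn + beta[:, None, None]
```

With `alpha=0` this reduces to standard inference (pure training statistics); with `alpha=1` it normalizes by the instance's own statistics, which is the fully instance-specific extreme.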
Related papers
- Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning [27.757318834190443]
Cross-domain few-shot learning presents a formidable challenge, as models must be trained on base classes and tested on novel classes from various domains with only a few samples at hand.
We introduce ProLAD, a novel generic framework that leverages normalization layers in adapters with Progressive Learning and Adaptive Distillation.
We deploy two strategies: progressive training of the two adapters, and an adaptive distillation technique derived from features of the model using only the adapter that lacks a normalization layer.
arXiv Detail & Related papers (2023-12-18T15:02:14Z)
- Adaptive Parametric Prototype Learning for Cross-Domain Few-Shot Classification [23.82751179819225]
We develop a novel Adaptive Parametric Prototype Learning (APPL) method under the meta-learning convention for cross-domain few-shot classification.
APPL yields superior performance to many state-of-the-art cross-domain few-shot learning methods.
arXiv Detail & Related papers (2023-09-04T03:58:50Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
- Style Mixing and Patchwise Prototypical Matching for One-Shot Unsupervised Domain Adaptive Semantic Segmentation [21.01132797297286]
In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new one-shot unsupervised domain adaptation (OSUDA) method that effectively relieves the computational burden of existing approaches.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2021-12-09T02:47:46Z)
- Adaptive Methods for Real-World Domain Generalization [32.030688845421594]
In our work, we investigate whether it is possible to leverage domain information from unseen test samples themselves.
We propose a domain-adaptive approach consisting of two steps: a) we first learn a discriminative domain embedding from unsupervised training examples, and b) use this domain embedding as supplementary information to build a domain-adaptive model.
Our approach achieves state-of-the-art performance on various domain generalization benchmarks.
arXiv Detail & Related papers (2021-03-29T17:44:35Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
- FDA: Fourier Domain Adaptation for Semantic Segmentation [82.4963423086097]
We describe a simple method for unsupervised domain adaptation, whereby the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other.
We illustrate the method in semantic segmentation, where densely annotated images are plentiful in one domain but difficult to obtain in another.
Our results indicate that even simple procedures can discount nuisance variability in the data that more sophisticated methods struggle to learn away.
arXiv Detail & Related papers (2020-04-11T22:20:48Z)
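The low-frequency amplitude swap that FDA describes can be sketched in NumPy for a single-channel image. The function name `fda_transfer` and the window-size parameter `beta` (fraction of image size swapped) are illustrative choices, not the authors' released code.

```python
import numpy as np

def fda_transfer(src, tgt, beta=0.01):
    """Replace the low-frequency amplitude spectrum of src with that of
    tgt, keeping src's phase. src, tgt: (H, W) grayscale arrays.
    beta controls the size of the swapped low-frequency window."""
    fft_src = np.fft.fft2(src)
    fft_tgt = np.fft.fft2(tgt)
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)
    # Shift so the zero-frequency (low-frequency) components sit at the center.
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)
    h, w = src.shape
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    # Swap the centered low-frequency window of the amplitude spectrum.
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_tgt[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_src = np.fft.ifftshift(amp_src)
    # Recombine the swapped amplitude with the source phase.
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(out)
```

Even at `beta=0` the DC component (global brightness) is swapped, so the output inherits the target's mean intensity while retaining the source's spatial content.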
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.