Learning to Generalize across Domains on Single Test Samples
- URL: http://arxiv.org/abs/2202.08045v1
- Date: Wed, 16 Feb 2022 13:21:04 GMT
- Title: Learning to Generalize across Domains on Single Test Samples
- Authors: Zehao Xiao, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
- Abstract summary: We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
- Score: 126.9447368941314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We strive to learn a model from a set of source domains that generalizes well
to unseen target domains. The main challenge in such a domain generalization
scenario is the unavailability of any target domain data during training,
resulting in the learned model not being explicitly adapted to the unseen
target domains. We propose learning to generalize across domains on single test
samples. We leverage a meta-learning paradigm that trains the model at training
time to acquire the ability to adapt to single samples, so that at test time it
can further adapt itself to each single test sample. We formulate the
adaptation to a single test sample as a variational Bayesian inference problem
that conditions the generation of model parameters on the test sample. The
adaptation to each test sample requires
only one feed-forward computation at test time without any fine-tuning or
self-supervised training on additional data from the unseen domains. Extensive
ablation studies demonstrate that our model learns the ability to adapt to
each single sample by mimicking domain shifts during training. Further, our
model achieves at least comparable -- and often better -- performance than
state-of-the-art methods on multiple benchmarks for domain generalization.
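As a concrete picture of this mechanism, the sketch below conditions a distribution over classifier weights on a single sample's features and adapts with one feed-forward pass; module names, dimensions, and the standard-normal prior are illustrative assumptions, not the authors' released code.

```python
# Sketch of test-sample-conditioned parameter generation (hypothetical
# module names; the paper's architecture and prior may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT, CLASSES = 64, 10

backbone = nn.Sequential(nn.Linear(128, FEAT), nn.ReLU())  # shared feature extractor

class ParamGenerator(nn.Module):
    """Variational module: maps one test feature to q(theta | x_test),
    a distribution over classifier weights."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Linear(FEAT, FEAT * CLASSES)
        self.logvar = nn.Linear(FEAT, FEAT * CLASSES)

    def forward(self, feat):
        mu, logvar = self.mu(feat), self.logvar(feat)
        theta = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return theta.view(CLASSES, FEAT), mu, logvar

generator = ParamGenerator()

def adapt_and_predict(x):
    """Single feed-forward adaptation: generate weights from x, classify x."""
    feat = backbone(x.unsqueeze(0))             # (1, FEAT)
    W, mu, logvar = generator(feat.squeeze(0))  # sample-conditioned weights
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()  # KL to N(0, I); this prior is an assumption
    return F.linear(feat, W), kl                # logits (1, CLASSES), KL penalty

# Meta-training mimics the test scenario: condition on a sample from a
# held-out source domain and require the adapted classifier to label it.
x, y = torch.randn(128), torch.tensor([3])
logits, kl = adapt_and_predict(x)
loss = F.cross_entropy(logits, y) + 1e-3 * kl
loss.backward()
```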
Related papers
- Beyond Model Adaptation at Test Time: A Survey [43.03129492126422]
Machine learning algorithms struggle when samples in the test distribution start to deviate from the ones observed during training.
Test-time adaptation combines the benefits of domain adaptation and domain generalization by training models only on source data and adapting them to target data at test time.
We provide a comprehensive and systematic review on test-time adaptation, covering more than 400 recent papers.
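As one concrete instance of the paradigm such surveys cover (an illustrative sketch in the spirit of entropy-minimization methods such as Tent, not code from the survey), a model can take a gradient step on an unlabeled test batch before predicting:

```python
# Illustrative test-time adaptation step: entropy minimization on an
# unlabeled test batch, then prediction (generic sketch, not the survey's code).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def tta_step(x_test):
    probs = model(x_test).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()                   # adapt on unlabeled test data ...
    optimizer.step()
    return model(x_test).argmax(dim=-1)  # ... then predict with adapted weights

preds = tta_step(torch.randn(16, 32))
```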
arXiv Detail & Related papers (2024-11-06T06:13:57Z)
- Point-TTA: Test-Time Adaptation for Point Cloud Registration Using Multitask Meta-Auxiliary Learning [17.980649681325406]
We present Point-TTA, a novel test-time adaptation framework for point cloud registration (PCR).
Our model can adapt to unseen distributions at test-time without requiring any prior knowledge of the test data.
During training, our model is trained using a meta-auxiliary learning approach, such that the adapted model via auxiliary tasks improves the accuracy of the primary task.
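A minimal sketch of that meta-auxiliary idea, assuming a hypothetical reconstruction task as the auxiliary objective (the architecture and task here are stand-ins, not Point-TTA's code):

```python
# Sketch of meta-auxiliary test-time adaptation: update a per-instance copy
# of the trunk on a self-supervised loss, then run the primary head.
import copy
import torch
import torch.nn as nn

encoder = nn.Linear(64, 32)   # shared trunk
primary = nn.Linear(32, 6)    # primary task head (e.g., transform parameters)
decoder = nn.Linear(32, 64)   # auxiliary head: reconstruct the input

def adapt_and_predict(x, inner_lr=1e-2):
    enc = copy.deepcopy(encoder)                    # adapt a copy per test instance
    opt = torch.optim.SGD(enc.parameters(), lr=inner_lr)
    aux_loss = (decoder(enc(x)) - x).pow(2).mean()  # self-supervised signal, no labels
    opt.zero_grad()
    aux_loss.backward()
    opt.step()
    return primary(enc(x))                          # primary task with adapted trunk

out = adapt_and_predict(torch.randn(8, 64))
```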
arXiv Detail & Related papers (2023-08-31T06:32:11Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
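A generic consistency-regularization term of this flavor penalizes disagreement between predictions on two perturbed views of the same unlabeled target sample; the sketch below is an assumption-level illustration, as the paper's exact pairing may differ:

```python
# Generic consistency regularization on unlabeled target data: predictions
# on two perturbed views of the same sample should agree (sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def consistency_loss(x):
    view1 = x + 0.05 * torch.randn_like(x)   # stand-ins for real augmentations
    view2 = x + 0.05 * torch.randn_like(x)
    log_p1 = F.log_softmax(model(view1), dim=-1)
    p2 = F.softmax(model(view2), dim=-1)
    return F.kl_div(log_p1, p2, reduction="batchmean")

loss = consistency_loss(torch.randn(16, 32))
loss.backward()
```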
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Universal Semi-supervised Model Adaptation via Collaborative Consistency Training [92.52892510093037]
We introduce a realistic and challenging domain adaptation problem called Universal Semi-supervised Model Adaptation (USMA).
We propose a collaborative consistency training framework that regularizes the prediction consistency between two models.
Experimental results demonstrate the effectiveness of our method on several benchmark datasets.
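Read minimally, collaborative consistency training regularizes two models toward agreeing on the same unlabeled batch; the symmetric-KL sketch below is a simplified illustration of that idea, not the full USMA framework:

```python
# Sketch of prediction consistency between two collaborating models on the
# same unlabeled batch (symmetric KL); a simplified reading of USMA.
import torch
import torch.nn as nn
import torch.nn.functional as F

model_a = nn.Linear(32, 10)  # e.g., a source-pretrained model
model_b = nn.Linear(32, 10)  # e.g., a model trained on labeled target data

def collaborative_consistency(x):
    pa = F.softmax(model_a(x), dim=-1)
    pb = F.softmax(model_b(x), dim=-1)
    return (F.kl_div(pa.clamp_min(1e-8).log(), pb, reduction="batchmean")
            + F.kl_div(pb.clamp_min(1e-8).log(), pa, reduction="batchmean"))

loss = collaborative_consistency(torch.randn(16, 32))
loss.backward()
```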
arXiv Detail & Related papers (2023-07-07T08:19:40Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- IDANI: Inference-time Domain Adaptation via Neuron-level Interventions [24.60778570114818]
We propose a new approach for domain adaptation (DA), using neuron-level interventions.
We modify the representation of each test example in specific neurons, resulting in a counterfactual example from the source domain.
Our experiments show that our method improves performance on unseen domains.
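The intervention can be pictured as overwriting a few domain-sensitive coordinates of a test representation with source-domain statistics; the sketch below assumes the neurons are already identified, which abstracts away IDANI's actual selection procedure:

```python
# Sketch of a neuron-level intervention: overwrite pre-identified
# domain-sensitive coordinates of a target representation with source-domain
# mean activations, yielding a source-like counterfactual representation.
import torch

def intervene(h_target, neuron_idx, source_means):
    """h_target: (batch, dim) representations of test examples.
    neuron_idx: indices of domain-sensitive neurons (assumed given).
    source_means: (dim,) mean activations measured on the source domain."""
    h = h_target.clone()
    h[:, neuron_idx] = source_means[neuron_idx]  # counterfactual, source-like
    return h

h_cf = intervene(torch.randn(4, 64), torch.tensor([3, 17, 42]), torch.zeros(64))
```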
arXiv Detail & Related papers (2022-06-01T06:39:28Z)
- Adaptive Methods for Real-World Domain Generalization [32.030688845421594]
In our work, we investigate whether it is possible to leverage domain information from unseen test samples themselves.
We propose a domain-adaptive approach consisting of two steps: a) we first learn a discriminative domain embedding from unsupervised training examples, and b) use this domain embedding as supplementary information to build a domain-adaptive model.
Our approach achieves state-of-the-art performance on various domain generalization benchmarks.
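The two steps can be sketched as pooling per-sample embeddings from unlabeled examples into a domain embedding, then feeding it to the classifier as supplementary input; module names and the mean pooling below are illustrative assumptions:

```python
# Sketch of the two-step recipe: (a) pool per-sample embeddings from
# unlabeled examples into a domain embedding, (b) condition the classifier
# on it as supplementary input.
import torch
import torch.nn as nn

domain_encoder = nn.Linear(32, 8)   # step (a): per-sample domain features
classifier = nn.Linear(32 + 8, 10)  # step (b): sample features + domain embedding

def domain_adaptive_forward(x_batch):
    d = domain_encoder(x_batch).mean(dim=0, keepdim=True)  # one embedding per domain
    d = d.expand(x_batch.size(0), -1)
    return classifier(torch.cat([x_batch, d], dim=-1))

logits = domain_adaptive_forward(torch.randn(16, 32))
```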
arXiv Detail & Related papers (2021-03-29T17:44:35Z)
- One for More: Selecting Generalizable Samples for Generalizable ReID Model [92.40951770273972]
This paper proposes a one-for-more training objective that takes the generalization ability of selected samples as a loss function.
Our proposed one-for-more based sampler can be seamlessly integrated into the ReID training framework.
arXiv Detail & Related papers (2020-12-10T06:37:09Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
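One ARM instantiation conditions predictions on a context computed from the whole unlabeled batch; the sketch below assumes that contextual variant, with hypothetical layer sizes:

```python
# Sketch of ARM's contextual variant: a context network summarizes the
# unlabeled batch and the prediction network conditions on that summary;
# both are meta-trained so this adaptation helps under domain shift.
import torch
import torch.nn as nn
import torch.nn.functional as F

context_net = nn.Linear(32, 8)
prediction_net = nn.Linear(32 + 8, 10)

def arm_forward(x_batch):
    c = context_net(x_batch).mean(dim=0, keepdim=True)  # batch-level context
    c = c.expand(x_batch.size(0), -1)
    return prediction_net(torch.cat([x_batch, c], dim=-1))

# Meta-training: sample a batch from one training domain and minimize the
# task loss, so adapting through context is learned on the training domains.
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
loss = F.cross_entropy(arm_forward(x), y)
loss.backward()
```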
arXiv Detail & Related papers (2020-07-06T17:59:30Z)