Energy-Based Test Sample Adaptation for Domain Generalization
- URL: http://arxiv.org/abs/2302.11215v1
- Date: Wed, 22 Feb 2023 08:55:09 GMT
- Title: Energy-Based Test Sample Adaptation for Domain Generalization
- Authors: Zehao Xiao, Xiantong Zhen, Shengcai Liao, Cees G. M. Snoek
- Abstract summary: We propose energy-based sample adaptation at test time for domain.
To adapt target samples to source distributions, we iteratively update the samples by energy minimization.
Experiments on six benchmarks for classification of images and microblog threads demonstrate the effectiveness of our proposal.
- Score: 81.04943285281072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose energy-based sample adaptation at test time for
domain generalization. Where previous works adapt their models to target
domains, we adapt the unseen target samples to source-trained models. To this
end, we design a discriminative energy-based model, which is trained on source
domains to jointly model the conditional distribution for classification and
data distribution for sample adaptation. The model is optimized to
simultaneously learn a classifier and an energy function. To adapt target
samples to source distributions, we iteratively update the samples by energy
minimization with stochastic gradient Langevin dynamics. Moreover, to preserve
the categorical information in the sample during adaptation, we introduce a
categorical latent variable into the energy-based model. The latent variable is
learned from the original sample before adaptation by variational inference and
fixed as a condition to guide the sample update. Experiments on six benchmarks
for classification of images and microblog threads demonstrate the
effectiveness of our proposal.
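Read literally, the adaptation step described in the abstract is an iterative Langevin update on the test input: the source-trained model stays frozen, a categorical latent code is inferred once from the original sample, and the sample is then moved downhill in energy. The PyTorch sketch below illustrates only that update rule under those assumptions; `energy_fn`, `infer_latent`, the step size, and the noise scale are hypothetical stand-ins, not the authors' released implementation.

```python
import torch

def adapt_sample(x, energy_fn, infer_latent, steps=20, step_size=0.01, noise_std=0.005):
    """Move a test sample toward the source distribution by energy
    minimization with stochastic gradient Langevin dynamics (SGLD).

    energy_fn(x, z): per-sample energy; lower means closer to the source
        distribution (hypothetical interface).
    infer_latent(x): categorical latent code inferred from the original
        sample, e.g. by a variational encoder (hypothetical interface).
    """
    z = infer_latent(x)                       # inferred once, then held fixed
    x_adapt = x.clone().detach()
    for _ in range(steps):
        x_adapt.requires_grad_(True)
        energy = energy_fn(x_adapt, z).sum()  # sum over the batch for a scalar
        grad = torch.autograd.grad(energy, x_adapt)[0]
        # Langevin step: gradient descent on the energy plus Gaussian noise.
        x_adapt = (x_adapt.detach()
                   - step_size * grad
                   + noise_std * torch.randn_like(x_adapt))
    return x_adapt.detach()

# Usage (illustrative): adapt the unseen target batch, then classify it
# with the frozen source-trained classifier.
# x_adapted = adapt_sample(x_target, energy_fn, infer_latent)
# logits = classifier(x_adapted)
```

Keeping the latent code fixed during the updates is what the abstract describes as preserving the categorical information of the sample while it is pulled toward the source distributions.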
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapter (TDA) is a promising approach to adapting vision-language models at test time.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on quantized 8-bit ViT, outperforms gradient-based TENT on full-precision 32-bit ViT, and achieves up to a 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z)
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Generating High Fidelity Synthetic Data via Coreset selection and Entropic Regularization [15.866662428675054]
We propose using a combination of coreset selection methods and entropic regularization to select the highest fidelity samples.
In a semi-supervised learning scenario, we show that augmenting the labeled data set with our selected subset of samples leads to a greater improvement in accuracy.
arXiv Detail & Related papers (2023-01-31T22:59:41Z)
- Predicting Out-of-Domain Generalization with Neighborhood Invariance [59.05399533508682]
We propose a measure of a classifier's output invariance in a local transformation neighborhood.
Our measure is simple to calculate, does not depend on the test point's true label, and can be applied even in out-of-domain (OOD) settings (see the sketch after this list).
In experiments on benchmarks in image classification, sentiment analysis, and natural language inference, we demonstrate a strong and robust correlation between our measure and actual OOD generalization.
arXiv Detail & Related papers (2022-07-05T14:55:16Z)
- Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
- Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent [32.906658998929394]
We focus on the problem of domain adaptation when the goal is shifting the model towards the target distribution.
We propose GIFT, a method that creates virtual samples from intermediate distributions by interpolating representations of examples from source and target domains.
arXiv Detail & Related papers (2021-06-10T22:47:06Z)
- Exponential Tilting of Generative Models: Improving Sample Quality by Training and Sampling from Latent Energy [6.767885381740952]
Our method constructs an energy function on the latent variable space that yields an energy function on samples produced by the pre-trained generative model.
We show that using our proposed method, we can greatly improve the sample quality of popular likelihood based generative models.
arXiv Detail & Related papers (2020-06-15T02:58:43Z)
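As a concrete reading of the neighborhood-invariance entry above (the sketch referenced there), the snippet below scores a classifier by how often its prediction on a test point agrees with its predictions on randomly transformed copies of that point, without using any labels. The transform set, the agreement statistic, and all function names are illustrative assumptions, not the paper's exact measure.

```python
import torch

def neighborhood_invariance(model, x, transform, k=8):
    """Label-free invariance score: mean agreement between the model's
    prediction on x and its predictions on k perturbed copies of x.

    model(x):     returns logits of shape (batch, num_classes)
    transform(x): returns a randomly perturbed copy of x (hypothetical,
                  e.g. random crops/flips for images, word dropout for text)
    """
    model.eval()
    with torch.no_grad():
        base_pred = model(x).argmax(dim=-1)            # predictions on the original points
        agreement = []
        for _ in range(k):
            pred = model(transform(x)).argmax(dim=-1)  # predictions in the neighborhood
            agreement.append((pred == base_pred).float().mean())
    # Higher values indicate more invariant predictions; the entry above reports
    # that such scores correlate with out-of-domain generalization.
    return torch.stack(agreement).mean().item()
```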