AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation
- URL: http://arxiv.org/abs/2304.12566v2
- Date: Wed, 10 May 2023 01:12:05 GMT
- Title: AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation
- Authors: Yi-Fan Zhang, Xue Wang, Kexin Jin, Kun Yuan, Zhang Zhang, Liang Wang,
Rong Jin, Tieniu Tan
- Abstract summary: Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC).
- Score: 64.9230895853942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many recent machine learning tasks focus on developing models that can
generalize to unseen distributions. Domain generalization (DG) has become one
of the key topics in various fields. Several studies show that DG can be
arbitrarily hard without exploiting target domain information. To address this
issue, test-time adaptive (TTA) methods are proposed. Existing TTA methods
require offline target data or extra sophisticated optimization procedures
during the inference stage. In this work, we adopt Non-Parametric Classifier to
perform the test-time Adaptation (AdaNPC). In particular, we construct a memory
that contains the feature and label pairs from training domains. During
inference, given a test instance, AdaNPC first recalls the K closest samples from
the memory to vote for the prediction, and then the test feature and predicted
label are added to the memory. In this way, the sample distribution in the
memory can be gradually changed from the training distribution towards the test
distribution with very little extra computation cost. We theoretically justify
the rationality behind the proposed method. In addition, we evaluate our model
through extensive numerical experiments. AdaNPC significantly outperforms competitive
baselines on various DG benchmarks. In particular, when the adaptation target
is a series of domains, the adaptation accuracy of AdaNPC is 50% higher than
advanced TTA methods. The code is available at
https://github.com/yfzhang114/AdaNPC.
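The abstract's procedure can be sketched in a few lines: a memory of (feature, label) pairs, a K-nearest-neighbor vote, and a write-back of the test feature with its predicted label. This is a minimal illustration based only on the abstract above; the class name, the Euclidean distance, and the default k are assumptions, not the paper's exact design (see the linked repository for the real implementation).

```python
# Sketch of an AdaNPC-style non-parametric test-time classifier.
# Assumptions (not from the paper): Euclidean distance, majority vote,
# and unconditional write-back of every prediction.
import math
from collections import Counter

class AdaNPCMemory:
    def __init__(self, features, labels, k=5):
        # Memory initialized with (feature, label) pairs from training domains.
        self.memory = [(list(f), y) for f, y in zip(features, labels)]
        self.k = k

    def predict(self, x):
        # Recall the K closest stored samples and let them vote.
        ranked = sorted(self.memory, key=lambda item: math.dist(item[0], x))
        votes = Counter(label for _, label in ranked[: self.k])
        pred = votes.most_common(1)[0][0]
        # Add the test feature and predicted label, so the memory gradually
        # drifts from the training distribution toward the test distribution.
        self.memory.append((list(x), pred))
        return pred
```

For example, a memory seeded with two clusters classifies a point near the second cluster as class 1 and then stores that point, growing the memory by one entry per prediction.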
Related papers
- BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping [64.8477128397529]
We propose a test-time adaptation framework that bridges training-required and training-free methods.
We maintain a light-weight key-value memory for feature retrieval from instance-agnostic historical samples and instance-aware boosting samples.
We theoretically justify the rationality behind our method and empirically verify its effectiveness on both the out-of-distribution and the cross-domain datasets.
arXiv Detail & Related papers (2024-10-20T15:58:43Z) - STAMP: Outlier-Aware Test-Time Adaptation with Stable Memory Replay [76.06127233986663]
Test-time adaptation (TTA) aims to address the distribution shift between the training and test data with only unlabeled data at test time.
This paper addresses the problem of performing both sample recognition and outlier rejection during inference when outliers are present.
We propose a new approach called STAble Memory rePlay (STAMP), which performs optimization over a stable memory bank instead of the risky mini-batch.
arXiv Detail & Related papers (2024-07-22T16:25:41Z) - Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - SITA: Single Image Test-time Adaptation [48.789568233682296]
In Test-time Adaptation (TTA), given a model trained on some source data, the goal is to adapt it to make better predictions for test instances from a different distribution.
We consider TTA in a more pragmatic setting which we refer to as SITA (Single Image Test-time Adaptation)
Here, when making each prediction, the model has access only to the given single test instance, rather than a batch of instances.
We propose a novel approach, AugBN, for the SITA setting that requires only forward propagation.
arXiv Detail & Related papers (2021-12-04T15:01:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.