Test-Time Adaptation via Self-Training with Nearest Neighbor Information
- URL: http://arxiv.org/abs/2207.10792v1
- Date: Fri, 8 Jul 2022 05:02:15 GMT
- Title: Test-Time Adaptation via Self-Training with Nearest Neighbor Information
- Authors: Minguk Jang, Sae-Young Chung
- Abstract summary: Adapting trained classifiers using only online test data is important.
One of the popular approaches for test-time adaptation is self-training.
We propose a novel test-time adaptation method, Test-time Adaptation via Self-Training with nearest neighbor information (TAST).
- Score: 16.346069386394703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adapting trained classifiers using only online test data is important since
it is difficult to access training data or future test data during test time.
One of the popular approaches for test-time adaptation is self-training, which
fine-tunes the trained classifiers using the classifier predictions of the test
data as pseudo labels. However, under the test-time domain shift, self-training
methods have a limitation that learning with inaccurate pseudo labels greatly
degrades the performance of the adapted classifiers. To overcome this
limitation, we propose a novel test-time adaptation method, Test-time
Adaptation via Self-Training with nearest neighbor information (TAST). Based on
the idea that a test sample and its nearest neighbors in the embedding space of
the trained classifier are likely to share the same label, we adapt the trained
classifier in two steps: (1) generate a pseudo label for the test sample using
its nearest neighbors from a set of previous test data, and (2) fine-tune the
trained classifier with that pseudo label. Our
experiments on two standard benchmarks, i.e., domain generalization and image
corruption benchmarks, show that TAST outperforms the current state-of-the-art
test-time adaptation methods.
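The two-step procedure in the abstract can be sketched in code. This is a minimal, hypothetical illustration of step (1) only: pseudo-labeling a new test sample by majority vote over its nearest neighbors among previously seen test data. All names (`nn_pseudo_label`, `support_set`, the toy embeddings) are illustrative assumptions, not the paper's actual implementation, which additionally fine-tunes the classifier with the resulting pseudo labels (step 2).

```python
# Hedged sketch of TAST-style nearest-neighbor pseudo-labeling (step 1).
# The support set holds (embedding, predicted_label) pairs from previous
# test data; a new sample is labeled by majority vote over its k nearest
# neighbors in embedding space. Step 2 (fine-tuning with this pseudo
# label) is omitted here.
from collections import Counter

def euclidean(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nn_pseudo_label(embedding, support_set, k=3):
    """Pseudo label for `embedding` via majority vote over the k nearest
    entries of `support_set` (list of (embedding, label) pairs)."""
    neighbors = sorted(support_set, key=lambda s: euclidean(embedding, s[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy usage: 2-D embeddings of earlier test samples with their classifier
# predictions serving as support-set labels.
support = [([0.0, 0.0], 0), ([0.1, 0.2], 0), ([1.0, 1.0], 1), ([0.9, 1.1], 1)]
print(nn_pseudo_label([0.05, 0.1], support, k=3))  # → 0
```

In the paper's setting the support set is built online from test data alone, which is what makes the pseudo labels more reliable than raw classifier predictions under domain shift.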
Related papers
- Test-Time Adaptation with Binary Feedback [50.20923012663613]
BiTTA is a novel dual-path optimization framework that balances binary feedback-guided adaptation on uncertain samples with agreement-based self-adaptation on confident predictions. Experiments show BiTTA achieves 13.3%p accuracy improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2025-05-24T05:24:10Z) - TestNUC: Enhancing Test-Time Computing Approaches through Neighboring Unlabeled Data Consistency [42.81348222668079]
TestNUC improves test-time predictions by leveraging the local consistency of neighboring unlabeled data.
TestNUC can be seamlessly integrated with existing test-time computing approaches.
arXiv Detail & Related papers (2025-02-26T14:17:56Z) - Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification [0.0]
Few-shot learning benchmarks are critical for evaluating modern NLP techniques.
It is possible, however, that benchmarks favor methods which easily make use of unlabeled text.
We run experiments to quantify the bias caused by pretraining on unlabeled test set text.
arXiv Detail & Related papers (2024-09-30T19:32:10Z) - STAMP: Outlier-Aware Test-Time Adaptation with Stable Memory Replay [76.06127233986663]
Test-time adaptation (TTA) aims to address the distribution shift between the training and test data with only unlabeled data at test time.
This paper addresses the problem of performing both sample recognition and outlier rejection during inference when outliers exist.
We propose a new approach called STAble Memory rePlay (STAMP), which performs optimization over a stable memory bank instead of the risky mini-batch.
arXiv Detail & Related papers (2024-07-22T16:25:41Z) - Efficient Test-Time Adaptation of Vision-Language Models [58.3646257833533]
Test-time adaptation with pre-trained vision-language models has attracted increasing attention for tackling distribution shifts during the test time.
We design TDA, a training-free dynamic adapter that enables effective and efficient test-time adaptation with vision-language models.
arXiv Detail & Related papers (2024-03-27T06:37:51Z) - A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z) - Feature Alignment and Uniformity for Test Time Adaptation [8.209137567840811]
Test time adaptation aims to adapt deep neural networks when receiving out-of-distribution test domain samples.
In this setting, the model can only access online unlabeled test samples and pre-trained models on the training domains.
arXiv Detail & Related papers (2023-03-20T06:44:49Z) - TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - Robustness to Spurious Correlations in Text Classification via Automatically Generated Counterfactuals [8.827892752465958]
We propose to train a robust text classifier by augmenting the training data with automatically generated counterfactual data.
We show that the robust classifier makes meaningful and trustworthy predictions by emphasizing causal features and de-emphasizing non-causal features.
arXiv Detail & Related papers (2020-12-18T03:57:32Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.