TeST: Test-time Self-Training under Distribution Shift
- URL: http://arxiv.org/abs/2209.11459v1
- Date: Fri, 23 Sep 2022 07:47:33 GMT
- Title: TeST: Test-time Self-Training under Distribution Shift
- Authors: Samarth Sinha, Peter Gehler, Francesco Locatello, Bernt Schiele
- Abstract summary: Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
- Score: 99.68465267994783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite their recent success, deep neural networks continue to perform poorly
when they encounter distribution shifts at test time. Many recently proposed
approaches try to counter this by aligning the model to the new distribution
prior to inference. With no labels available this requires unsupervised
objectives to adapt the model on the observed test data. In this paper, we
propose Test-Time Self-Training (TeST): a technique that takes as input a model
trained on some source data and a novel data distribution at test time, and
learns invariant and robust representations using a student-teacher framework.
We find that models adapted using TeST significantly improve over baseline
test-time adaptation algorithms. TeST achieves performance competitive with
modern domain adaptation algorithms while having access to 5-10x less data at
the time of adaptation. We thoroughly evaluate a variety of baselines on two
tasks, object detection and image segmentation, and find that TeST sets a new
state of the art for test-time domain adaptation algorithms.
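The student-teacher recipe the abstract describes can be made concrete with a short sketch. Below is a minimal, generic test-time self-training loop in PyTorch: an EMA teacher pseudo-labels each unlabeled test batch, and the student trains on the confident pseudo-labels. The confidence threshold, EMA decay, and plain cross-entropy loss are illustrative assumptions, not the paper's exact objective (which also targets invariant and robust representations).

```python
import copy

import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Teacher weights track an exponential moving average of the student.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)


def adapt_on_test_batch(student, teacher, optimizer, x, threshold=0.9):
    # One self-training step on an unlabeled test batch x.
    with torch.no_grad():
        probs = F.softmax(teacher(x), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold            # keep only confident pseudo-labels
    logits = student(x)
    if mask.any():
        loss = F.cross_entropy(logits[mask], pseudo[mask])
    else:
        loss = logits.sum() * 0.0           # nothing confident: no-op step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return float(loss)


# Both networks start from the source-trained model, e.g.:
#   student = copy.deepcopy(source_model)
#   teacher = copy.deepcopy(source_model).eval()
#   optimizer = torch.optim.SGD(student.parameters(), lr=1e-3)
```

Keeping the teacher as a slow EMA of the student is what stabilizes the pseudo-labels while the student drifts toward the new distribution.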
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapters (TDA) are a promising approach to adapting vision-language models under distribution shift.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Source-Free Test-Time Adaptation For Online Surface-Defect Detection [29.69030283193086]
We propose a novel test-time adaptation approach for surface-defect detection.
It adapts pre-trained models to new domains and classes during inference.
Experiments demonstrate it outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2024-08-18T14:24:05Z)
- Test-Time Adaptation with Perturbation Consistency Learning [32.58879780726279]
We propose a simple test-time adaptation method that encourages the model to make stable predictions for samples under distribution shift.
Our method achieves higher or comparable performance with less inference time across strong PLM backbones.
arXiv Detail & Related papers (2023-04-25T12:29:22Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- A Probabilistic Framework for Lifelong Test-Time Adaptation [34.07074915005366]
Test-time adaptation (TTA) is the problem of updating a pre-trained source model at inference time given test input(s) from a different target domain.
We present PETAL (Probabilistic lifElong Test-time Adaptation with seLf-training prior), which solves lifelong TTA using a probabilistic approach.
Our method achieves better results than the current state-of-the-art for online lifelong test-time adaptation across various benchmarks.
arXiv Detail & Related papers (2022-12-19T18:42:19Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer that keeps important model parameters from changing drastically; a minimal sketch of this style of regularizer follows the list.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
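To ground the anti-forgetting idea in the last entry, here is a hedged sketch of an EWC-style diagonal-Fisher penalty that can be added to any test-time adaptation loss. The estimation procedure (squared gradients of a supervised loss on a few source batches before deployment) and the weight `lam` are assumptions for illustration; the cited paper's exact recipe may differ.

```python
import torch


def estimate_diagonal_fisher(model, loader, loss_fn, n_batches=10):
    # Diagonal Fisher approximation: mean squared gradient per parameter.
    # Assumption: estimated on a few labeled source batches before deployment.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
              if p.requires_grad}
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / n_batches
    model.zero_grad()
    return fisher


def fisher_penalty(model, anchor, fisher, lam=1.0):
    # Penalize drift of parameters the Fisher marks as important.
    # anchor: {name: detached source-model parameter tensor}.
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - anchor[n]) ** 2).sum()
    return lam * penalty


# Usage sketch:
#   fisher = estimate_diagonal_fisher(model, source_loader,
#                                     torch.nn.functional.cross_entropy)
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   ... at test time: loss = unsupervised_loss + fisher_penalty(model, anchor, fisher)
```

During adaptation the penalty is simply added to the unsupervised objective, so parameters with large Fisher values stay anchored to their source values while the rest remain free to adapt.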