SITA: Single Image Test-time Adaptation
- URL: http://arxiv.org/abs/2112.02355v2
- Date: Wed, 8 Dec 2021 09:18:18 GMT
- Title: SITA: Single Image Test-time Adaptation
- Authors: Ansh Khurana, Sujoy Paul, Piyush Rai, Soma Biswas, Gaurav Aggarwal
- Abstract summary: In Test-time Adaptation (TTA), given a model trained on some source data, the goal is to adapt it to make better predictions for test instances from a different distribution.
We consider TTA in a more pragmatic setting which we refer to as SITA (Single Image Test-time Adaptation).
Here, when making each prediction, the model has access only to the given single test instance, rather than a batch of instances.
We propose a novel approach, AugBN, for the SITA setting that requires only forward propagation.
- Score: 48.789568233682296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Test-time Adaptation (TTA), given a model trained on some source data, the
goal is to adapt it to make better predictions for test instances from a
different distribution. Crucially, TTA assumes no access to the source data or
even any additional labeled/unlabeled samples from the target distribution to
finetune the source model. In this work, we consider TTA in a more pragmatic
setting which we refer to as SITA (Single Image Test-time Adaptation). Here,
when making each prediction, the model has access only to the given single test
instance, rather than a batch of instances, as has typically been considered in
the literature. This is motivated by realistic scenarios where inference is
needed on demand and cannot be delayed to "batch-ify" incoming requests, or
where inference happens on an edge device (such as a mobile phone) with no
scope for batching. The entire adaptation process in SITA
should be extremely fast as it happens at inference time. To address this, we
propose a novel approach AugBN for the SITA setting that requires only forward
propagation. The approach can adapt any off-the-shelf trained model to
individual test instances for both classification and segmentation tasks. AugBN
estimates normalisation statistics of the unseen test distribution from the
given test image using only one forward pass with label-preserving
transformations. Since AugBN does not involve any back-propagation, it is
significantly faster compared to other recent methods. To the best of our
knowledge, this is the first work that addresses this hard adaptation problem
using only a single test image. Despite being very simple, our framework is
able to achieve significant performance gains compared to directly applying the
source model on the target instances, as reflected in our extensive experiments
and ablation studies.
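To make the adaptation mechanism concrete, the following is a minimal PyTorch-style sketch of the single-image idea described above: a pseudo-batch of label-preserving augmentations is built from the one test image, BatchNorm statistics are re-estimated from it without any back-propagation, and the source statistics are restored after prediction. The augmentation choices, num_augments, and the mixing weight alpha are illustrative assumptions rather than the paper's exact configuration, and for readability the sketch uses one statistics-estimation pass plus one prediction pass instead of a single fused forward pass.
```python
import torch
import torch.nn as nn
import torchvision.transforms as T


def make_augmented_batch(image: torch.Tensor, num_augments: int = 16) -> torch.Tensor:
    """Stack label-preserving augmented copies of one (C, H, W) image into a pseudo-batch."""
    augment = T.Compose([
        T.RandomResizedCrop(size=list(image.shape[-2:]), scale=(0.8, 1.0)),
        T.RandomHorizontalFlip(),
        T.ColorJitter(brightness=0.2, contrast=0.2),
    ])
    return torch.stack([augment(image) for _ in range(num_augments)])


@torch.no_grad()
def predict_single_image(model: nn.Module, image: torch.Tensor,
                         num_augments: int = 16, alpha: float = 0.5) -> torch.Tensor:
    """Adapt BatchNorm statistics to one test image, predict, then restore the source model."""
    model.eval()
    saved = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):  # assumption: only 2D BN layers are adapted
            saved[name] = (m.running_mean.clone(), m.running_var.clone(), m.momentum)
            m.momentum = alpha  # new_stat = (1 - alpha) * source_stat + alpha * batch_stat
            m.train()           # only BN layers estimate/update statistics; the rest stay in eval

    # One forward pass over the augmented pseudo-batch moves the BN running statistics
    # toward the statistics of the single test instance.
    model(make_augmented_batch(image, num_augments))

    # Predict for the original image with the adapted statistics.
    for _, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()
    logits = model(image.unsqueeze(0))

    # Restore the source statistics so the next test instance starts from the source model.
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            mean, var, momentum = saved[name]
            m.running_mean.copy_(mean)
            m.running_var.copy_(var)
            m.momentum = momentum
    return logits
```
Putting only the BatchNorm layers in train mode keeps dropout and other stochastic layers in inference mode while the normalisation statistics are pulled toward the test instance.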
Related papers
- AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform Test-time Adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z)
- Robust Test-Time Adaptation in Dynamic Scenarios [9.475271284789969]
Test-time adaptation (TTA) aims to adapt the pretrained model to test distributions using only unlabeled test data streams.
We develop a Robust Test-Time Adaptation (RoTTA) method to handle the complex data streams of practical TTA (PTTA).
Our method is easy to implement, making it a good choice for rapid deployment.
arXiv Detail & Related papers (2023-03-24T10:19:14Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates an i.i.d. data stream from a non-i.i.d. stream in a class-balanced manner (a simplified sketch of the PBRS memory idea appears after this list).
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)
- On-the-Fly Test-time Adaptation for Medical Image Segmentation [63.476899335138164]
Adapting the source model to the target data distribution at test time is an efficient solution to the data-shift problem.
We propose a new framework called Adaptive UNet where each convolutional block is equipped with an adaptive batch normalization layer.
During test time, the model takes in just the new test image and generates a domain code to adapt the features of the source model to the test data.
arXiv Detail & Related papers (2022-03-10T18:51:29Z)
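The Robust Continual Test-time Adaptation entry above mentions a prediction-balanced memory that rebuilds an approximately i.i.d., class-balanced buffer from a non-i.i.d. test stream. Below is a simplified, hedged sketch of such a memory; the capacity, the eviction rule, and the names are assumptions for illustration, not the exact PBRS algorithm from that paper.
```python
import random
from collections import defaultdict


class PredictionBalancedMemory:
    """Keep a small, class-balanced buffer of test samples keyed by their predicted labels."""

    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self.per_class = defaultdict(list)   # predicted label -> stored samples
        self.seen_per_class = defaultdict(int)

    def __len__(self) -> int:
        return sum(len(samples) for samples in self.per_class.values())

    def add(self, sample, predicted_label) -> None:
        self.seen_per_class[predicted_label] += 1
        if len(self) < self.capacity:
            self.per_class[predicted_label].append(sample)
            return
        # Memory is full: find the currently most-populated (majority) predicted class.
        majority = max(self.per_class, key=lambda c: len(self.per_class[c]))
        if predicted_label == majority:
            # Within the majority class, fall back to plain reservoir sampling.
            n_seen = self.seen_per_class[predicted_label]
            slot = random.randrange(n_seen)
            if slot < len(self.per_class[predicted_label]):
                self.per_class[predicted_label][slot] = sample
        else:
            # Evict a random sample from the majority class to re-balance the memory.
            victim = random.randrange(len(self.per_class[majority]))
            self.per_class[majority].pop(victim)
            self.per_class[predicted_label].append(sample)

    def batch(self):
        """Return all stored samples as one flat list."""
        return [s for samples in self.per_class.values() for s in samples]
```
In a TTA loop, each incoming test sample would be added under its predicted label, and the buffer returned by batch() could then be used to re-estimate normalisation statistics or to drive a robust model update.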
This list is automatically generated from the titles and abstracts of the papers on this site.