CAFA: Class-Aware Feature Alignment for Test-Time Adaptation
- URL: http://arxiv.org/abs/2206.00205v3
- Date: Mon, 4 Sep 2023 02:55:32 GMT
- Title: CAFA: Class-Aware Feature Alignment for Test-Time Adaptation
- Authors: Sanghun Jung, Jungsoo Lee, Nanhee Kim, Amirreza Shaban, Byron Boots,
Jaegul Choo
- Abstract summary: Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner while mitigating distribution shifts at test time.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite recent advancements in deep learning, deep neural networks continue
to suffer from performance degradation when applied to new data that differs
from training data. Test-time adaptation (TTA) aims to address this challenge
by adapting a model to unlabeled data at test time. TTA can be applied to
pretrained networks without modifying their training procedures, enabling them
to utilize a well-formed source distribution for adaptation. One possible
approach is to align the representation space of test samples to the source
distribution (i.e., feature alignment). However, performing feature alignment in TTA is especially challenging because access to labeled source data is restricted during adaptation. That is, the model has no opportunity to learn the test data in a class-discriminative manner, which is feasible in other adaptation tasks (e.g., unsupervised domain adaptation) via supervised losses on the source data. Based on this observation, we propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously 1) encourages a model to learn target representations in a class-discriminative manner and 2) effectively mitigates distribution shifts at test time. Unlike previous approaches, our method requires no additional hyper-parameters or auxiliary losses. We conduct extensive experiments on 6 different datasets and show that our proposed method consistently outperforms existing baselines.
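The abstract's key mechanism, aligning test features to the source distribution in a class-aware way, can be made concrete with a short sketch. The version below is an illustration under stated assumptions, not the paper's exact loss: it assumes per-class feature means and a shared covariance precomputed on source data, and uses argmax pseudo-labels; all function and variable names here are ours.

```python
import torch
import torch.nn.functional as F

def class_aware_alignment_loss(feats, logits, class_means, cov_inv):
    """Hypothetical class-aware feature alignment loss (not CAFA's exact form).

    feats:       (B, D) features of the current test batch
    logits:      (B, C) classifier outputs, used only for pseudo-labels
    class_means: (C, D) per-class feature means precomputed on source data
    cov_inv:     (D, D) inverse of a shared source feature covariance
    """
    pseudo = logits.argmax(dim=1)                          # (B,) pseudo-labels
    diff = feats.unsqueeze(1) - class_means.unsqueeze(0)   # (B, C, D)
    # Squared Mahalanobis distance from each feature to every class mean.
    maha = torch.einsum('bcd,de,bce->bc', diff, cov_inv, diff)
    # Treating negative distances as logits pulls each feature toward its
    # pseudo-class statistics and away from the other classes' statistics.
    return F.cross_entropy(-maha, pseudo)
```

Because class_means and cov_inv are computed once before deployment, adaptation itself never touches labeled source data, matching the source-free constraint the abstract describes.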
Related papers
- BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping [64.8477128397529]
We propose a test-time adaptation framework that bridges training-required and training-free methods.
We maintain a lightweight key-value memory for feature retrieval from instance-agnostic historical samples and instance-aware boosting samples.
We theoretically justify the rationality behind our method and empirically verify its effectiveness on both the out-of-distribution and the cross-domain datasets.
arXiv Detail & Related papers (2024-10-20T15:58:43Z)
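The "lightweight key-value memory" in the BoostAdapter summary above suggests a small cache of features and labels queried by similarity. The sketch below is one plausible reading, not BoostAdapter's actual implementation; the FIFO eviction policy, the capacity, and all names are assumptions.

```python
import torch

class FeatureMemory:
    """Illustrative fixed-size key-value cache: keys are L2-normalized
    features, values are (soft) labels; retrieval is by cosine similarity."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self.keys, self.values = [], []

    def add(self, feat, label):
        self.keys.append(feat / feat.norm())
        self.values.append(label)
        if len(self.keys) > self.capacity:    # evict the oldest entry (FIFO)
            self.keys.pop(0)
            self.values.pop(0)

    def retrieve(self, query, k=3):
        if not self.keys:
            return None                       # nothing cached yet
        keys = torch.stack(self.keys)         # (N, D)
        sims = keys @ (query / query.norm())  # (N,) cosine similarities
        top = sims.topk(min(k, len(self.keys)))
        values = torch.stack([self.values[int(i)] for i in top.indices])
        # Similarity-weighted average of the retrieved labels.
        w = torch.softmax(top.values, dim=0)
        return (w.unsqueeze(1) * values).sum(dim=0)
```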
- Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams [19.921480334048756]
Test-Time Adaptation (TTA) enables adaptation and inference in test data streams with domain shifts from the source.
We propose a novel Distribution Alignment loss for TTA.
We surpass existing methods in non-i.i.d. scenarios and maintain competitive performance under the ideal i.i.d. assumption.
arXiv Detail & Related papers (2024-07-16T19:33:23Z)
- Data Adaptive Traceback for Vision-Language Foundation Models in Image Classification [34.37262622415682]
We propose a new adaptation framework called Data Adaptive Traceback.
Specifically, we utilize a zero-shot-based method to extract the subset of the pre-training data most relevant to the downstream task.
We adopt a pseudo-label-based semi-supervised technique to reuse the pre-training images and a vision-language contrastive learning method to address the confirmation bias issue in semi-supervised learning.
arXiv Detail & Related papers (2024-07-11T18:01:58Z)
- Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation [16.657929958093824]
Test-time adaptation is an approach to adjust models to a new data distribution during inference.
Test-time batch normalization is a simple and popular method that achieved compelling performance on domain shift benchmarks.
We propose to tackle this challenge by only selectively adapting channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts.
arXiv Detail & Related papers (2024-02-07T15:41:01Z)
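The channel-selective idea in the entry above can be illustrated with per-channel gating of batch-norm statistics. The drift score and threshold below are our assumptions for the sketch, not the paper's actual selection criterion.

```python
import torch

def selective_bn_stats(src_mean, src_var, x, threshold=0.5):
    """Hypothetical channel-selective BN adaptation (not the paper's rule).

    src_mean, src_var: (C,) running statistics stored from training
    x:                 (B, C, H, W) current test batch
    """
    test_mean = x.mean(dim=(0, 2, 3))
    test_var = x.var(dim=(0, 2, 3), unbiased=False)
    # Per-channel drift of the test mean, scaled by the source std.
    drift = (test_mean - src_mean).abs() / (src_var.sqrt() + 1e-5)
    # Adapt only channels whose statistics stay close to the source;
    # strongly drifting channels keep their source statistics.
    use_test = (drift < threshold).float()
    mean = use_test * test_mean + (1 - use_test) * src_mean
    var = use_test * test_var + (1 - use_test) * src_var
    return mean, var
```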
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
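DELTA's first observation above, that normalizing with statistics from only the current batch yields noisy estimates, is commonly addressed by smoothing statistics over the test stream. The sketch below shows that generic fix, not DELTA's actual test-time batch renormalization; the momentum value is illustrative.

```python
import torch

def ema_bn_stats(run_mean, run_var, x, momentum=0.1):
    """Exponentially smoothed test-time BN statistics (generic fix, not DELTA).

    run_mean, run_var: (C,) statistics accumulated over the test stream,
                       initialized from the source model's running stats
    x:                 (B, C, H, W) current test batch
    """
    batch_mean = x.mean(dim=(0, 2, 3))
    batch_var = x.var(dim=(0, 2, 3), unbiased=False)
    # Blend the new batch into the running estimates instead of
    # overwriting them, so a single odd batch cannot dominate.
    run_mean = (1 - momentum) * run_mean + momentum * batch_mean
    run_var = (1 - momentum) * run_var + momentum * batch_var
    return run_mean, run_var
```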
- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)