Learning Loss for Test-Time Augmentation
- URL: http://arxiv.org/abs/2010.11422v1
- Date: Thu, 22 Oct 2020 03:56:34 GMT
- Title: Learning Loss for Test-Time Augmentation
- Authors: Ildoo Kim, Younghoon Kim, Sungwoong Kim
- Abstract summary: This paper proposes a novel instance-level test-time augmentation that efficiently selects suitable transformations for a test input.
Experimental results on several image classification benchmarks show that the proposed instance-aware test-time augmentation improves the model's robustness against various corruptions.
- Score: 25.739449801033846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation has been actively studied for building robust
neural networks. Most recent data augmentation methods focus on augmenting
datasets during the training phase. At test time, simple transformations are
still widely used for test-time augmentation. This paper proposes a novel
instance-level test-time augmentation that efficiently selects suitable
transformations for a test input. The proposed method uses an auxiliary module
to predict the loss of each candidate transformation given the input. The
transformations with the lowest predicted losses are then applied to the input,
and the network obtains its result by averaging the predictions over the
augmented inputs. Experimental results on several image classification
benchmarks show that the proposed instance-aware test-time augmentation
improves the model's robustness against various corruptions.
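The abstract's pipeline can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `model` and `loss_predictor` below are dummy stand-ins (in the paper, the loss predictor is a trained auxiliary module that estimates the classifier's loss under each candidate transformation), and the transformation set is an arbitrary 1-D analogue of image augmentations.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Dummy classifier: returns softmax probabilities over 3 classes."""
    logits = np.array([x.sum(), x.mean(), x.std()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def loss_predictor(x, num_transforms):
    """Dummy auxiliary module: predicts one loss per candidate transformation.
    In the paper this module is trained; here it returns random scores."""
    return rng.random(num_transforms)

# Candidate test-time transformations (illustrative, not the paper's set).
transforms = [
    lambda x: x,              # identity
    lambda x: x[::-1],        # flip (1-D analogue of a horizontal flip)
    lambda x: x * 1.1,        # brightness-like scaling
    lambda x: np.roll(x, 1),  # small translation
]

def instance_aware_tta(x, k=2):
    """Select the k transformations with the lowest predicted loss,
    apply them to the input, and average the classifier's predictions."""
    predicted_losses = loss_predictor(x, len(transforms))
    chosen = np.argsort(predicted_losses)[:k]
    preds = [model(transforms[i](x)) for i in chosen]
    return np.mean(preds, axis=0)

x = rng.random(8)
probs = instance_aware_tta(x, k=2)
print(probs.shape, np.isclose(probs.sum(), 1.0))
```

Because each selected transformation yields a valid probability vector, the averaged output remains a valid probability distribution over the classes.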
Related papers
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms, such as low-rank computation, achieve impressive performance for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on the testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z) - Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning [73.75282761503581]
We propose DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data.
Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13%.
arXiv Detail & Related papers (2023-08-11T09:36:31Z) - Improved Text Classification via Test-Time Augmentation [2.493374942115722]
Test-time augmentation is an established technique to improve the performance of image classification models.
We present augmentation policies that yield significant accuracy improvements with language models.
Experiments on a binary classification task and dataset show that test-time augmentation can deliver consistent improvements.
arXiv Detail & Related papers (2022-06-27T19:57:27Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z) - Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z) - MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z) - Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation [44.494319305269535]
Neural networks often exhibit poor performance on data unlikely under the train-time data distribution.
This paper focuses on the fully test-time adaptation setting, where only unlabeled data from the target distribution is required.
We propose a novel loss that improves test-time adaptation by addressing both premature convergence and instability of entropy minimization.
arXiv Detail & Related papers (2021-06-28T22:06:10Z) - Better Aggregation in Test-Time Augmentation [4.259219671110274]
Test-time augmentation is the aggregation of predictions across transformed versions of a test input.
A key finding is that even when test-time augmentation produces a net improvement in accuracy, it can change many correct predictions into incorrect predictions.
We present a learning-based method for aggregating test-time augmentations.
arXiv Detail & Related papers (2020-11-23T00:46:00Z) - Self-paced Data Augmentation for Training Neural Networks [11.554821454921536]
We propose a self-paced augmentation to automatically select suitable samples for data augmentation when training a neural network.
The proposed method mitigates the deterioration of generalization performance caused by ineffective data augmentation.
Experimental results demonstrate that the proposed SPA can improve the generalization performance, particularly when the number of training samples is small.
arXiv Detail & Related papers (2020-10-29T09:13:18Z)
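Several of the entries above build on the same basic test-time augmentation loop: transform the input, predict on each transformed copy, and aggregate. A minimal sketch of that loop follows; uniform averaging is the classic baseline, while "Better Aggregation in Test-Time Augmentation" learns the aggregation instead (the non-uniform weights below are illustrative values, not from that paper, and `model` is a dummy classifier).

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def model(x):
    """Dummy classifier: returns softmax probabilities over 3 classes."""
    return softmax(np.array([x.sum(), x.mean(), x.std()]))

# Illustrative test-time transformations (1-D analogues of image augmentations).
transforms = [
    lambda x: x,              # identity
    lambda x: x[::-1],        # flip
    lambda x: np.roll(x, 1),  # small translation
]

def tta_predict(x, weights=None):
    """Aggregate predictions over transformed copies of the input.
    weights=None gives plain uniform averaging; a learned aggregator
    would supply per-transform weights instead."""
    preds = np.stack([model(t(x)) for t in transforms])
    if weights is None:
        weights = np.full(len(transforms), 1.0 / len(transforms))
    return weights @ preds

x = np.linspace(0.0, 1.0, 8)
uniform = tta_predict(x)
weighted = tta_predict(x, weights=np.array([0.5, 0.3, 0.2]))
print(np.isclose(uniform.sum(), 1.0), np.isclose(weighted.sum(), 1.0))
```

As long as the weights are non-negative and sum to one, the aggregated output is itself a valid probability distribution, which is what makes weighted aggregation a drop-in replacement for uniform averaging.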
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.