Boost Test-Time Performance with Closed-Loop Inference
- URL: http://arxiv.org/abs/2203.10853v1
- Date: Mon, 21 Mar 2022 10:20:21 GMT
- Title: Boost Test-Time Performance with Closed-Loop Inference
- Authors: Shuaicheng Niu and Jiaxiang Wu and Yifan Zhang and Guanghui Xu and
Haokun Li and Junzhou Huang and Yaowei Wang and Mingkui Tan
- Abstract summary: We propose to predict hard-classified test samples in a looped manner to boost the model performance.
We first devise a filtering criterion to identify those hard-classified test samples that need additional inference loops.
For each hard sample, we construct an additional auxiliary learning task based on its original top-$K$ predictions to calibrate the model.
- Score: 85.43516360332646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional deep models predict a test sample with a single forward
propagation, which, however, may not be sufficient for predicting
hard-classified samples. In contrast, we humans may need to carefully check a
sample many times before making a final decision, and during this recheck we
may refine or adjust the prediction by referring to related samples. Motivated
by this, we propose to predict hard-classified test samples in a looped manner
to boost model performance. This idea, however, poses a critical challenge:
how to construct the inference loop so that the original
erroneous predictions on these hard test samples can be corrected with little
additional effort. To address this, we propose a general Closed-Loop Inference
(CLI) method. Specifically, we first devise a filtering criterion to identify
those hard-classified test samples that need additional inference loops. For
each hard sample, we construct an additional auxiliary learning task based on
its original top-$K$ predictions to calibrate the model, and then use the
calibrated model to obtain the final prediction. Promising results on ImageNet
(in-distribution test samples) and ImageNet-C (out-of-distribution test
samples) demonstrate the effectiveness of CLI in improving the performance of
any pre-trained model.
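As a rough illustration of the pipeline described in the abstract, the sketch below (our own illustrative Python/PyTorch, not the authors' released code) flags a low-confidence prediction as a hard sample, builds an auxiliary task restricted to the sample's top-$K$ predicted classes, briefly calibrates a copy of the model on that task, and then re-predicts. The confidence threshold, the entropy-minimization objective, and all hyper-parameters are assumptions for illustration; the paper's actual filtering criterion and auxiliary learning task may differ.

```python
import copy
import torch
import torch.nn.functional as F

def closed_loop_inference(model, x, k=5, conf_threshold=0.8, steps=3, lr=1e-4):
    """Illustrative closed-loop inference for a single test sample x of shape
    [1, C, H, W]. Hyper-parameters and the auxiliary objective are assumptions."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)

    # Filtering criterion (assumed here: maximum softmax probability).
    # Confident samples keep their single-pass prediction.
    if conf.item() >= conf_threshold:
        return pred.item()

    # Hard sample: build an auxiliary task over its top-K predicted classes
    # and briefly calibrate a copy of the model on it.
    topk = probs.topk(k, dim=1).indices.squeeze(0)
    calibrated = copy.deepcopy(model)
    optimizer = torch.optim.SGD(calibrated.parameters(), lr=lr)
    for _ in range(steps):
        logits_k = calibrated(x)[:, topk]        # restrict to top-K classes
        probs_k = F.softmax(logits_k, dim=1)
        loss = -(probs_k * probs_k.log()).sum()  # entropy minimization (assumed objective)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Final prediction from the calibrated model, mapped back to original labels.
    with torch.no_grad():
        best = calibrated(x)[:, topk].argmax(dim=1)
    return topk[best].item()
```

In this sketch the calibrated copy is discarded after predicting, so each hard sample is handled independently; how hard samples are batched or how calibration is reused is not covered by the abstract and is not shown here.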
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapter (TDA) is a promising approach to address this issue.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- A3Rank: Augmentation Alignment Analysis for Prioritizing Overconfident Failing Samples for Deep Learning Models [2.6499018693213316]
We propose a novel test case prioritization technique with augmentation alignment analysis.
$A^3$Rank can effectively rank failing samples that escape the checking of confidence-based rejectors.
We also provide a framework to construct a detector that augments these rejectors to defend against such failing samples.
arXiv Detail & Related papers (2024-07-19T08:32:10Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples; a rough sketch of such a criterion appears after this list.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, they each introduce additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
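For the sample-selection idea mentioned in the EATA-related entries above, here is a minimal sketch (our own illustrative Python, not the papers' code), assuming an entropy-based reliability check and a cosine-similarity redundancy check; the thresholds are made up, and the Fisher anti-forgetting regularizer is omitted.

```python
import math
import torch
import torch.nn.functional as F

def select_samples(logits, ema_probs, ent_frac=0.4, sim_threshold=0.9):
    """Illustrative test-sample selection (thresholds are assumptions).
    Keeps samples whose predictions are low-entropy (reliable) and dissimilar
    from a running-average prediction (non-redundant)."""
    probs = F.softmax(logits, dim=1)                 # [B, num_classes]
    entropy = -(probs * probs.log()).sum(dim=1)      # per-sample prediction entropy
    reliable = entropy < ent_frac * math.log(probs.size(1))
    similarity = F.cosine_similarity(probs, ema_probs.unsqueeze(0), dim=1)
    non_redundant = similarity < sim_threshold
    return reliable & non_redundant                  # boolean mask, shape [B]
```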
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.