Test-Time Training with Masked Autoencoders
- URL: http://arxiv.org/abs/2209.07522v1
- Date: Thu, 15 Sep 2022 17:59:34 GMT
- Title: Test-Time Training with Masked Autoencoders
- Authors: Yossi Gandelsman, Yu Sun, Xinlei Chen, Alexei A. Efros
- Abstract summary: Test-time training adapts to a new test distribution on the fly by optimizing a model for each test input using self-supervision.
In this paper, we use masked autoencoders for this one-sample learning problem.
- Score: 54.983147122777574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Test-time training adapts to a new test distribution on the fly by optimizing
a model for each test input using self-supervision. In this paper, we use
masked autoencoders for this one-sample learning problem. Empirically, our
simple method improves generalization on many visual benchmarks for
distribution shifts. Theoretically, we characterize this improvement in terms
of the bias-variance trade-off.
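Below is a minimal PyTorch sketch of the test-time training loop the abstract describes, not the authors' implementation: the `patchify` helper, the `encoder`/`mae_decoder`/`classifier` call signatures, and all hyperparameters (mask ratio, steps, learning rate) are placeholder assumptions. The essential control flow is: for each test image, briefly optimize the encoder on the MAE reconstruction loss computed on that single image, predict with the adapted weights, then restore the original weights before the next sample.

```python
import copy
import torch
import torch.nn.functional as F

def patchify(image, patch_size=16):
    """Split a (1, C, H, W) image into (1, num_patches, patch_dim) patches."""
    n, c, h, w = image.shape
    p = patch_size
    patches = image.reshape(n, c, h // p, p, w // p, p)
    return patches.permute(0, 2, 4, 3, 5, 1).reshape(n, (h // p) * (w // p), p * p * c)

def test_time_train_mae(encoder, mae_decoder, classifier, test_loader,
                        steps=20, lr=1e-3, mask_ratio=0.75):
    """Adapt the encoder to each test image with an MAE reconstruction loss,
    predict with the adapted weights, then restore the original weights.

    encoder, mae_decoder, classifier are placeholder modules:
    encoder(patches, keep_idx) -> latents, mae_decoder(latents, keep_idx, n) ->
    reconstructed patches for all n positions, classifier(features) -> logits.
    """
    base_state = copy.deepcopy(encoder.state_dict())   # weights to restore after each sample
    predictions = []

    for image, _ in test_loader:                       # one test sample at a time
        optimizer = torch.optim.SGD(encoder.parameters(), lr=lr)

        for _ in range(steps):                         # self-supervised inner loop on this image
            patches = patchify(image)                  # (1, num_patches, patch_dim)
            num_patches = patches.shape[1]
            perm = torch.randperm(num_patches)
            num_keep = int(num_patches * (1 - mask_ratio))
            keep_idx, mask_idx = perm[:num_keep], perm[num_keep:]

            latents = encoder(patches[:, keep_idx], keep_idx)
            recon = mae_decoder(latents, keep_idx, num_patches)
            loss = F.mse_loss(recon[:, mask_idx], patches[:, mask_idx])  # loss on masked patches only

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        with torch.no_grad():                          # predict with the adapted encoder
            features = encoder(patchify(image), None)  # None = no masking (placeholder convention)
            predictions.append(classifier(features).argmax(dim=-1))

        encoder.load_state_dict(base_state)            # reset before the next test sample

    return torch.cat(predictions)
```

In the paper the predictor is a classification head on a ViT encoder pretrained as a masked autoencoder; the sketch fixes only the adapt-predict-reset control flow, not those architectural details.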
Related papers
- Point-TTA: Test-Time Adaptation for Point Cloud Registration Using Multitask Meta-Auxiliary Learning [17.980649681325406]
We present Point-TTA, a novel test-time adaptation framework for point cloud registration (PCR).
Our model can adapt to unseen distributions at test-time without requiring any prior knowledge of the test data.
During training, our model is trained using a meta-auxiliary learning approach, such that the adapted model via auxiliary tasks improves the accuracy of the primary task.
arXiv Detail & Related papers (2023-08-31T06:32:11Z)
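As a rough, generic illustration of the meta-auxiliary training pattern summarized in the Point-TTA entry above (not the paper's actual method), the sketch below uses a first-order MAML-style step in PyTorch: adapt a copy of the model on a self-supervised auxiliary loss, then update the original model with the primary-task gradient taken at the adapted weights. The `aux_loss` and `primary_loss` callables and the inner learning rate are assumptions.

```python
import copy
import torch

def meta_auxiliary_step(model, outer_opt, aux_loss, primary_loss, batch, inner_lr=1e-3):
    """One first-order, MAML-style meta-auxiliary training step.

    aux_loss(net, batch) and primary_loss(net, batch) are placeholder callables
    returning scalar losses, e.g. a self-supervised reconstruction loss and the
    supervised task loss; outer_opt is an optimizer over model.parameters().
    """
    # Inner step: adapt a throwaway copy of the model on the auxiliary task.
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    inner_opt.zero_grad()
    aux_loss(adapted, batch).backward()
    inner_opt.step()

    # Outer step: evaluate the primary loss at the adapted weights and apply its
    # gradient to the original model (a first-order shortcut; full MAML would
    # differentiate through the inner update).
    adapted.zero_grad()
    loss = primary_loss(adapted, batch)
    loss.backward()
    for p, q in zip(model.parameters(), adapted.parameters()):
        p.grad = None if q.grad is None else q.grad.clone()
    outer_opt.step()
    outer_opt.zero_grad()
    return loss.detach()
```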
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models [107.05966685291067]
We propose test-time prompt tuning (TPT) to learn adaptive prompts on the fly with a single test sample.
TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average.
In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data.
arXiv Detail & Related papers (2022-09-15T17:55:11Z)
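The TPT summary above does not state the tuning objective; the sketch below is a hedged illustration, not the paper's code, using the marginal-entropy objective TPT is known for: augment the single test image, keep only the most confident views, and update a small set of learnable prompt parameters to minimize the entropy of the averaged prediction. `image_encoder`, `text_features_from_prompt`, and `augment` are placeholder callables standing in for a CLIP-like model and its augmentation pipeline.

```python
import torch
import torch.nn.functional as F

def tune_prompt_on_one_sample(image, prompt_ctx, image_encoder, text_features_from_prompt,
                              augment, n_views=32, keep_frac=0.1, steps=1, lr=5e-3):
    """Test-time prompt tuning on a single image (illustrative sketch).

    prompt_ctx: learnable prompt tensor with requires_grad=True.
    image_encoder(images) -> (N, D) image features; text_features_from_prompt(prompt_ctx)
    -> (num_classes, D) class text features; augment(image) -> one random view.
    All three are placeholder callables for a CLIP-like model.
    """
    optimizer = torch.optim.AdamW([prompt_ctx], lr=lr)
    views = torch.stack([augment(image) for _ in range(n_views)])

    with torch.no_grad():                                 # image features need no gradient
        img_feats = F.normalize(image_encoder(views), dim=-1)

    for _ in range(steps):
        txt_feats = F.normalize(text_features_from_prompt(prompt_ctx), dim=-1)
        probs = (100.0 * img_feats @ txt_feats.t()).softmax(dim=-1)   # (n_views, num_classes)

        # Keep only the most confident augmented views (lowest per-view entropy).
        view_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        keep = view_entropy.argsort()[: max(1, int(n_views * keep_frac))]

        # Minimize the entropy of the prediction averaged over the kept views.
        avg_probs = probs[keep].mean(dim=0)
        loss = -(avg_probs * avg_probs.clamp_min(1e-12).log()).sum()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return prompt_ctx
```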
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
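The Autoencoding Variational Autoencoder summary above leaves the self-consistency construction implicit; the sketch below shows one simple way such a constraint can be written for a standard VAE, and is not necessarily the paper's formulation: a sample decoded from the posterior should re-encode close to the latent it was decoded from. `encoder` and `decoder` are placeholder modules.

```python
import torch
import torch.nn.functional as F

def vae_loss_with_self_consistency(encoder, decoder, x, beta=1.0):
    """Illustrative VAE objective with an added self-consistency term.

    encoder(x) -> (mu, logvar) of the approximate posterior; decoder(z) -> x_hat.
    Both are placeholder modules. The extra term asks that a decoded sample,
    when re-encoded, lands near the latent it was decoded from.
    """
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # reparameterized sample
    x_hat = decoder(z)

    recon = F.mse_loss(x_hat, x)                              # standard reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    mu2, _ = encoder(x_hat)                                   # re-encode the decoded sample
    consistency = F.mse_loss(mu2, z.detach())                 # self-consistency penalty

    return recon + kl + beta * consistency
```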
- Robust Sampling in Deep Learning [62.997667081978825]
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization.
We address this problem with a new regularization method based on distributionally robust optimization.
During training, samples are selected according to their accuracy so that the worst-performing samples contribute the most to the optimization.
arXiv Detail & Related papers (2020-06-04T09:46:52Z)
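As a loose illustration of the worst-sample weighting described in the Robust Sampling entry above (not the paper's exact formulation), the sketch below computes per-sample losses and reweights them so that the hardest examples in a batch dominate the gradient; the softmax weighting and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def worst_case_weighted_loss(logits, targets, temperature=2.0):
    """Batch loss that up-weights the worst-performing samples.

    A softmax over the per-sample losses (scaled by `temperature`) makes
    high-loss samples contribute most to the gradient: temperature -> 0
    recovers the plain average, a large temperature approaches training on
    the single hardest sample in the batch.
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")    # (batch,)
    weights = torch.softmax(temperature * per_sample.detach(), dim=0)  # no gradient through weights
    return (weights * per_sample).sum()

# Usage inside an otherwise standard training loop (model, optimizer, data assumed):
#   loss = worst_case_weighted_loss(model(x), y)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```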
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.