Test-Time Adaptation with Principal Component Analysis
- URL: http://arxiv.org/abs/2209.05779v1
- Date: Tue, 13 Sep 2022 07:24:40 GMT
- Title: Test-Time Adaptation with Principal Component Analysis
- Authors: Thomas Cordier and Victor Bouvier and Gilles Hénaff and Céline Hudelot
- Abstract summary: We propose Test-Time Adaptation with Principal Component Analysis (TTAwPCA).
TTAwPCA combines three components: the output of a given layer is decomposed using a Principal Component Analysis (PCA), filtered by a penalization of its singular values, and reconstructed with the PCA inverse transform.
Experiments on CIFAR-10-C and CIFAR-100-C demonstrate the effectiveness and limits of our method.
- Score: 1.0323063834827415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning models are prone to fail when test data are different from
training data, a situation often encountered in real applications known as
distribution shift. While still valid, the training-time knowledge becomes less
effective, requiring a test-time adaptation to maintain high performance.
Following approaches that assume batch-norm layers and use their statistics for
adaptation, we propose a Test-Time Adaptation with Principal Component Analysis
(TTAwPCA), which presumes a fitted PCA and adapts at test time a spectral
filter based on the singular values of the PCA for robustness to corruptions.
TTAwPCA combines three components: the output of a given layer is decomposed
using a Principal Component Analysis (PCA), filtered by a penalization of its
singular values, and reconstructed with the PCA inverse transform. This generic
enhancement adds fewer parameters than current methods. Experiments on
CIFAR-10-C and CIFAR-100-C demonstrate the effectiveness and limits of our
method using a unique filter of 2000 parameters.
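The three-step pipeline in the abstract (PCA projection, spectral filtering of the singular directions, inverse transform) can be sketched as follows. This is a minimal illustration assuming a PCA already fitted on clean data; the function name `ttawpca_filter` and the per-component `penalty` parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ttawpca_filter(features, components, mean, penalty):
    """Sketch of the TTAwPCA pipeline: project features onto a fitted
    PCA basis, shrink each principal direction with a learnable
    penalty, and reconstruct with the PCA inverse transform."""
    centered = features - mean            # center with the fitted mean
    coords = centered @ components.T      # PCA projection: (n, k)
    coords = coords * penalty             # spectral filtering per direction
    return coords @ components + mean     # PCA inverse transform

# Toy usage: fit a PCA basis on clean data via SVD, then apply the filter.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:8]                       # keep the top-8 directions
penalty = np.ones(8)                      # identity filter before adaptation
out = ttawpca_filter(X, components, mean, penalty)
```

At test time, only `penalty` would be adapted (e.g. by entropy minimization on corrupted batches), which is consistent with the abstract's claim of a small parameter count.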
Related papers
- Relative Entropy Pathwise Policy Optimization [56.86405621176669]
We show how to construct a value-gradient driven, on-policy algorithm that allows training Q-value models purely from on-policy data.
We propose Relative Entropy Pathwise Policy Optimization (REPPO), an efficient on-policy algorithm that combines the sample-efficiency of pathwise policy gradients with the simplicity and minimal memory footprint of standard on-policy learning.
arXiv Detail & Related papers (2025-07-15T06:24:07Z) - Accurate Parameter-Efficient Test-Time Adaptation for Time Series Forecasting [2.688011048756518]
Real-world time series often exhibit a non-stationary nature, degrading the performance of pre-trained forecasting models.
We propose PETSA, a method that adapts forecasters at test time by updating only small calibration modules on the input and output.
PETSA uses low-rank adapters and dynamic gating to adjust representations without retraining.
arXiv Detail & Related papers (2025-06-29T23:09:35Z) - TTAQ: Towards Stable Post-training Quantization in Continuous Domain Adaptation [3.7024647541541014]
Post-training quantization (PTQ) reduces excessive hardware cost by quantizing full-precision models into lower bit representations on a tiny calibration set.
Traditional PTQ methods typically encounter failure in dynamic and ever-changing real-world scenarios.
We propose a novel and stable quantization process for test-time adaptation (TTA), dubbed TTAQ, to address the performance degradation of traditional PTQ.
arXiv Detail & Related papers (2024-12-13T06:34:59Z) - ETAGE: Enhanced Test Time Adaptation with Integrated Entropy and Gradient Norms for Robust Model Performance [18.055032898349438]
Test time adaptation (TTA) equips deep learning models to handle unseen test data that deviates from the training distribution.
We introduce ETAGE, a refined TTA method that integrates entropy minimization with gradient norms and PLPD.
Our method prioritizes samples that are less likely to cause instability by excluding samples that combine high entropy with high gradient norms from adaptation.
arXiv Detail & Related papers (2024-09-14T01:25:52Z) - Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on a quantized 8-bit ViT, outperforms gradient-based TENT on a full-precision 32-bit ViT, and achieves up to a 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z) - REALM: Robust Entropy Adaptive Loss Minimization for Improved
Single-Sample Test-Time Adaptation [5.749155230209001]
Fully-test-time adaptation (F-TTA) can mitigate performance loss due to distribution shifts between train and test data.
We present a general framework for improving robustness of F-TTA to noisy samples, inspired by self-paced learning and robust loss functions.
arXiv Detail & Related papers (2023-09-07T18:44:58Z) - Functional PCA and Deep Neural Networks-based Bayesian Inverse
Uncertainty Quantification with Transient Experimental Data [1.6328866317851187]
Inverse UQ is the process of inversely quantifying the model input uncertainties based on experimental data.
This work focuses on developing an inverse UQ process for time-dependent responses, using dimensionality reduction by functional principal component analysis (PCA) and deep neural network (DNN)-based surrogate models.
arXiv Detail & Related papers (2023-07-10T18:07:17Z) - DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
arXiv Detail & Related papers (2023-01-30T15:54:00Z) - TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z) - Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z) - AgFlow: Fast Model Selection of Penalized PCA via Implicit
Regularization Effects of Gradient Flow [64.81110234990888]
Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.
In the High Dimension Low Sample Size (HDLSS) setting, one may prefer modified principal components, with penalized loadings.
We propose Approximated Gradient Flow (AgFlow) as a fast model selection method for penalized PCA.
arXiv Detail & Related papers (2021-10-07T08:57:46Z) - FAST-PCA: A Fast and Exact Algorithm for Distributed Principal Component
Analysis [12.91948651812873]
Principal Component Analysis (PCA) is a fundamental data preprocessing tool in the world of machine learning.
This paper proposes a distributed PCA algorithm called FAST-PCA (Fast and exAct diSTributed PCA).
arXiv Detail & Related papers (2021-08-27T16:10:59Z) - Unsupervised Domain Adaptation for Speech Recognition via Uncertainty
Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as a source domain and TED-LIUM 3 as well as SWITCHBOARD show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.