Un-Mixing Test-Time Normalization Statistics: Combatting Label Temporal Correlation
- URL: http://arxiv.org/abs/2401.08328v2
- Date: Thu, 14 Mar 2024 11:20:21 GMT
- Title: Un-Mixing Test-Time Normalization Statistics: Combatting Label Temporal Correlation
- Authors: Devavrat Tomar, Guillaume Vray, Jean-Philippe Thiran, Behzad Bozorgtabar
- Abstract summary: This paper presents a novel method termed 'Un-Mixing Test-Time Normalization Statistics' (UnMix-TNS).
Our method re-calibrates the statistics for each instance within a test batch by mixing it with multiple distinct statistics components.
Our results highlight UnMix-TNS's capacity to markedly enhance stability and performance across various benchmarks.
- Score: 11.743315123714108
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent test-time adaptation methods heavily rely on nuanced adjustments of batch normalization (BN) parameters. However, one critical assumption often goes overlooked: that of independently and identically distributed (i.i.d.) test batches with respect to unknown labels. This oversight leads to skewed BN statistics and undermines the reliability of the model under non-i.i.d. scenarios. To tackle this challenge, this paper presents a novel method termed 'Un-Mixing Test-Time Normalization Statistics' (UnMix-TNS). Our method re-calibrates the statistics for each instance within a test batch by mixing it with multiple distinct statistics components, thus inherently simulating the i.i.d. scenario. The core of this method hinges on a distinctive online unmixing procedure that continuously updates these statistics components by incorporating the most similar instances from new test batches. Remarkably generic in its design, UnMix-TNS seamlessly integrates with a wide range of leading test-time adaptation methods and pre-trained architectures equipped with BN layers. Empirical evaluations corroborate the robustness of UnMix-TNS under varied scenarios, ranging from single to continual and mixed domain shifts, particularly excelling with temporally correlated test data and corrupted non-i.i.d. real-world streams. This adaptability is maintained even with very small batch sizes or single instances. Our results highlight UnMix-TNS's capacity to markedly enhance stability and performance across various benchmarks. Our code is publicly available at https://github.com/devavratTomar/unmixtns.
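The abstract describes the mechanism only at a high level. The following is a minimal NumPy sketch of that idea for a single BN layer, written for this summary rather than taken from the authors' code: the component count `k`, the fixed 0.5 mixing weight, the cosine-similarity assignment, and the momentum update are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class UnMixTNSSketch:
    """Illustrative per-instance statistics un-mixing for one BN layer.

    Keeps k statistics components (mean, var) and, for each test
    instance, mixes the instance statistics with the components it is
    most similar to, approximating i.i.d. statistics under a
    temporally correlated stream.
    """

    def __init__(self, num_channels, k=16, momentum=0.05, eps=1e-5):
        self.k, self.momentum, self.eps = k, momentum, eps
        # In practice the components could be seeded from the source
        # model's running statistics; random init is an assumption here.
        self.means = np.random.randn(k, num_channels) * 0.1
        self.vars = np.ones((k, num_channels))

    def __call__(self, x):
        # x: (batch, channels, height, width) feature map.
        x = np.asarray(x, dtype=np.float64)
        inst_mean = x.mean(axis=(2, 3))              # (B, C)
        inst_var = x.var(axis=(2, 3))                # (B, C)
        out = np.empty_like(x)
        for i in range(x.shape[0]):
            # Similarity of this instance to each component (cosine on means).
            sim = self.means @ inst_mean[i]
            sim /= (np.linalg.norm(self.means, axis=1)
                    * np.linalg.norm(inst_mean[i]) + self.eps)
            w = np.exp(sim) / np.exp(sim).sum()      # soft assignment
            # Mix instance statistics with the similarity-weighted components.
            mix_mean = 0.5 * inst_mean[i] + 0.5 * (w @ self.means)
            mix_var = 0.5 * inst_var[i] + 0.5 * (w @ self.vars)
            out[i] = (x[i] - mix_mean[:, None, None]) / np.sqrt(
                mix_var[:, None, None] + self.eps)
            # Online un-mixing: nudge only the most similar component.
            j = int(np.argmax(sim))
            self.means[j] += self.momentum * (inst_mean[i] - self.means[j])
            self.vars[j] += self.momentum * (inst_var[i] - self.vars[j])
        return out
```

The role of the k components is that a label-correlated stream keeps updating different components over time, so the mixed statistics seen by any one instance resemble those of an i.i.d. batch.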
Related papers
- STAMP: Outlier-Aware Test-Time Adaptation with Stable Memory Replay [76.06127233986663]
Test-time adaptation (TTA) aims to address the distribution shift between the training and test data with only unlabeled data at test time.
This paper addresses the problem of performing both sample recognition and outlier rejection during inference when outliers exist.
We propose a new approach called STAble Memory rePlay (STAMP), which performs optimization over a stable memory bank instead of the risky mini-batch.
arXiv Detail & Related papers (2024-07-22T16:25:41Z)
- Discover Your Neighbors: Advanced Stable Test-Time Adaptation in Dynamic World [8.332531696256666]
Discover Your Neighbors (DYN) is the first backward-free approach specialized for dynamic test-time adaptation (TTA).
DYN consists of layer-wise instance statistics clustering (LISC) and cluster-aware batch normalization (CABN).
Experimental results validate DYN's robustness and effectiveness, showing that performance is maintained under dynamic data stream patterns.
arXiv Detail & Related papers (2024-06-08T09:22:32Z)
- MedBN: Robust Test-Time Adaptation against Malicious Test Samples [11.397666167665484]
Test-time adaptation (TTA) has emerged as a promising solution to address performance decay due to unforeseen distribution shifts between training and test data.
Previous studies have uncovered security vulnerabilities within TTA even when a small proportion of the test batch is maliciously manipulated.
We propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference.
arXiv Detail & Related papers (2024-03-28T11:33:02Z)
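As a rough illustration of the MedBN idea above, here is a hedged NumPy sketch. The summary only states that the median replaces the usual statistics estimation, so the robust scale estimator below is an assumption, not the paper's estimator.

```python
import numpy as np

def median_bn_sketch(x, gamma, beta, eps=1e-5):
    """Test-time normalization using batch medians instead of means.

    x: (batch, channels) features; gamma, beta: (channels,) affine params.
    The median is robust to a minority of maliciously crafted samples
    that would skew a mean/variance estimate.
    """
    med = np.median(x, axis=0)                      # robust location
    # Robust scale via the median of squared deviations (an assumption
    # here; the paper's exact variance estimator may differ).
    scale = np.median((x - med) ** 2, axis=0)
    return gamma * (x - med) / np.sqrt(scale + eps) + beta
```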
- Exact Consistency Tests for Gaussian Mixture Filters using Normalized Deviation Squared Statistics [3.3748750222488657]
This paper derives a new exact result for consistency testing within the framework of normalized deviation squared (NDS) statistics.
The accuracy and utility of the resulting consistency tests are numerically demonstrated on static and dynamic mixture estimation examples.
arXiv Detail & Related papers (2023-12-29T01:28:40Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- C-Mixup: Improving Generalization in Regression [71.10418219781575]
The mixup algorithm improves generalization by linearly interpolating pairs of examples and their corresponding labels.
We propose C-Mixup, which adjusts the sampling probability based on the similarity of the labels.
C-Mixup achieves 6.56%, 4.76%, 5.82% improvements in in-distribution generalization, task generalization, and out-of-distribution robustness, respectively.
arXiv Detail & Related papers (2022-10-11T20:39:38Z)
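The C-Mixup entry above describes a concrete sampling rule, which the short NumPy sketch below illustrates. The Gaussian kernel on label distance matches the stated idea, while the Beta parameter and kernel bandwidth are placeholder values, not the paper's tuned settings.

```python
import numpy as np

def c_mixup_batch(x, y, alpha=2.0, bandwidth=1.0, rng=np.random):
    """One C-Mixup step for regression: for each anchor, sample a partner
    with probability proportional to a Gaussian kernel on label distance,
    then interpolate inputs and labels as in standard mixup.

    x: (n, d) inputs; y: (n,) continuous labels.
    """
    n = len(y)
    lam = rng.beta(alpha, alpha)
    # Pairwise label similarity kernel; closer labels -> higher probability.
    dist2 = (y[:, None] - y[None, :]) ** 2
    p = np.exp(-dist2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(p, 0.0)           # don't pair a sample with itself
    p /= p.sum(axis=1, keepdims=True)
    partners = np.array([rng.choice(n, p=p[i]) for i in range(n)])
    x_mix = lam * x + (1.0 - lam) * x[partners]
    y_mix = lam * y + (1.0 - lam) * y[partners]
    return x_mix, y_mix
```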
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner.
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
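Of the two components above, Prediction-balanced Reservoir Sampling is the easier one to sketch. The following minimal Python illustration assumes a fixed per-class quota; the paper's actual eviction policy may differ.

```python
import random
from collections import defaultdict

class PBRSSketch:
    """Prediction-balanced reservoir sampling, sketched from the summary:
    keep a fixed-size memory that stays class-balanced with respect to the
    model's *predicted* labels, so a temporally correlated stream can be
    replayed as if it were i.i.d.
    """

    def __init__(self, capacity=64, num_classes=10):
        self.per_class = capacity // num_classes  # quota is an assumption
        self.buffers = defaultdict(list)   # predicted class -> samples
        self.seen = defaultdict(int)       # per-class stream counts

    def add(self, sample, pred_class):
        self.seen[pred_class] += 1
        buf = self.buffers[pred_class]
        if len(buf) < self.per_class:
            buf.append(sample)
        else:
            # Classic reservoir step: replace with decreasing probability.
            j = random.randrange(self.seen[pred_class])
            if j < self.per_class:
                buf[j] = sample

    def replay(self):
        # A prediction-balanced batch drawn from the memory.
        return [s for buf in self.buffers.values() for s in buf]
```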
- MixNorm: Test-Time Adaptation Through Online Normalization Estimation [35.65295482033232]
We present a simple and effective way to estimate the batch-norm statistics during test time in order to quickly adapt a source model to target test samples.
Known as test-time adaptation, this task is usually evaluated under two assumptions: (1) test samples arrive together as a large batch, and (2) they all come from a single test distribution.
arXiv Detail & Related papers (2021-10-21T21:04:42Z)
- Test-time Batch Statistics Calibration for Covariate Shift [66.7044675981449]
We propose to adapt deep models to the novel environment during inference.
We present a general formulation, $\alpha$-BN, to calibrate the batch statistics.
We also present a novel loss function to form a unified test-time adaptation framework, Core.
arXiv Detail & Related papers (2021-10-06T08:45:03Z)
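From the summary alone, $\alpha$-BN calibrates the batch statistics; a natural reading is a convex combination of the source model's running statistics and the current test-batch statistics, sketched below in NumPy. The mixing convention and the value of alpha are assumptions made for illustration.

```python
import numpy as np

def alpha_bn_sketch(x, run_mean, run_var, gamma, beta, alpha=0.9, eps=1e-5):
    """alpha-BN-style calibration: normalize with a convex combination of
    source running statistics and current test-batch statistics.

    x: (batch, channels); gamma, beta: (channels,) affine parameters;
    alpha weights the test statistics (value assumed here).
    """
    mean = alpha * x.mean(axis=0) + (1.0 - alpha) * run_mean
    var = alpha * x.var(axis=0) + (1.0 - alpha) * run_var
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```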
- Double Forward Propagation for Memorized Batch Normalization [68.34268180871416]
Batch Normalization (BN) has been a standard component in designing deep neural networks (DNNs).
We propose a memorized batch normalization (MBN) which considers multiple recent batches to obtain more accurate and robust statistics.
Compared to related methods, the proposed MBN exhibits consistent behaviors in both training and inference.
arXiv Detail & Related papers (2020-10-10T08:48:41Z)
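The MBN summary suggests pooling statistics over several recent batches. The sketch below illustrates that with a simple moment buffer; it ignores the paper's double forward propagation, so it should be read as a naive approximation of the stated idea.

```python
import numpy as np
from collections import deque

class MemorizedBNSketch:
    """Memorized batch normalization, sketched from the summary: pool the
    statistics of the last `memory` batches instead of a single batch, so
    small or skewed batches yield more stable estimates. Plain averaging
    over stored moments is an assumption made here.
    """

    def __init__(self, memory=5, eps=1e-5):
        self.means = deque(maxlen=memory)
        self.vars = deque(maxlen=memory)
        self.eps = eps

    def __call__(self, x):
        # x: (batch, channels). Remember this batch's moments ...
        self.means.append(x.mean(axis=0))
        self.vars.append(x.var(axis=0))
        # ... and normalize with statistics pooled across recent batches.
        mean = np.mean(list(self.means), axis=0)
        var = np.mean(list(self.vars), axis=0)
        return (x - mean) / np.sqrt(var + self.eps)
```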