Optimization-Free Test-Time Adaptation for Cross-Person Activity
Recognition
- URL: http://arxiv.org/abs/2310.18562v2
- Date: Wed, 7 Feb 2024 08:47:47 GMT
- Title: Optimization-Free Test-Time Adaptation for Cross-Person Activity
Recognition
- Authors: Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, Hongxin
Wei
- Abstract summary: Test-Time Adaptation aims to utilize the test stream to adjust predictions in real-time inference.
High computational cost makes it intractable to run on resource-constrained edge devices.
We propose an Optimization-Free Test-Time Adaptation framework for sensor-based HAR.
- Score: 30.350005654271868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Activity Recognition (HAR) models often suffer from performance
degradation in real-world applications due to distribution shifts in activity
patterns across individuals. Test-Time Adaptation (TTA) is an emerging learning
paradigm that aims to utilize the test stream to adjust predictions in
real-time inference, which has not been explored in HAR before. However, the
high computational cost of optimization-based TTA algorithms makes it
intractable to run on resource-constrained edge devices. In this paper, we
propose an Optimization-Free Test-Time Adaptation (OFTTA) framework for
sensor-based HAR. OFTTA adjusts the feature extractor and linear classifier
simultaneously in an optimization-free manner. For the feature extractor, we
propose Exponential Decay Test-time Normalization (EDTN) to replace the
conventional batch normalization (CBN) layers. EDTN combines CBN and Test-time
batch Normalization (TBN) to extract reliable features against domain shifts
with TBN's influence decreasing exponentially in deeper layers. For the
classifier, we adjust the prediction by computing the distance between the
feature and the prototype, which is calculated from a maintained support set.
In addition, the support set is updated based on pseudo labels, which benefit
from the reliable features extracted by EDTN. Extensive experiments on
three public cross-person HAR datasets and two different TTA settings
demonstrate that OFTTA outperforms the state-of-the-art TTA approaches in both
classification performance and computational efficiency. Finally, we verify the
superiority of our proposed OFTTA on edge devices, indicating possible
deployment in real applications. Our code is available at
https://github.com/Claydon-Wang/OFTTA.
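To make the two optimization-free components above concrete, here is a minimal PyTorch sketch. The depth-dependent mixing weight (alpha ** layer_idx), the Euclidean distance to prototypes, and the FIFO support-set eviction are assumptions for illustration; the exact rules are in the paper and the linked repository.

```python
import torch
import torch.nn as nn


class EDTN1d(nn.Module):
    """Sketch of Exponential Decay Test-time Normalization for (N, C, L) input.

    Mixes frozen training statistics (CBN) with current test-batch statistics
    (TBN); the TBN weight shrinks exponentially with layer depth, so deeper
    layers trust the training statistics more.
    """

    def __init__(self, bn: nn.BatchNorm1d, layer_idx: int, alpha: float = 0.5):
        super().__init__()
        self.bn = bn.eval()
        self.lam = alpha ** layer_idx  # assumed decay schedule, not the paper's exact one

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu_tbn = x.mean(dim=(0, 2))                   # test-batch statistics
        var_tbn = x.var(dim=(0, 2), unbiased=False)
        mu = (1 - self.lam) * self.bn.running_mean + self.lam * mu_tbn
        var = (1 - self.lam) * self.bn.running_var + self.lam * var_tbn
        shape = (1, -1, 1)
        x_hat = (x - mu.view(shape)) / (var.view(shape) + self.bn.eps).sqrt()
        return x_hat * self.bn.weight.view(shape) + self.bn.bias.view(shape)


class PrototypeClassifier:
    """Optimization-free classifier: nearest class prototype over a
    pseudo-labeled support set (FIFO eviction is an assumption here)."""

    def __init__(self, num_classes: int, feat_dim: int, capacity: int = 64):
        self.support = [torch.zeros(0, feat_dim) for _ in range(num_classes)]
        self.capacity = capacity

    def predict(self, feats: torch.Tensor) -> torch.Tensor:
        protos = torch.stack([
            s.mean(dim=0) if len(s) else torch.zeros(feats.size(1))
            for s in self.support
        ])
        return torch.cdist(feats, protos).argmin(dim=1)   # nearest prototype

    def update(self, feats: torch.Tensor, pseudo_labels: torch.Tensor) -> None:
        for f, y in zip(feats, pseudo_labels.tolist()):
            merged = torch.cat([self.support[y], f[None]])
            self.support[y] = merged[-self.capacity:]     # keep newest entries
```

In use, each BatchNorm1d layer of the feature extractor would be wrapped as EDTN1d(bn, layer_idx), and predictions would come from PrototypeClassifier.predict on the extracted features, with update called on pseudo-labeled samples.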
Related papers
- Test-Time Low Rank Adaptation via Confidence Maximization for Zero-Shot Generalization of Vision-Language Models [4.655740975414312]
This paper introduces Test-Time Low-rank adaptation (TTL) as an alternative to prompt tuning for zero-shot generalization of large-scale vision-language models (VLMs).
TTL offers a test-time-efficient adaptation approach that updates the attention weights of the transformer by maximizing prediction confidence.
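A minimal sketch of the idea, assuming a LoRA-style low-rank residual on a frozen linear (e.g., attention projection) layer and entropy minimization as the confidence-maximization objective; TTL's actual weight placement and loss may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank residual (A @ B)."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(base.out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + F.linear(x, self.A @ self.B)


def confidence_step(model: nn.Module, x: torch.Tensor,
                    optimizer: torch.optim.Optimizer) -> None:
    """One test-time update: maximize confidence by minimizing prediction
    entropy, touching only the low-rank parameters held by the optimizer."""
    probs = model(x).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
```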
arXiv Detail & Related papers (2024-07-22T17:59:19Z)
- Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
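A hedged sketch of the active ingredient: under a labeling budget, route the most uncertain test samples to an annotator while adapting on the rest. Predictive entropy is used here as a stand-in acquisition score; ATTA's actual selection rule and analysis are in the paper.

```python
import torch


def select_for_annotation(logits: torch.Tensor, budget: int) -> torch.Tensor:
    """Return indices of the `budget` most uncertain samples in a test batch
    (highest predictive entropy), to be sent for labeling."""
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    return entropy.topk(budget).indices
```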
arXiv Detail & Related papers (2024-04-07T22:31:34Z)
- Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning [73.75282761503581]
We propose DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data.
Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13%.
arXiv Detail & Related papers (2023-08-11T09:36:31Z)
- Active Finetuning: Exploiting Annotation Budget in the Pretraining-Finetuning Paradigm [132.9949120482274]
This paper focuses on the selection of samples for annotation in the pretraining-finetuning paradigm.
We propose a novel method called ActiveFT for the active finetuning task, which selects a subset of data distributed similarly to the entire unlabeled pool.
Extensive experiments on image classification and semantic segmentation show that ActiveFT outperforms baselines in both performance and efficiency.
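ActiveFT formulates selection as a continuous optimization; the sketch below substitutes a simple k-means proxy that picks samples whose features cover the unlabeled pool, just to make the "distribute similarly" goal concrete. It is a stand-in, not the paper's algorithm.

```python
import torch


def kmeans_select(feats: torch.Tensor, k: int, iters: int = 20) -> torch.Tensor:
    """Pick k sample indices whose features roughly match the pool's
    distribution: run k-means, then take the nearest real sample to each
    centroid. (A proxy for ActiveFT's gradient-based selection.)"""
    centroids = feats[torch.randperm(feats.size(0))[:k]].clone()
    for _ in range(iters):
        assign = torch.cdist(feats, centroids).argmin(dim=1)  # nearest centroid
        for j in range(k):
            members = feats[assign == j]
            if len(members):
                centroids[j] = members.mean(dim=0)
    return torch.cdist(centroids, feats).argmin(dim=1)
```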
arXiv Detail & Related papers (2023-03-25T07:17:03Z)
- TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation [28.63285970880039]
Recent test-time adaptation methods heavily rely on transductive batch normalization (TBN).
Adopting TBN, which employs test-batch statistics, mitigates the performance degradation caused by domain shift.
We present a new test-time normalization (TTN) method that interpolates the statistics by adjusting the importance between CBN and TBN according to the domain-shift sensitivity of each BN layer.
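In code, the interpolation might look like the sketch below, with a per-layer weight `a` in [0, 1] taken from TTN's post-training sensitivity analysis (not reproduced here; `a` is treated as given, and TTN's variance correction term is omitted).

```python
import torch
import torch.nn as nn


class InterpolatedBN2d(nn.Module):
    """Normalize with a convex combination of test-batch (TBN) and running
    (CBN) statistics; `a` weights TBN per layer, as TTN proposes."""

    def __init__(self, bn: nn.BatchNorm2d, a: float):
        super().__init__()
        self.bn, self.a = bn.eval(), a

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu_t = x.mean(dim=(0, 2, 3))                  # test-batch statistics
        var_t = x.var(dim=(0, 2, 3), unbiased=False)
        mu = self.a * mu_t + (1 - self.a) * self.bn.running_mean
        var = self.a * var_t + (1 - self.a) * self.bn.running_var
        shape = (1, -1, 1, 1)
        x_hat = (x - mu.view(shape)) / (var.view(shape) + self.bn.eps).sqrt()
        return x_hat * self.bn.weight.view(shape) + self.bn.bias.view(shape)
```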
arXiv Detail & Related papers (2023-02-10T10:25:29Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner.
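A minimal sketch of the PBRS idea: one fixed-size reservoir per predicted class, so the stored memory stays class-balanced even when the incoming stream is temporally correlated. The eviction policy here is plain reservoir sampling within each class; the paper's exact rule may differ.

```python
import random


class PBRS:
    """Prediction-balanced reservoir: per-predicted-class storage that
    approximates an i.i.d., class-balanced memory from a non-i.i.d. stream."""

    def __init__(self, num_classes: int, capacity: int):
        self.per_class = capacity // num_classes
        self.mem = {c: [] for c in range(num_classes)}
        self.seen = {c: 0 for c in range(num_classes)}

    def add(self, sample, pred_class: int) -> None:
        self.seen[pred_class] += 1
        slot = self.mem[pred_class]
        if len(slot) < self.per_class:
            slot.append(sample)
        else:
            j = random.randrange(self.seen[pred_class])  # reservoir rule
            if j < self.per_class:
                slot[j] = sample
```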
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- Sample-Efficient Optimisation with Probabilistic Transformer Surrogates [66.98962321504085]
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation.
We observe two drawbacks stemming from their training procedure and loss definition, hindering their direct deployment as proxies in black-box optimisation.
We introduce two components: 1) a BO-tailored training prior supporting non-uniformly distributed points, and 2) a novel approximate posterior regulariser trading-off accuracy and input sensitivity to filter favourable stationary points for improved predictive performance.
arXiv Detail & Related papers (2022-05-27T11:13:17Z)
- Test-time Batch Normalization [61.292862024903584]
Deep neural networks often suffer from the data distribution shift between training and testing.
We revisit the batch normalization (BN) in the training process and reveal two key insights benefiting test-time optimization.
We propose a novel test-time BN layer design, GpreBN, which is optimized during testing by minimizing an entropy loss.
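The entropy-minimization loop might look like the following TENT-style sketch, updating only the BN affine parameters; GpreBN's specific normalization form (which preserves gradient flow through the statistics) is not reproduced here.

```python
import torch
import torch.nn as nn


def bn_affine_params(model: nn.Module):
    """Yield only the BN scale/shift parameters, the usual TTA update target."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            for p in (m.weight, m.bias):
                if p is not None:
                    yield p


def entropy_step(model: nn.Module, x: torch.Tensor,
                 optimizer: torch.optim.Optimizer) -> torch.Tensor:
    """One test-time update minimizing prediction entropy on the batch."""
    logits = model(x)
    probs = logits.softmax(dim=-1)
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()
```

A typical setup would be optimizer = torch.optim.SGD(bn_affine_params(model), lr=1e-3), so only the normalization layers adapt while all other weights stay frozen.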
arXiv Detail & Related papers (2022-05-20T14:33:39Z)
- Adaptive Test-Time Augmentation for Low-Power CPU [3.5473686344971416]
Test-Time Augmentation (TTA) techniques aim to alleviate such common side effects at inference time.
We propose AdapTTA, an adaptive implementation of TTA that controls the number of feed-forward passes dynamically.
Experimental results on state-of-the-art ConvNets for image classification deployed on a commercial ARM Cortex-A CPU demonstrate that AdapTTA achieves remarkable latency savings.
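The control loop could be as simple as the sketch below: keep averaging predictions over augmented views and stop as soon as the batch is confident, so easy inputs cost a single forward pass. The confidence test and the `augment` callable are placeholders, not the paper's exact policy.

```python
import torch


@torch.no_grad()
def adaptive_tta(model, x, augment, max_passes: int = 8,
                 conf_thresh: float = 0.9) -> torch.Tensor:
    """Average softmax predictions over augmented views, stopping early once
    every sample's top-class probability exceeds `conf_thresh`."""
    total = model(x).softmax(dim=-1)
    passes = 1
    while passes < max_passes:
        if (total / passes).max(dim=-1).values.min() >= conf_thresh:
            break  # whole batch already confident: skip remaining views
        total += model(augment(x)).softmax(dim=-1)
        passes += 1
    return total / passes
```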
arXiv Detail & Related papers (2021-05-13T10:50:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.