EMO-TTA: Improving Test-Time Adaptation of Audio-Language Models for Speech Emotion Recognition
- URL: http://arxiv.org/abs/2509.25495v1
- Date: Mon, 29 Sep 2025 20:52:01 GMT
- Title: EMO-TTA: Improving Test-Time Adaptation of Audio-Language Models for Speech Emotion Recognition
- Authors: Jiacheng Shi, Hongfei Du, Y. Alicia Hong, Ye Gao
- Abstract summary: Speech emotion recognition (SER) with audio-language models (ALMs) remains vulnerable to distribution shifts at test time. We propose Emo-TTA, a lightweight, training-free adaptation framework that incrementally updates class-conditional statistics. Emo-TTA operates on individual test samples without modifying model weights.
- Score: 3.1649536621597973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Speech emotion recognition (SER) with audio-language models (ALMs) remains vulnerable to distribution shifts at test time, leading to performance degradation in out-of-domain scenarios. Test-time adaptation (TTA) provides a promising solution but often relies on gradient-based updates or prompt tuning, limiting flexibility and practicality. We propose Emo-TTA, a lightweight, training-free adaptation framework that incrementally updates class-conditional statistics via an Expectation-Maximization procedure for explicit test-time distribution estimation, using ALM predictions as priors. Emo-TTA operates on individual test samples without modifying model weights. Experiments on six out-of-domain SER benchmarks show consistent accuracy improvements over prior TTA baselines, demonstrating the effectiveness of statistical adaptation in aligning model predictions with evolving test distributions.
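The abstract describes the procedure only at a high level. Below is a minimal sketch of one plausible reading, assuming diagonal-Gaussian class-conditional statistics over audio embeddings and a streaming EM update that uses the ALM's zero-shot probabilities as the prior; all names, the covariance form, and the update rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def log_gaussian(x, mu, var):
    """Diagonal-Gaussian log-density of embedding x for one class."""
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))

class EmoTTASketch:
    """Training-free, per-sample EM adaptation (hypothetical reading).

    Keeps class-conditional means/variances over audio embeddings and
    refines them one test sample at a time; model weights are never touched.
    """

    def __init__(self, num_classes, dim):
        self.mu = np.zeros((num_classes, dim))   # per-class means
        self.var = np.ones((num_classes, dim))   # per-class variances
        self.counts = np.ones(num_classes)       # effective sample counts

    def step(self, emb, alm_prior):
        # E-step: posterior responsibilities from the ALM prior times the
        # Gaussian likelihood of the current embedding under each class.
        log_post = np.log(alm_prior + 1e-12) + np.array(
            [log_gaussian(emb, m, v) for m, v in zip(self.mu, self.var)])
        resp = np.exp(log_post - log_post.max())
        resp /= resp.sum()

        # M-step: incremental update of the class statistics, weighting
        # this sample by its responsibility.
        for k, r in enumerate(resp):
            self.counts[k] += r
            lr = r / self.counts[k]
            delta = emb - self.mu[k]
            self.mu[k] += lr * delta
            self.var[k] = np.maximum((1 - lr) * self.var[k] + lr * delta ** 2, 1e-4)

        return int(resp.argmax())  # adapted emotion prediction
```

Since only means, variances, and counts are stored, each test sample costs one forward pass plus O(K·D) arithmetic, consistent with the "lightweight, training-free" claim.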
Related papers
- Free on the Fly: Enhancing Flexibility in Test-Time Adaptation with Online EM [13.924553294859315]
This study proposes FreeTTA, a training-free and universally available method that makes no assumptions.
arXiv Detail & Related papers (2025-07-09T16:03:07Z)
- Realistic Test-Time Adaptation of Vision-Language Models [23.972884634610413]
Vision-Language Models (VLMs) have been widely leveraged to improve predictive performance. Previous works on transductive or test-time adaptation (TTA) often make strong assumptions about the data distribution. Our work challenges these favorable deployment scenarios and introduces a more realistic evaluation framework.
arXiv Detail & Related papers (2025-01-07T12:17:25Z)
- Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z)
- Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on a quantized 8-bit ViT, outperforms gradient-based TENT on a full-precision 32-bit ViT, and achieves up to a 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z)
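The FOA entry above claims adaptation with only forward passes. A common way to realize this is a derivative-free search over a small set of adaptable parameters; the sketch below uses plain random search over a hypothetical prompt vector with prediction entropy as the fitness, whereas the actual method's optimizer and objective are richer (the `model_fn` interface, population size, and step scale are all assumptions).

```python
import numpy as np

def mean_entropy(probs):
    """Average prediction entropy of a batch of class probabilities."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1).mean()

def forward_only_adapt(model_fn, x_batch, prompt_dim=16,
                       pop=8, iters=10, sigma=0.1, seed=0):
    """Gradient-free test-time adaptation sketch.

    model_fn(x, prompt) -> class probabilities is an assumed frozen model
    that accepts an extra prompt vector. Candidate prompts are sampled
    around the current best and scored by entropy; no backward pass is
    ever taken, so this also works on quantized models.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(prompt_dim)
    best_fit = mean_entropy(model_fn(x_batch, best))
    for _ in range(iters):
        for cand in best + sigma * rng.standard_normal((pop, prompt_dim)):
            fit = mean_entropy(model_fn(x_batch, cand))
            if fit < best_fit:
                best, best_fit = cand, fit
    return best
```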
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [65.21599711087538]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample. Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications. We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method, which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
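EATA's key ingredient, per the summary above, is deciding which test samples are worth adapting on. A minimal sketch of such a criterion follows: a sample is used only if its prediction entropy is low (reliable) and its output distribution differs from a moving average of recently selected outputs (non-redundant). The thresholds below (0.4·ln K, cosine 0.95) are illustrative stand-ins, not the paper's values.

```python
import numpy as np

def eata_style_select(probs, ema_probs, num_classes,
                      ent_margin=0.4, sim_eps=0.95):
    """Return True if this sample should drive an adaptation update."""
    # Reliability: entropy well below the maximum possible ln(K).
    ent = -np.sum(probs * np.log(probs + 1e-12))
    reliable = ent < ent_margin * np.log(num_classes)

    # Non-redundancy: output not too similar to the running average
    # of previously selected outputs (cosine similarity).
    cos = probs @ ema_probs / (
        np.linalg.norm(probs) * np.linalg.norm(ema_probs) + 1e-12)
    return bool(reliable and cos < sim_eps)
```

Skipping backpropagation for filtered samples is what removes the per-sample optimization cost the summary complains about.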
- SGEM: Test-Time Adaptation for Automatic Speech Recognition via Sequential-Level Generalized Entropy Minimization [30.61075178799518]
A test-time adaptation (TTA) method has recently been proposed to adapt the pre-trained ASR model on unlabeled test instances without source data.
We propose a novel TTA framework, dubbed SGEM, for general ASR models.
SGEM achieves state-of-the-art performance for three mainstream ASR models under various domain shifts.
arXiv Detail & Related papers (2023-06-03T02:27:08Z)
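The SGEM summary mentions sequential-level generalized entropy minimization. A sketch of the core update is below, using a Rényi-style generalized entropy averaged over per-frame output distributions; the paper additionally exploits beam-search candidates, which is omitted here, and the `model(x) -> [time, vocab] logits` interface and `alpha` value are assumptions.

```python
import torch

def generalized_entropy(probs, alpha=0.33):
    """Renyi-style entropy, averaged over time steps."""
    return ((probs ** alpha).sum(dim=-1).log() / (1.0 - alpha)).mean()

def sgem_style_step(model, x, optimizer, alpha=0.33):
    """One TTA step: sharpen the ASR output distributions.

    Minimizing the generalized entropy of the per-frame distributions
    nudges the (source-free) model toward confident predictions on the
    unlabeled test utterance.
    """
    logits = model(x)                       # assumed shape [time, vocab]
    probs = torch.softmax(logits, dim=-1)
    loss = generalized_entropy(probs, alpha)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```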
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [117.72709110877939]
Test-time adaptation (TTA) has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions. We categorize TTA into several distinct groups based on the form of test data, namely, test-time domain adaptation, test-time batch adaptation, and online test-time adaptation.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in prevalent adaptation methodologies such as test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
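The first defect noted in the DELTA summary is that test-time BN estimates its statistics entirely from the current batch. One standard remedy, sketched here, is to blend batch statistics into running estimates with an exponential moving average so that a small or skewed batch cannot dominate; the momentum value is illustrative, and DELTA pairs this with dynamic sample reweighting against class bias, omitted here.

```python
import numpy as np

class TestTimeRenormSketch:
    """Normalization whose statistics are not hostage to one batch."""

    def __init__(self, dim, momentum=0.05):
        self.mean = np.zeros(dim)   # running mean over the test stream
        self.var = np.ones(dim)     # running variance over the test stream
        self.momentum = momentum

    def __call__(self, x):          # x: [batch, dim]
        # Blend current-batch statistics into the running estimates
        # instead of replacing them outright.
        self.mean = (1 - self.momentum) * self.mean + self.momentum * x.mean(0)
        self.var = (1 - self.momentum) * self.var + self.momentum * x.var(0)
        return (x - self.mean) / np.sqrt(self.var + 1e-5)
```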
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)