Adaptive Learn-then-Test: Statistically Valid and Efficient Hyperparameter Selection
- URL: http://arxiv.org/abs/2409.15844v2
- Date: Fri, 31 Jan 2025 15:04:08 GMT
- Title: Adaptive Learn-then-Test: Statistically Valid and Efficient Hyperparameter Selection
- Authors: Matteo Zecchin, Sangwoo Park, Osvaldo Simeone
- Abstract summary: We introduce adaptive learn-then-test (aLTT) to provide finite-sample statistical guarantees on the population risk of AI models.
Unlike the existing learn-then-test (LTT) technique, aLTT implements sequential data-dependent multiple hypothesis testing (MHT) with early termination by leveraging e-processes.
- Abstract: We introduce adaptive learn-then-test (aLTT), an efficient hyperparameter selection procedure that provides finite-sample statistical guarantees on the population risk of AI models. Unlike the existing learn-then-test (LTT) technique, which relies on conventional p-value-based multiple hypothesis testing (MHT), aLTT implements sequential data-dependent MHT with early termination by leveraging e-processes. As a result, aLTT can reduce the number of testing rounds, making it particularly well-suited for scenarios in which testing is costly or presents safety risks. Apart from maintaining statistical validity, in applications such as online policy selection for offline reinforcement learning and prompt engineering, aLTT is shown to achieve the same performance as LTT while requiring only a fraction of the testing rounds.
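The abstract's core mechanism, testing a candidate hyperparameter with an e-process and stopping as soon as the evidence is decisive, can be illustrated with a minimal sketch. This is not the authors' aLTT algorithm (which handles data-dependent multiple hypothesis testing across many candidates); it is a single-hypothesis betting-style e-process under assumed bounded losses in [0, 1], a fixed betting fraction `lam`, and the null hypothesis H0: population risk >= alpha. The function name and parameters are illustrative, not from the paper.

```python
import random

def e_process_test(losses, alpha=0.1, delta=0.05, lam=0.5):
    """Sequentially test H0: E[loss] >= alpha with a betting e-process.

    Under H0 the wealth W_t is a nonnegative supermartingale, so Ville's
    inequality gives P(sup_t W_t >= 1/delta) <= delta. Rejecting H0 when
    W_t crosses 1/delta therefore certifies risk < alpha at level delta,
    and testing can terminate early at that round.
    Requires losses in [0, 1] and 0 <= lam <= 1/(1 - alpha).
    """
    wealth = 1.0
    for t, loss in enumerate(losses, start=1):
        # Bet on the loss falling below alpha; wealth grows when it does.
        wealth *= 1.0 + lam * (alpha - loss)
        if wealth >= 1.0 / delta:
            return True, t  # H0 rejected: risk certified below alpha
    return False, len(losses)  # insufficient evidence; no guarantee claimed

# Toy usage: a candidate whose true risk (0.02) is well below alpha = 0.1
# is certified after a fraction of the 2000 available testing rounds.
random.seed(0)
losses = [1.0 if random.random() < 0.02 else 0.0 for _ in range(2000)]
rejected, rounds = e_process_test(losses, alpha=0.1, delta=0.05)
```

Early termination is what makes this attractive when each testing round is costly: the wealth process typically crosses 1/delta long before the sample budget is exhausted, whereas a fixed-sample p-value test would consume every round regardless.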
Related papers
- Realistic Test-Time Adaptation of Vision-Language Models [23.972884634610413]
Vision-Language Models (VLMs) have been widely leveraged to improve predictive performance.
Previous works on transductive or test-time adaptation (TTA) often make strong assumptions about the data distribution.
Our work challenges these favorable deployment scenarios, and introduces a more realistic evaluation framework.
arXiv Detail & Related papers (2025-01-07T12:17:25Z)
- ETAGE: Enhanced Test Time Adaptation with Integrated Entropy and Gradient Norms for Robust Model Performance [18.055032898349438]
Test time adaptation (TTA) equips deep learning models to handle unseen test data that deviates from the training distribution.
We introduce ETAGE, a refined TTA method that integrates entropy minimization with gradient norms and PLPD.
Our method improves stability by excluding samples that combine high entropy with high gradient norms from adaptation.
arXiv Detail & Related papers (2024-09-14T01:25:52Z)
- Quantile Learn-Then-Test: Quantile-Based Risk Control for Hyperparameter Optimization [36.14499894307206]
This work introduces a variant of learn-then-test (LTT) that is designed to provide statistical guarantees on quantiles of a risk measure.
We illustrate the practical advantages of this approach by applying the proposed algorithm to a radio access scheduling problem.
arXiv Detail & Related papers (2024-07-24T15:30:12Z)
- Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z)
- Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning [73.75282761503581]
We propose DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data.
Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13%.
arXiv Detail & Related papers (2023-08-11T09:36:31Z)
- On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [117.72709110877939]
Test-time adaptation (TTA) has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
We categorize TTA into several distinct groups based on the form of test data, namely, test-time domain adaptation, test-time batch adaptation, and online test-time adaptation.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.