Overtuning in Hyperparameter Optimization
- URL: http://arxiv.org/abs/2506.19540v1
- Date: Tue, 24 Jun 2025 11:49:48 GMT
- Title: Overtuning in Hyperparameter Optimization
- Authors: Lennart Schneider, Bernd Bischl, Matthias Feurer
- Abstract summary: We provide a formal definition of overtuning and distinguish it from related concepts such as meta-overfitting. We conduct a large-scale reanalysis of HPO benchmark data to assess the prevalence and severity of overtuning. Our results show that overtuning is more common than previously assumed, typically mild but occasionally severe.
- Score: 11.91482877988017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperparameter optimization (HPO) aims to identify an optimal hyperparameter configuration (HPC) such that the resulting model generalizes well to unseen data. As the expected generalization error cannot be optimized directly, it is estimated with a resampling strategy, such as holdout or cross-validation. This approach implicitly assumes that minimizing the validation error leads to improved generalization. However, since validation error estimates are inherently stochastic and depend on the resampling strategy, a natural question arises: Can excessive optimization of the validation error lead to overfitting at the HPO level, akin to overfitting in model training based on empirical risk minimization? In this paper, we investigate this phenomenon, which we term overtuning, a form of overfitting specific to HPO. Despite its practical relevance, overtuning has received limited attention in the HPO and AutoML literature. We provide a formal definition of overtuning and distinguish it from related concepts such as meta-overfitting. We then conduct a large-scale reanalysis of HPO benchmark data to assess the prevalence and severity of overtuning. Our results show that overtuning is more common than previously assumed, typically mild but occasionally severe. In approximately 10% of cases, overtuning leads to the selection of a seemingly optimal HPC with worse generalization error than the default or first configuration tried. We further analyze how factors such as performance metric, resampling strategy, dataset size, learning algorithm, and HPO method affect overtuning and discuss mitigation strategies. Our results highlight the need to raise awareness of overtuning, particularly in the small-data regime, indicating that further mitigation strategies should be studied.
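To make the phenomenon concrete, the following is a minimal sketch of how overtuning can be observed, not the paper's experimental protocol: a random-search HPO loop optimizes holdout validation error on a small synthetic dataset while the test error of each incumbent is recorded on the side. If a later incumbent has lower validation error but higher test error than an earlier incumbent (or than the first configuration tried), that gap is overtuning in the sense defined above. The dataset, learner, and search space are purely illustrative assumptions.

```python
# Minimal sketch (not the paper's protocol): random-search HPO against a
# holdout validation set, tracking the *test* error of each incumbent while
# only the *validation* error is optimized.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=0)

best_val = np.inf
incumbent_test_errors = []
for i in range(50):  # 50 random configurations; the first plays the role of a default
    hpc = {"max_depth": int(rng.integers(1, 20)),
           "min_samples_leaf": int(rng.integers(1, 20))}
    model = DecisionTreeClassifier(**hpc, random_state=0).fit(X_tr, y_tr)
    val_err = 1 - model.score(X_val, y_val)   # noisy holdout estimate being optimized
    if val_err < best_val:                    # new incumbent w.r.t. validation error
        best_val = val_err
        incumbent_test_errors.append(1 - model.score(X_test, y_test))

# Overtuning shows up when the final incumbent generalizes worse than an earlier one.
print("test error of first incumbent:", incumbent_test_errors[0])
print("test error of final incumbent:", incumbent_test_errors[-1])
print("best test error among incumbents:", min(incumbent_test_errors))
```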
Related papers
- Grouped Sequential Optimization Strategy -- the Application of Hyperparameter Importance Assessment in Deep Learning [1.7778609937758323]
We implement a novel HPO strategy called 'Sequential Grouping'. Our experiments, validated across six additional image classification datasets, demonstrate that incorporating hyperparameter importance assessment (HIA) can significantly accelerate HPO without compromising model performance. (A generic sketch of grouped sequential tuning appears after this list.)
arXiv Detail & Related papers (2025-03-07T03:01:00Z) - Exploring Variability in Fine-Tuned Models for Text Classification with DistilBERT [0.9249657468385781]
This study evaluates fine-tuning strategies for text classification using the DistilBERT model. We examine the influence of hyperparameters such as learning rate, batch size, and number of epochs on accuracy, F1-score, and loss.
arXiv Detail & Related papers (2024-12-31T03:16:15Z) - Uncertainty-Penalized Direct Preference Optimization [52.387088396044206]
We develop a pessimistic framework for DPO by introducing preference uncertainty penalization schemes.
The penalization serves as a correction to the loss which attenuates the loss gradient for uncertain samples.
We show improved overall performance compared to vanilla DPO, as well as better completions on prompts from high-uncertainty chosen/rejected responses.
arXiv Detail & Related papers (2024-10-26T14:24:37Z) - Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
arXiv Detail & Related papers (2024-05-28T20:43:53Z) - Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It [20.312864152544954]
We show that naively applying an unbiased estimator of the generalization performance as a surrogate objective in HPO can cause an unexpected failure.
We propose simple and computationally efficient corrections to the typical HPO procedure to deal with the aforementioned issues simultaneously.
arXiv Detail & Related papers (2024-04-23T14:34:16Z) - A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z) - Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning [24.765911297156855]
FISH-DIP is a sample-aware dynamic sparse finetuning strategy that selectively focuses on a fraction of parameters.
We demonstrate that FISH-DIP can smoothly optimize the model in low-resource settings, offering up to 40% performance improvements.
arXiv Detail & Related papers (2023-11-07T06:19:37Z) - Model-Based Reparameterization Policy Gradient Methods: Theory and
Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
arXiv Detail & Related papers (2023-10-30T18:43:21Z) - Fine-Tuning Language Models with Advantage-Induced Policy Alignment [80.96507425217472]
We propose a novel algorithm for aligning large language models to human preferences.
We show that it consistently outperforms PPO in language tasks by a large margin.
We also provide a theoretical justification supporting the design of our loss function.
arXiv Detail & Related papers (2023-06-04T01:59:40Z) - Estimate-Then-Optimize versus Integrated-Estimation-Optimization versus Sample Average Approximation: A Stochastic Dominance Perspective [21.945745750737952]
We show that a reverse behavior appears when the model class is well-specified and there is sufficient data. We also demonstrate how standard sample average approximation (SAA) performs the worst when the model class is well-specified in terms of regret.
arXiv Detail & Related papers (2023-04-13T21:54:53Z)
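Referring back to the 'Sequential Grouping' entry above: the general pattern of tuning hyperparameters group by group, in an assumed order of importance, and freezing each group at its best setting before tuning the next, can be sketched as follows. This is a generic illustration, not the paper's HIA-based algorithm; the groups, grids, and placeholder objective are assumptions for demonstration only.

```python
# Generic sketch of grouped sequential hyperparameter tuning (not the paper's
# HIA-based 'Sequential Grouping' method): tune one group at a time, in assumed
# order of importance, freezing each group before moving on.
from itertools import product

def evaluate(config):
    # Placeholder objective (lower is better); in practice this would be a
    # train/validate run of the model under `config`.
    return ((config["lr"] - 1e-2) ** 2
            + 0.01 * abs(config["weight_decay"] - 1e-4)
            + 0.1 * abs(config["batch_size"] - 64) / 64)

# Hyperparameter groups in assumed decreasing order of importance.
groups = [
    {"lr": [1e-3, 1e-2, 1e-1], "weight_decay": [0.0, 1e-4, 1e-2]},
    {"batch_size": [32, 64, 128]},
]

# Start from a default configuration and refine it one group at a time.
config = {"lr": 1e-3, "batch_size": 32, "weight_decay": 0.0}
for group in groups:
    names, grids = zip(*group.items())
    best_score, best_values = float("inf"), None
    for values in product(*grids):                 # grid search within the group
        candidate = {**config, **dict(zip(names, values))}
        score = evaluate(candidate)
        if score < best_score:
            best_score, best_values = score, dict(zip(names, values))
    config.update(best_values)                     # freeze this group at its best setting

print("tuned configuration:", config)
```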