Stability and Accuracy Trade-offs in Statistical Estimation
- URL: http://arxiv.org/abs/2601.11701v1
- Date: Fri, 16 Jan 2026 18:48:03 GMT
- Title: Stability and Accuracy Trade-offs in Statistical Estimation
- Authors: Abhinav Chakraborty, Yuetian Luo, Rina Foygel Barber
- Abstract summary: We develop optimal stable estimators for four canonical estimation problems. We formalize the intuition that average-case stability imposes a qualitatively weaker restriction than worst-case stability.
- Score: 7.213400161126317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic stability is a central concept in statistics and learning theory that measures how sensitive an algorithm's output is to small changes in the training data. Stability plays a crucial role in understanding generalization, robustness, and replicability, and a variety of stability notions have been proposed in different learning settings. However, while stability entails desirable properties, it is typically not sufficient on its own for statistical learning; indeed, it may be at odds with accuracy, since an algorithm that always outputs a constant function is perfectly stable but statistically meaningless. Thus, it is essential to understand the potential statistical cost of stability. In this work, we address this question by adopting a statistical decision-theoretic perspective, treating stability as a constraint in estimation. Focusing on two representative notions, worst-case stability and average-case stability, we first establish general lower bounds on the achievable estimation accuracy under each type of stability constraint. We then develop optimal stable estimators for four canonical estimation problems, including several mean estimation and regression settings. Together, these results characterize the optimal trade-offs between stability and accuracy across these tasks. Our findings formalize the intuition that average-case stability imposes a qualitatively weaker restriction than worst-case stability, and they further reveal that the gap between these two can vary substantially across different estimation problems.
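To ground the two notions named in the abstract, here is a toy leave-one-out experiment contrasting a useful estimator with a perfectly stable but useless one. The deletion-based sensitivity proxy below is our own illustration, not the paper's formal definitions:

```python
# Toy sketch (not from the paper): leave-one-out sensitivity of two
# estimators, as a rough proxy for worst-case and average-case stability.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(loc=1.0, scale=1.0, size=n)  # true mean = 1.0

def loo_deltas(estimator, data):
    """Change in the estimate when each single point is removed."""
    full = estimator(data)
    return np.array([abs(estimator(np.delete(data, i)) - full)
                     for i in range(len(data))])

sample_mean = lambda d: d.mean()
constant = lambda d: 0.0  # perfectly stable, statistically meaningless

for name, est in [("sample mean", sample_mean), ("constant 0", constant)]:
    d = loo_deltas(est, x)
    print(name,
          "| worst-case:", round(float(d.max()), 5),
          "| average-case:", round(float(d.mean()), 5),
          "| error:", round(abs(est(x) - 1.0), 4))
```

On typical draws, the sample mean shows O(1/n) sensitivity with small error, while the constant estimator has zero sensitivity and error equal to the true mean, which is exactly the tension the paper quantifies.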
Related papers
- Not All Preferences Are Created Equal: Stability-Aware and Gradient-Efficient Alignment for Reasoning Models [52.48582333951919]
We propose a dynamic framework designed to enhance alignment reliability by maximizing the signal-to-noise ratio of policy updates. SAGE (Stability-Aware Gradient Efficiency) integrates a coarse-grained curriculum mechanism that refreshes candidate pools based on model competence. Experiments on multiple mathematical reasoning benchmarks demonstrate that SAGE significantly accelerates convergence and outperforms static baselines.
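A minimal sketch of the SNR idea, assuming per-sample gradients are available as vectors; the function name, threshold, and filtering rule are illustrative, not SAGE's actual mechanism:

```python
# Hypothetical sketch: keep a batch's update only if per-sample
# gradients agree (high signal-to-noise ratio).
import numpy as np

def update_snr(per_sample_grads):
    """SNR of a batch of per-sample gradient vectors: norm of the mean
    gradient over the mean deviation from it."""
    g = np.asarray(per_sample_grads)          # shape (batch, dim)
    mean_g = g.mean(axis=0)
    noise = np.linalg.norm(g - mean_g, axis=1).mean()
    return np.linalg.norm(mean_g) / (noise + 1e-12)

rng = np.random.default_rng(1)
aligned = rng.normal(1.0, 0.1, size=(32, 8))      # gradients that agree
conflicting = rng.normal(0.0, 1.0, size=(32, 8))  # preference noise
for name, g in [("aligned", aligned), ("conflicting", conflicting)]:
    snr = update_snr(g)
    print(name, "SNR:", round(float(snr), 3),
          "-> apply update" if snr > 1.0 else "-> skip")
```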
arXiv Detail & Related papers (2026-02-01T12:56:10Z) - The Relative Instability of Model Comparison with Cross-validation [65.90853456199493]
Cross-validation can be used to provide a confidence interval for the test error of a stable machine learning algorithm. Relative stability cannot easily be derived from existing stability results, even for simple algorithms. We empirically confirm the invalidity of CV confidence intervals for the test error difference when either soft-thresholding or the Lasso is used.
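For context, here is a minimal sketch of the naive CV confidence interval for a test-error difference, the construction whose validity the paper challenges when an unstable algorithm such as the Lasso is involved; models and fold counts are arbitrary choices:

```python
# Naive per-fold normal interval for the test-error difference between
# two algorithms under 10-fold CV. Illustrative only.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = X[:, 0] + rng.normal(size=200)

diffs = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    preds = {}
    for name, model in [("ols", LinearRegression()), ("lasso", Lasso(alpha=0.1))]:
        model.fit(X[train], y[train])
        preds[name] = model.predict(X[test])
    # squared-error difference, averaged within the fold
    diffs.append(np.mean((y[test] - preds["ols"]) ** 2
                         - (y[test] - preds["lasso"]) ** 2))

diffs = np.array(diffs)
half = 1.96 * diffs.std(ddof=1) / np.sqrt(len(diffs))
print(f"naive 95% CI for error difference: {diffs.mean():.4f} +/- {half:.4f}")
```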
arXiv Detail & Related papers (2025-08-06T12:54:56Z) - Towards Robust Stability Prediction in Smart Grids: GAN-based Approach under Data Constraints and Adversarial Challenges [53.2306792009435]
This paper introduces a novel framework for detecting instability in smart grids using only stable data. It achieves up to 98.1% accuracy in predicting grid stability and 98.9% in detecting adversarial attacks. Implemented on a single-board computer, it enables real-time decision-making with an average response time of under 7 ms.
arXiv Detail & Related papers (2025-01-27T20:48:25Z) - Probabilistic Modeling of Disparity Uncertainty for Robust and Efficient Stereo Matching [61.73532883992135]
We propose a new uncertainty-aware stereo matching framework. We adopt Bayes risk as the measurement of uncertainty and use it to separately estimate data and model uncertainty.
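A hedged sketch of the Bayes-risk idea under an L1 loss, for a predicted distribution over candidate disparities; this is our illustration and does not reproduce how the framework separates data and model uncertainty:

```python
# Uncertainty as Bayes risk: the minimal expected loss achievable by any
# point estimate under the predicted disparity distribution.
import numpy as np

def bayes_risk_uncertainty(probs, disparities):
    """For L1 loss, evaluate the expected loss of each candidate disparity
    and return the best point estimate together with its Bayes risk."""
    losses = np.abs(disparities[:, None] - disparities[None, :])  # |d_i - d_j|
    expected = losses @ probs        # E[|d_i - D|] for each candidate i
    best = np.argmin(expected)
    return disparities[best], expected[best]

disp = np.arange(0, 64, dtype=float)
peaked = np.exp(-0.5 * (disp - 20) ** 2); peaked /= peaked.sum()
flat = np.ones_like(disp) / disp.size
for name, p in [("peaked", peaked), ("flat", flat)]:
    est, risk = bayes_risk_uncertainty(p, disp)
    print(f"{name}: estimate={est:.0f}, Bayes risk={risk:.2f}")
```

A peaked distribution yields low Bayes risk (confident match), a flat one yields high risk (ambiguous match).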
arXiv Detail & Related papers (2024-12-24T23:28:20Z) - On the Selection Stability of Stability Selection and Its Applications [2.263635133348731]
This paper seeks to broaden the use of an established stability estimator to evaluate the overall stability of stability selection results. We calibrate key stability selection parameters, namely the decision-making threshold and the expected number of falsely selected variables. The convergence of stability values over successive sub-samples sheds light on the required number of sub-samples.
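For reference, a minimal stability-selection loop in the spirit of Meinshausen and Buhlmann, showing the selection threshold that the paper calibrates; the paper's stability estimator itself is not reproduced here:

```python
# Stability selection sketch: selection frequency of each feature across
# half-size subsamples, thresholded to pick stable features.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 200, 20
X = rng.normal(size=(n, p))
y = X[:, 0] + X[:, 1] + rng.normal(size=n)   # features 0 and 1 are signal

B, freq = 100, np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)  # half-size subsample
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    freq += (coef != 0)
freq /= B

threshold = 0.6   # the decision-making threshold the paper calibrates
print("selected features:", np.flatnonzero(freq >= threshold))
```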
arXiv Detail & Related papers (2024-11-14T00:02:54Z) - Stability Evaluation via Distributional Perturbation Analysis [28.379994938809133]
We propose a stability evaluation criterion based on distributional perturbations.
Our stability evaluation criterion can address both data corruptions and sub-population shifts.
Empirically, we validate the practical utility of our stability evaluation criterion across a host of real-world applications.
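An illustrative sketch, not the paper's criterion: probing a fitted model under a simple feature corruption and a sub-population reweighting, the two shift types named above; the model and perturbation sizes are arbitrary:

```python
# Evaluate a fitted model under two hand-rolled distributional perturbations.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500)
model = LinearRegression().fit(X, y)

def mse(Xe, ye, w=None):
    return float(np.average((ye - model.predict(Xe)) ** 2, weights=w))

corrupted = X + rng.normal(scale=0.5, size=X.shape)  # data corruption
weights = np.where(X[:, 0] > 1.0, 5.0, 1.0)          # sub-population shift
print("clean MSE:     ", round(mse(X, y), 3))
print("corrupted MSE: ", round(mse(corrupted, y), 3))
print("reweighted MSE:", round(mse(X, y, w=weights), 3))
```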
arXiv Detail & Related papers (2024-05-06T06:47:14Z) - Stable Update of Regression Trees [0.0]
We focus on the stability of an inherently explainable machine learning method, namely regression trees.
We propose a regularization method, where data points are weighted based on the uncertainty in the initial model.
Results show that the proposed update method improves stability while achieving similar or better predictive performance.
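A hedged sketch of the weighting idea: upweight points where the initial model is uncertain when refitting. Here uncertainty is proxied by variance across a small bootstrap ensemble, which is our assumption rather than necessarily the paper's construction:

```python
# Uncertainty-weighted refit of a regression tree on old + new data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
X_old = rng.uniform(-2, 2, size=(300, 1))
y_old = np.sin(3 * X_old[:, 0]) + rng.normal(0, 0.2, 300)
X_new = rng.uniform(-2, 2, size=(100, 1))
y_new = np.sin(3 * X_new[:, 0]) + rng.normal(0, 0.2, 100)

# bootstrap ensemble on old data to proxy the initial model's uncertainty
X_all = np.vstack([X_old, X_new])
preds = []
for b in range(20):
    idx = rng.choice(len(X_old), len(X_old), replace=True)
    t = DecisionTreeRegressor(max_depth=4, random_state=b).fit(X_old[idx], y_old[idx])
    preds.append(t.predict(X_all))
uncertainty = np.std(preds, axis=0)

y_all = np.concatenate([y_old, y_new])
weights = 1.0 + uncertainty / uncertainty.mean()  # upweight uncertain regions
updated = DecisionTreeRegressor(max_depth=4, random_state=0).fit(
    X_all, y_all, sample_weight=weights)
```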
arXiv Detail & Related papers (2024-02-21T09:41:56Z) - The Bayesian Stability Zoo [18.074002943658055]
We show that many definitions of stability found in the learning theory literature are equivalent to one another.
Within each family, we establish equivalences between various definitions, encompassing approximate differential privacy, pure differential privacy, replicability, global stability, perfect generalization, TV stability, mutual information stability, KL-divergence stability, and Rényi-divergence stability.
This work is a step towards a more systematic taxonomy of stability notions in learning theory, which can promote clarity and an improved understanding of an array of stability concepts that have emerged in recent years.
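To make concrete the kind of definitions being related, here are two of the listed notions in their standard form; the notation is ours, not quoted from the paper:

```latex
% For all neighboring datasets S, S' differing in a single example:
\text{TV stability:} \quad \mathrm{TV}\bigl(A(S),\, A(S')\bigr) \le \varepsilon
\qquad
\text{KL stability:} \quad \mathrm{KL}\bigl(A(S) \,\|\, A(S')\bigr) \le \varepsilon
```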
arXiv Detail & Related papers (2023-10-27T18:59:31Z) - Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms [61.59448949684493]
We provide a stability and generalization analysis of stochastic compositional gradient descent algorithms built from training examples.
We establish the uniform stability results for two popular compositional gradient descent algorithms, namely SCGD and SCSC.
We derive dimension-independent excess risk bounds for SCGD and SCSC by trading off their stability results and optimization errors.
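For reference, uniform stability in its standard form (notation ours); the compositional variant studied in the paper adapts a definition of this shape to two-level objectives:

```latex
% A is \gamma-uniformly stable if, for all datasets S, S' differing in one
% example and every test point z,
\sup_{z}\, \bigl|\, \ell(A(S), z) - \ell(A(S'), z) \,\bigr| \;\le\; \gamma
```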
arXiv Detail & Related papers (2023-07-07T02:40:09Z) - Minimax Optimal Estimation of Stability Under Distribution Shift [8.893526921869137]
We analyze the stability of a system under distribution shift.
The stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation.
Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost.
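One natural way to formalize the measure described above, with our notation and an arbitrary choice of KL divergence as the shift metric; the paper's exact definition may differ:

```latex
% Stability at degradation level t: the size of the smallest shift away
% from the data distribution P that pushes the risk past t.
\mathcal{S}(t) \;=\; \inf_{Q}\,\Bigl\{\, D_{\mathrm{KL}}(Q \,\|\, P) \;:\; \mathbb{E}_{Q}\bigl[\ell(\theta, Z)\bigr] \ge t \,\Bigr\}
```

Here t is the level of acceptable performance degradation; a large value of S(t) means only a large shift can cause unacceptable degradation.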
arXiv Detail & Related papers (2022-12-13T02:40:30Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
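A minimal sketch of the first step described above, learning unknown dynamics with a GP, using scikit-learn in place of whatever toolchain the paper uses; the controller synthesis itself is not reproduced:

```python
# Fit a GP to one-dimensional dynamics data; the posterior std is the
# ingredient a probabilistic stability margin would be built on.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, size=(40, 1))                  # sampled states
dx = (-0.5 * x[:, 0] + 0.1 * np.sin(3 * x[:, 0])      # unknown dynamics
      + rng.normal(0, 0.05, 40))                      # measurement noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=0.05 ** 2).fit(x, dx)
mean, std = gp.predict(np.array([[0.5]]), return_std=True)
print(f"f(0.5) ~ {mean[0]:.3f} +/- {2 * std[0]:.3f}")
```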
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Fine-Grained Analysis of Stability and Generalization for Stochastic
Gradient Descent [55.85456985750134]
We introduce a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates.
This yields generalization bounds depending on the behavior of the best model, and leads to the first-ever-known fast bounds in the low-noise setting.
To the best of our knowledge, this gives the first-ever-known stability and generalization bounds for SGD with even non-differentiable loss functions.
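For reference, on-average model stability in a standard form (notation ours): with S^(i) denoting the dataset S with its i-th example replaced by an independent copy,

```latex
\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\Bigl[\, \bigl\| A(S) - A(S^{(i)}) \bigr\|_2 \,\Bigr] \;\le\; \gamma ,
```

which bounds the sensitivity of the learned model on average over replacements rather than in the worst case.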
arXiv Detail & Related papers (2020-06-15T06:30:19Z)