The Bayesian Stability Zoo
- URL: http://arxiv.org/abs/2310.18428v2
- Date: Tue, 5 Dec 2023 09:50:00 GMT
- Title: The Bayesian Stability Zoo
- Authors: Shay Moran, Hilla Schefler, Jonathan Shafer
- Abstract summary: We show that many definitions of stability found in the learning theory literature are equivalent to one another.
Within each family, we establish equivalences between various definitions, encompassing approximate differential privacy, pure differential privacy, replicability, global stability, perfect generalization, TV stability, mutual information stability, KL-divergence stability, and Rényi-divergence stability.
This work is a step towards a more systematic taxonomy of stability notions in learning theory, which can promote clarity and an improved understanding of an array of stability concepts that have emerged in recent years.
- Score: 18.074002943658055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that many definitions of stability found in the learning theory
literature are equivalent to one another. We distinguish between two families
of definitions of stability: distribution-dependent and
distribution-independent Bayesian stability. Within each family, we establish
equivalences between various definitions, encompassing approximate differential
privacy, pure differential privacy, replicability, global stability, perfect
generalization, TV stability, mutual information stability, KL-divergence
stability, and Rényi-divergence stability. Along the way, we prove boosting
results that enable the amplification of the stability of a learning rule. This
work is a step towards a more systematic taxonomy of stability notions in
learning theory, which can promote clarity and an improved understanding of an
array of stability concepts that have emerged in recent years.
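Of the notions listed in the abstract, replicability is perhaps the easiest to illustrate concretely: a learning rule is replicable if, given shared internal randomness, it returns the identical output on two independent samples with high probability. Below is a minimal Python sketch of the standard randomized-rounding device for a mean estimator; the grid construction and all names are our own illustration, not the paper's formal definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def replicable_mean(sample, shared_seed, grid_width=0.1):
    """Round the empirical mean onto a grid whose offset is derived
    from the shared seed. Two samples from the same distribution then
    usually fall in the same cell, so the outputs coincide exactly."""
    offset = np.random.default_rng(shared_seed).uniform(0, grid_width)
    mean = sample.mean()
    return np.floor((mean - offset) / grid_width) * grid_width + offset

# Two independent samples, one shared random seed.
s1 = rng.normal(loc=0.5, scale=1.0, size=10_000)
s2 = rng.normal(loc=0.5, scale=1.0, size=10_000)
print(replicable_mean(s1, shared_seed=42))
print(replicable_mean(s2, shared_seed=42))  # identical with high probability
```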
Related papers
- On the Selection Stability of Stability Selection and Its Applications [2.263635133348731]
This paper seeks to broaden the use of an established stability estimator to evaluate the overall stability of the stability selection framework.
We suggest that the stability estimator offers two advantages: it can serve as a reference to reflect the robustness of the outcomes obtained and help identify an optimal regularization value to improve stability.
arXiv Detail & Related papers (2024-11-14T00:02:54Z)
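The abstract above does not name the estimator, but one widely used selection-stability measure of this kind is that of Nogueira et al. (2018). Here is a self-contained sketch of that measure applied to an invented lasso-based selection task (all data and hyperparameters are ours):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def selection_stability(Z):
    """Nogueira-style stability of a binary selection matrix Z
    (runs x features): 1 means the same subset is always chosen."""
    B, d = Z.shape
    col_var = Z.var(axis=0, ddof=1)       # per-feature selection variance
    k_bar = Z.sum(axis=1).mean()          # average number of selected features
    return 1 - col_var.mean() / ((k_bar / d) * (1 - k_bar / d))

# Toy problem: 3 informative features out of 20.
X = rng.normal(size=(200, 20))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=200)

Z = []
for _ in range(50):                       # subsampling runs
    idx = rng.choice(200, size=100, replace=False)
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    Z.append(np.abs(coef) > 1e-8)
print(selection_stability(np.array(Z)))   # scan alpha to pick a stable value
```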
- Stable Update of Regression Trees [0.0]
We focus on the stability of an inherently explainable machine learning method, namely regression trees.
We propose a regularization method, where data points are weighted based on the uncertainty in the initial model.
Results show that the proposed update method improves stability while achieving similar or better predictive performance.
arXiv Detail & Related papers (2024-02-21T09:41:56Z)
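The paper's exact weighting scheme is not spelled out in the summary above; the sketch below shows one plausible instantiation, where points on which the initial tree is uncertain (proxied here by its absolute residual) are down-weighted in the refit. The data, tree depth, and weighting rule are our assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Initial model on the original batch.
X0 = rng.uniform(-3, 3, size=(300, 1))
y0 = np.sin(X0[:, 0]) + 0.2 * rng.normal(size=300)
tree0 = DecisionTreeRegressor(max_depth=4).fit(X0, y0)

# A new batch arrives; refit on the pooled data.
X1 = rng.uniform(-3, 3, size=(100, 1))
y1 = np.sin(X1[:, 0]) + 0.2 * rng.normal(size=100)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

# Down-weight points where the initial model is uncertain, so the
# update stays close to the old tree where it was already confident.
resid = np.abs(y - tree0.predict(X))
weights = 1.0 / (1.0 + resid / resid.std())
tree1 = DecisionTreeRegressor(max_depth=4).fit(X, y, sample_weight=weights)

# Stability: average prediction movement on a fixed grid.
grid = np.linspace(-3, 3, 200).reshape(-1, 1)
print(np.abs(tree1.predict(grid) - tree0.predict(grid)).mean())
```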
- Evaluating and Improving Continual Learning in Spoken Language Understanding [58.723320551761525]
We propose an evaluation methodology that provides a unified evaluation on stability, plasticity, and generalizability in continual learning.
By employing the proposed metric, we demonstrate how introducing various knowledge distillation techniques can improve different aspects of these three properties of the SLU model.
arXiv Detail & Related papers (2024-02-16T03:30:27Z)
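As a rough illustration of what such a unified evaluation measures, the sketch below computes common stability and plasticity summaries from a task-accuracy matrix (generalizability would additionally require held-out distributions). The definitions here are standard continual-learning conventions, not necessarily the paper's metric.

```python
import numpy as np

def continual_metrics(acc):
    """acc[t, k]: accuracy on task k after training on task t (t >= k).
    Stability penalizes forgetting of old tasks; plasticity rewards
    accuracy on each task right after it is learned."""
    final, initial = acc[-1], np.diag(acc)
    stability = 1.0 - np.mean(initial[:-1] - final[:-1])
    plasticity = initial.mean()
    return stability, plasticity

# Toy accuracy matrix for three sequential tasks.
acc = np.array([[0.90, 0.00, 0.00],
                [0.80, 0.88, 0.00],
                [0.75, 0.82, 0.87]])
print(continual_metrics(acc))   # (0.895, 0.883...)
```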
- Stochastic Subgradient Methods with Guaranteed Global Stability in Nonsmooth Nonconvex Optimization [3.0586855806896045]
We first investigate a general framework for subgradient methods in which the corresponding differential inclusion admits a coercive Lyapunov function.
We then develop an improved analysis that applies the proposed framework to establish the global stability of a wide range of subgradient methods whose Lyapunov functions are possibly non-coercive.
arXiv Detail & Related papers (2023-07-19T15:26:18Z)
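For intuition, here is a stochastic subgradient method on a nonsmooth problem (L1 regression) where the objective itself serves as a coercive Lyapunov candidate; the problem instance and step sizes are our own toy choices, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def subgrad(w, x, y):
    """A subgradient of the nonsmooth loss |w.x - y| with respect to w
    (sign(0) = 0 is a valid choice at the kink)."""
    return np.sign(w @ x - y) * x

# L1 (robust) regression with heavy-tailed noise.
d, n = 5, 1000
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
Y = X @ w_true + 0.1 * rng.standard_t(df=2, size=n)

# The full objective f(w) = mean |Xw - Y| has bounded sublevel sets
# here (X has full column rank), i.e. it is a coercive Lyapunov candidate.
w = np.zeros(d)
for t in range(20_000):
    i = rng.integers(n)
    w -= (1.0 / np.sqrt(t + 1)) * subgrad(w, X[i], Y[i])
print(np.abs(X @ w - Y).mean(), np.abs(X @ w_true - Y).mean())
```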
- Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms [61.59448949684493]
We provide a stability and generalization analysis of stochastic compositional gradient descent algorithms for models built from training examples.
We establish the uniform stability results for two popular compositional gradient descent algorithms, namely SCGD and SCSC.
We derive dimension-independent excess risk bounds for SCGD and SCSC by trading off their stability results and optimization errors.
arXiv Detail & Related papers (2023-07-07T02:40:09Z)
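SCGD handles objectives of the form f(E[g_v(w)]) by tracking the inner expectation with a slowly updated auxiliary average. Below is a runnable toy sketch with a quadratic outer function, random linear inner maps, and two-timescale step sizes of the classical form; all constants are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Compositional objective F(w) = f(E[g_v(w)]) with
#   g_v(w) = A_v @ w + b_v (random inner map) and f(u) = 0.5 * ||u||^2.
d = 4
A_mean, b_mean = rng.normal(size=(d, d)), rng.normal(size=d)

def sample_inner(w):
    A = A_mean + 0.1 * rng.normal(size=(d, d))
    b = b_mean + 0.1 * rng.normal(size=d)
    return A @ w + b, A                 # value g_v(w) and Jacobian A_v

w, u = np.zeros(d), np.zeros(d)
for t in range(1, 50_001):
    alpha, beta = 0.1 / t**0.75, 1.0 / t**0.5   # two-timescale steps
    g_val, G = sample_inner(w)
    u = (1 - beta) * u + beta * g_val   # slow estimate of E[g_v(w)]
    w -= alpha * G.T @ u                # chain rule with grad f(u) = u
print(np.linalg.norm(A_mean @ w + b_mean))      # should be small
```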
- Minimax Optimal Estimation of Stability Under Distribution Shift [8.893526921869137]
We analyze the stability of a system under distribution shift.
The stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation.
Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost.
arXiv Detail & Related papers (2022-12-13T02:40:30Z)
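A naive plug-in version of such a stability quantity can be computed by searching over exponentially tilted reweightings of the observed losses, which realize the smallest KL divergence for a given mean-loss level; the paper's estimator and its minimax analysis are considerably more refined. The threshold and data below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def stability_under_shift(losses, threshold, t_grid=np.linspace(0.01, 5, 500)):
    """Smallest KL divergence a shift needs before the mean loss
    exceeds `threshold`, searched over tilts dQ_t/dP ~ exp(t * loss)."""
    best, M = np.inf, losses.max()
    for t in t_grid:
        w = np.exp(t * (losses - M))            # overflow-safe tilt weights
        tilted_mean = (w / w.sum()) @ losses
        if tilted_mean >= threshold:
            log_Z = np.log(np.mean(w)) + t * M  # normalizer of the tilt
            best = min(best, t * tilted_mean - log_Z)
    return best      # np.inf if no tilt on the grid reaches the threshold

losses = rng.exponential(scale=1.0, size=5_000)     # baseline mean loss ~ 1.0
print(stability_under_shift(losses, threshold=2.0)) # KL cost of doubling the loss
```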
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using a feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
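KCRL itself targets nonlinear systems with Krasovskii-type constraints and finite-sample confidence bounds; the sketch below only conveys the identify-then-certify pattern on a linear toy system, with a quadratic Lyapunov certificate obtained from a discrete Lyapunov solve. The system, policy gain, and noise level are our inventions.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)

# Unknown true system x+ = A x + B u (linear here for simplicity).
A = np.array([[1.1, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])

# Identify the dynamics by least squares from random one-step transitions.
X = rng.normal(size=(500, 2))
U = rng.normal(size=(500, 1))
Xn = X @ A.T + U @ B.T + 0.01 * rng.normal(size=(500, 2))
Theta, *_ = np.linalg.lstsq(np.hstack([X, U]), Xn, rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T

# Candidate policy u = -K x; certify the learned closed loop with a
# quadratic Lyapunov function V(x) = x' P x (Acl' P Acl - P = -I).
K = np.array([[1.8, 1.0]])                  # places the true poles at 0.5
Acl = A_hat - B_hat @ K
P = solve_discrete_lyapunov(Acl.T, np.eye(2))
print(np.all(np.linalg.eigvalsh(P) > 0),    # P > 0: stability certified
      np.max(np.abs(np.linalg.eigvals(Acl))))
```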
- Continual evaluation for lifelong learning: Identifying the stability gap [35.99653845083381]
We show that a set of common state-of-the-art methods still suffers from substantial forgetting upon starting to learn new tasks.
We refer to this intriguing but potentially problematic phenomenon as the stability gap.
We establish a framework for continual evaluation that uses per-iteration evaluation and we define a new set of metrics to quantify worst-case performance.
arXiv Detail & Related papers (2022-05-26T15:56:08Z)
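Per-iteration evaluation is easy to reproduce in a toy setting: train a perceptron on task A, then evaluate task-A accuracy at every step while training on task B. The worst-case (minimum) accuracy along the curve is exactly the kind of quantity that end-of-task evaluation misses. The tasks and learner below are our own toy construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == (y > 0))

# Two linearly separable tasks with different ground-truth directions.
d = 20
wA, wB = rng.normal(size=d), rng.normal(size=d)
XA = rng.normal(size=(2000, d)); yA = np.sign(XA @ wA)
XB = rng.normal(size=(2000, d)); yB = np.sign(XB @ wB)

w = np.zeros(d)
for x, t in zip(XA, yA):                 # train task A
    if (x @ w) * t <= 0:
        w += 0.1 * t * x

curve = []
for x, t in zip(XB, yB):                 # train task B, eval A every step
    curve.append(accuracy(w, XA, yA))
    if (x @ w) * t <= 0:
        w += 0.1 * t * x

# The minimum exposes transient drops that the final number can hide.
print("final:", curve[-1], "worst-case:", min(curve))
```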
- Stability and Identification of Random Asynchronous Linear Time-Invariant Systems [81.02274958043883]
We show the additional benefits of randomization and asynchrony on the stability of linear dynamical systems.
For unknown randomized LTI systems, we propose a systematic identification method to recover the underlying dynamics.
arXiv Detail & Related papers (2020-12-08T02:00:04Z)
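The identification side has a simple special case: if each coordinate refreshes independently with known probability p, the mean dynamics are p*A + (1-p)*I, so an ordinary least-squares fit of transitions can be inverted to recover A. Here is a sketch under these simplifying assumptions (the paper's randomization model and analysis are more general):

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomized asynchronous LTI: each coordinate of x+ = A x refreshes
# with probability p per step and otherwise holds its previous value.
A = np.array([[0.5, 0.3],
              [-0.2, 0.6]])
p, n, T = 0.5, 2, 20_000

X, Xn = [], []
x = rng.normal(size=n)
for _ in range(T):
    mask = rng.random(n) < p
    xn = np.where(mask, A @ x + 0.05 * rng.normal(size=n), x)
    X.append(x); Xn.append(xn)
    x = xn

# E[x+ | x] = (p*A + (1-p)*I) x, so least squares recovers the mean
# map; invert it to estimate the underlying A.
X, Xn = np.array(X), np.array(Xn)
Abar, *_ = np.linalg.lstsq(X, Xn, rcond=None)
A_hat = (Abar.T - (1 - p) * np.eye(n)) / p
print(np.max(np.abs(A_hat - A)))        # should be small
```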
- Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
arXiv Detail & Related papers (2020-07-14T21:10:16Z)
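Once the dynamics are represented as a linear Gaussian channel, empowerment reduces to the channel capacity, which has a classical water-filling closed form over the singular values of the control-to-state map. A sketch with a hypothetical map B (the paper's contribution is learning that representation, not this formula):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_empowerment(G, power=1.0, noise_var=0.1):
    """Capacity of the channel x+ = G u + noise under an input power
    budget, via water-filling over the singular values of G (in nats)."""
    gains = np.linalg.svd(G, compute_uv=False) ** 2 / noise_var
    lo, hi = 0.0, power + 1.0 / gains.min()
    for _ in range(100):                       # bisect on the water level
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / gains, 0).sum() > power:
            hi = mu
        else:
            lo = mu
    alloc = np.maximum(mu - 1.0 / gains, 0)    # per-direction input power
    return 0.5 * np.sum(np.log1p(alloc * gains))

B = rng.normal(size=(4, 2))                    # control-to-state map
print(gaussian_empowerment(B))
```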
- Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent [55.85456985750134]
We introduce a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates.
This yields generalization bounds that depend on the behavior of the best model, and leads to the first known fast bounds in the low-noise setting.
To the best of our knowledge, this gives the first known stability and generalization bounds for SGD with even non-differentiable loss functions.
arXiv Detail & Related papers (2020-06-15T06:30:19Z)
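On-average model stability can be estimated directly by Monte Carlo: run SGD with a fixed sampling seed on a dataset and on copies with one example resampled, and average the parameter displacement. The least-squares setup below is our own toy instance of that definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_path(X, y, steps=2000, lr=0.01, seed=1):
    """SGD on least squares with a fixed sampling seed, so runs on
    neighboring datasets visit the same example indices."""
    idx = np.random.default_rng(seed).integers(len(y), size=steps)
    w = np.zeros(X.shape[1])
    for i in idx:
        w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

n, d = 500, 10
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
w0 = sgd_path(X, y)

# Replace one example at a time and average the parameter displacement.
dists = []
for i in range(0, n, 25):                     # subsample indices for speed
    Xp, yp = X.copy(), y.copy()
    Xp[i] = rng.normal(size=d)
    yp[i] = Xp[i] @ w_star + 0.1 * rng.normal()
    dists.append(np.linalg.norm(sgd_path(Xp, yp) - w0))
print(np.mean(dists))                         # on-average model stability
```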
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.