On the Conditions for Domain Stability for Machine Learning: a Mathematical Approach
- URL: http://arxiv.org/abs/2412.00464v1
- Date: Sat, 30 Nov 2024 12:57:07 GMT
- Title: On the Conditions for Domain Stability for Machine Learning: a Mathematical Approach
- Authors: Gabriel Pedroza
- Abstract summary: This work proposes a mathematical approach that (re)defines a property of Machine Learning models named stability.
The characteristics in scope depend upon the domain of the function, which allows us to adopt topological and metric space theory as a basis.
- Abstract: This work proposes a mathematical approach that (re)defines a property of Machine Learning models named stability and determines sufficient conditions to validate it. Machine Learning models are represented as functions, and the characteristics in scope depend upon the domain of the function, which allows us to adopt topological and metric space theory as a basis. Finally, this work provides some equivalences useful for proving and testing stability in Machine Learning models. The results suggest that whenever stability is aligned with the notion of function smoothness, the stability of Machine Learning models depends primarily upon certain topological and measurable properties of the classification sets within the ML model domain.
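The abstract aligns stability with function smoothness. As a minimal illustrative sketch only (the function `empirical_lipschitz`, its radius parameter `eps`, and the sampling scheme are assumptions for illustration, not the paper's own construction), one common empirical proxy for local smoothness is the largest observed ratio of output change to input change under small random perturbations:

```python
import numpy as np

def empirical_lipschitz(f, x, eps=1e-2, n_samples=200, seed=0):
    """Estimate a local Lipschitz-style stability ratio for f around x.

    Samples random perturbations of norm eps and returns the largest
    observed ratio ||f(x + d) - f(x)|| / ||d||.  Small values suggest the
    model behaves smoothly (hence 'stably') on a neighborhood of x.
    """
    rng = np.random.default_rng(seed)
    fx = np.asarray(f(x), dtype=float)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d *= eps / np.linalg.norm(d)  # rescale perturbation to norm eps
        fd = np.asarray(f(x + d), dtype=float)
        worst = max(worst, np.linalg.norm(fd - fx) / eps)
    return worst

# Sanity check: a linear map x -> w.x with ||w|| = 1 is 1-Lipschitz,
# so the empirical estimate should never exceed 1.
w = np.array([0.6, 0.8])           # unit-norm weights
f = lambda x: np.array([w @ x])    # toy "model"
x0 = np.array([1.0, -1.0])
print(empirical_lipschitz(f, x0) <= 1.0 + 1e-9)
```

Such an estimate only probes a neighborhood of one point; the paper's contribution is precisely to replace this kind of pointwise sampling with sufficient conditions stated over the model's domain.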
Related papers
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC)
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Continuous Management of Machine Learning-Based Application Behavior [3.316045828362788]
Non-functional properties of Machine Learning models must be monitored, verified, and maintained.
We propose a multi-model approach that aims to guarantee a stable non-functional behavior of ML-based applications.
We experimentally evaluate our solution in a real-world scenario focusing on the non-functional property of fairness.
arXiv Detail & Related papers (2023-11-21T15:47:06Z)
- Guaranteed Stable Quadratic Models and their applications in SINDy and Operator Inference [9.599029891108229]
We focus on an operator inference methodology that builds dynamical models.
For inference, we aim to learn the operators of a model by setting up an appropriate optimization problem.
We present several numerical examples illustrating the preservation of stability and compare our approach with the existing state-of-the-art approach to operator inference.
arXiv Detail & Related papers (2023-08-26T09:00:31Z)
- Stability Guarantees for Feature Attributions with Multiplicative Smoothing [11.675168649032875]
We analyze stability as a property for reliable feature attribution methods.
We develop a smoothing method called Multiplicative Smoothing (MuS) to obtain such stability.
We evaluate MuS on vision and language models with various feature attribution methods, such as LIME and SHAP, and demonstrate that MuS endows feature attributions with non-trivial stability guarantees.
arXiv Detail & Related papers (2023-07-12T04:19:47Z)
- On the Stability-Plasticity Dilemma of Class-Incremental Learning [50.863180812727244]
A primary goal of class-incremental learning is to strike a balance between stability and plasticity.
This paper aims to shed light on how effectively recent class-incremental learning algorithms address the stability-plasticity trade-off.
arXiv Detail & Related papers (2023-04-04T09:34:14Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Ensembling improves stability and power of feature selection for deep learning models [11.973624420202388]
In this paper, we show that inherent randomness in the design and training of deep learning models makes commonly used feature importance scores unstable.
We explore the ensembling of feature importance scores of models across different epochs and find that this simple approach can substantially address this issue.
We present a framework to combine the feature importance of trained models: instead of selecting features from one best model, we ensemble feature importance scores from numerous good models.
arXiv Detail & Related papers (2022-10-02T19:07:53Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Non-parametric Models for Non-negative Functions [48.7576911714538]
We provide the first model for non-negative functions that retains the good properties of linear models.
We prove that it admits a representer theorem and provide an efficient dual formulation for convex problems.
arXiv Detail & Related papers (2020-07-08T07:17:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.