Determining the significance and relative importance of parameters of a
simulated quenching algorithm using statistical tools
- URL: http://arxiv.org/abs/2402.05791v1
- Date: Thu, 8 Feb 2024 16:34:00 GMT
- Title: Determining the significance and relative importance of parameters of a
simulated quenching algorithm using statistical tools
- Authors: Pedro A. Castillo, Maribel García Arenas, Nuria Rico, Antonio Miguel
Mora, Pablo García-Sánchez, Juan Luis Jiménez Laredo, Juan Julián
Merelo Guervós
- Abstract summary: In this paper the ANOVA (ANalysis Of VAriance) method is used to carry out an exhaustive analysis of a simulated annealing based method.
The significance and relative importance of the parameters with respect to the obtained results, as well as suitable values for each of them, were obtained using ANOVA and the post-hoc Tukey HSD test.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: When search methods are being designed it is very important to know which
parameters have the greatest influence on the behaviour and performance of the
algorithm. To this end, algorithm parameters are commonly calibrated by means
of either theoretic analysis or intensive experimentation. When undertaking a
detailed statistical analysis of the influence of each parameter, the designer
should pay attention mostly to the parameters that are statistically
significant. In this paper the ANOVA (ANalysis Of VAriance) method is used
to carry out an exhaustive analysis of a simulated annealing based method and
the different parameters it requires. Following this approach, the significance and
relative importance of the parameters with respect to the obtained results, as well
as suitable values for each of them, were obtained using ANOVA and the post-hoc
Tukey HSD test, on four well-known function optimization problems and the
likelihood function that is used to estimate the parameters involved in the
lognormal diffusion process. Through this statistical study we have verified
the adequacy of the parameter values available in the literature using parametric
hypothesis tests.
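The workflow the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's code: the annealing routine, the toy objective f(x) = x², and the three cooling-rate levels are our own assumptions; only the ANOVA-plus-Tukey-HSD analysis mirrors the methodology the paper applies.

```python
# Hypothetical sketch: does the cooling rate of a toy simulated-annealing
# (quenching) run significantly affect the final objective value?
import math
import random

from scipy.stats import f_oneway, tukey_hsd  # tukey_hsd needs SciPy >= 1.8

def anneal(cooling_rate, steps=300, seed=0):
    """Minimise f(x) = x^2 with a basic geometric cooling schedule."""
    rng = random.Random(seed)
    x, temp = 10.0, 10.0
    for _ in range(steps):
        candidate = x + rng.uniform(-1.0, 1.0)
        delta = candidate ** 2 - x ** 2
        # Metropolis acceptance rule: always take improvements,
        # take worsening moves with probability exp(-delta / T)
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            x = candidate
        temp *= cooling_rate  # geometric quenching of the temperature
    return x ** 2  # final objective value of this run

# 30 independent runs per parameter level, as in a factorial design
levels = {rate: [anneal(rate, seed=s) for s in range(30)]
          for rate in (0.80, 0.95, 0.99)}

f_stat, p_value = f_oneway(*levels.values())  # one-way ANOVA across levels
posthoc = tukey_hsd(*levels.values())         # pairwise post-hoc comparison
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value would indicate that the cooling rate is a statistically significant parameter, and the Tukey HSD table would then show which pairs of levels differ.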
Related papers
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of the studied settings.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Parameter Estimation in Quantum Metrology Technique for Time Series Prediction [0.0]
The paper investigates the techniques of quantum computation in metrological predictions.
It focuses on enhancing prediction potential through variational parameter estimation.
The impacts of various parameter distributions and learning rates on predictive accuracy are investigated.
arXiv Detail & Related papers (2024-06-12T05:55:45Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of applicability to problems similar to real ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- Exploring the Optimized Value of Each Hyperparameter in Various Gradient Descent Algorithms [0.0]
Gradient descent algorithms have been applied to the parameter optimization of several deep learning models, achieving higher accuracies or lower errors.
This study proposes an analytical framework for analyzing the mean error of each objective function based on various gradient descent algorithms.
The experimental results show that higher efficiency convergences and lower errors can be obtained by the proposed method.
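To make the idea of sweeping a hyperparameter concrete, here is a toy sketch (not this paper's framework): plain gradient descent on the one-dimensional objective f(w) = (w − 3)², compared across a small learning-rate grid. The objective and the grid values are illustrative assumptions.

```python
def gd_final_error(lr, steps=100):
    """Run gradient descent on f(w) = (w - 3)^2; return |w - 3| at the end."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)  # gradient of (w - 3)^2 is 2(w - 3)
    return abs(w - 3.0)

# Sweep a small grid of learning rates and pick the best final error;
# lr = 1.1 is deliberately too large and diverges for this objective.
errors = {lr: gd_final_error(lr) for lr in (0.01, 0.1, 0.5, 1.1)}
best_lr = min(errors, key=errors.get)
print(best_lr, errors[best_lr])
```

For this quadratic, lr = 0.5 cancels the error in a single step, while lr = 1.1 overshoots more on every iteration; such a sweep is the simplest version of the hyperparameter exploration the summary above refers to.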
arXiv Detail & Related papers (2022-12-23T12:04:33Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded by our theory, how to choose the tunable parameters still remains an open problem.
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- PowerGraph: Using neural networks and principal components to multivariate statistical power trade-offs [0.0]
A priori statistical power estimation for planned studies with multiple model parameters is inherently a multivariate problem.
Explicit solutions in such cases are either impractical or impossible to obtain, leaving researchers with the prevailing method of simulating power.
This paper explores the efficient estimation and graphing of statistical power for a study over varying model parameter combinations.
arXiv Detail & Related papers (2021-12-29T19:06:29Z)
- Gaussian Process Regression for Absorption Spectra Analysis of Molecular Dimers [68.8204255655161]
We discuss an approach based on a machine learning technique, where the parameters for the numerical calculations are chosen from Gaussian Process Regression (GPR).
This approach does not only quickly converge to an optimal parameter set, but in addition provides information about the complete parameter space.
We find that indeed the GPR gives reliable results which are in agreement with direct calculations of these parameters using quantum chemical methods.
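As a rough illustration of the GPR idea in this summary (not the authors' implementation), here is a minimal zero-mean Gaussian process regression with an RBF kernel in plain NumPy. The one-dimensional "parameter space" and the sin(x) stand-in for the expensive calculation are assumptions for the sketch.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6, length_scale=1.0):
    """Posterior mean and variance of a zero-mean GP at the test points."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    alpha = np.linalg.solve(K, y_train)          # K^-1 y
    mean = K_s.T @ alpha                         # predictive mean
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)  # predictive covariance
    return mean, np.diag(cov)

# A few "evaluated" parameter points; sin(x) stands in for the expensive model
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(x_train)
x_test = np.linspace(0.0, 3.0, 7)
mean, var = gp_posterior(x_train, y_train, x_test)
```

The posterior variance is what gives "information about the complete parameter space": it is near zero at evaluated points and grows between them, which is what an acquisition rule would exploit to pick the next point.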
arXiv Detail & Related papers (2021-12-14T17:46:45Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Hyperparameter Selection for Subsampling Bootstraps [0.0]
A subsampling method like BLB serves as a powerful tool for assessing the quality of estimators for massive data.
The performance of the subsampling methods is highly influenced by the selection of tuning parameters.
We develop a hyperparameter selection methodology, which can be used to select tuning parameters for subsampling methods.
Both simulation studies and real data analysis demonstrate the advantages of our method.
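A Bag of Little Bootstraps (BLB) style procedure of the kind this summary refers to can be sketched as below; the subset size exponent, number of subsets, and number of resamples are exactly the sort of tuning parameters in question. The concrete values and the standard-error-of-the-mean target are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def blb_std_error(data, n_subsets=10, subset_frac=0.6, n_boot=50, seed=0):
    """BLB-style estimate of the standard error of the sample mean.

    n_subsets, subset_frac (subset size b = n**subset_frac), and n_boot
    are the tuning parameters whose selection the method above addresses.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** subset_frac)  # each "little bootstrap" uses only b points
    errors = []
    for _ in range(n_subsets):
        subset = rng.choice(data, size=b, replace=False)
        # Draw resample counts of total size n from each small subset,
        # so every weighted mean mimics a full-size bootstrap sample.
        means = [np.average(subset, weights=rng.multinomial(n, np.ones(b) / b))
                 for _ in range(n_boot)]
        errors.append(np.std(means))
    return float(np.mean(errors))  # average the per-subset error estimates

data = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=2000)
se = blb_std_error(data)
```

The appeal is that each inner loop touches only b ≪ n points, yet the multinomial weights make the resamples behave like full-size bootstrap samples; a poor choice of subset_frac or n_boot visibly degrades the estimate, which is what hyperparameter selection for subsampling is about.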
arXiv Detail & Related papers (2020-06-02T17:10:45Z)
- Optimal statistical inference in the presence of systematic uncertainties using neural network optimization based on binned Poisson likelihoods with nuisance parameters [0.0]
This work presents a novel strategy to construct the dimensionality reduction with neural networks for feature engineering.
We discuss how this approach results in an estimate of the parameters of interest that is close to optimal.
arXiv Detail & Related papers (2020-03-16T13:27:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.