Applying Evolutionary Metaheuristics for Parameter Estimation of
Individual-Based Models
- URL: http://arxiv.org/abs/2005.12841v1
- Date: Sun, 24 May 2020 07:48:27 GMT
- Title: Applying Evolutionary Metaheuristics for Parameter Estimation of
Individual-Based Models
- Authors: Antonio Prestes García and Alfonso Rodríguez-Patón
- Abstract summary: We introduce EvoPER, an R package for simplifying parameter estimation using evolutionary methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Individual-based models are complex and usually have a large number
of input parameters, which must be tuned so that the simulated output reproduces
the observed population data or experimental results as accurately as possible.
Thus, one of the weakest points of this modelling approach lies in the fact that
the modeler rarely has enough information about the correct values, or even the
acceptable ranges, of the input parameters. Consequently, many parameter
combinations must be tried in order to find an acceptable set of input factors
minimizing the deviation between the simulated output and the reference dataset.
In practice, it is usually computationally unfeasible to traverse the complete
search space, trying every possible combination, in order to find the best set of
parameters. This is precisely an instance of a combinatorial problem which is
well suited to metaheuristics and evolutionary computation techniques. In this
work, we introduce EvoPER, an R package for simplifying the parameter estimation
using evolutionary computation methods.
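The workflow the package automates can be illustrated with a minimal, self-contained R sketch. This is not EvoPER's actual API; the toy logistic model, the parameter names r and K, and the simple (mu + lambda) evolution strategy are all illustrative assumptions:
```r
# Hedged sketch: calibrate two parameters (r, K) of a toy logistic growth
# model by minimizing the RMSE between simulated and reference trajectories
# with a (mu + lambda) evolution strategy, in base R.

simulate <- function(r, K, n0 = 10, steps = 50) {
  n <- numeric(steps)
  n[1] <- n0
  for (t in 2:steps) n[t] <- n[t - 1] + r * n[t - 1] * (1 - n[t - 1] / K)
  n
}

set.seed(42)
reference <- simulate(r = 0.3, K = 1000) + rnorm(50, sd = 5)  # stand-in for observed data

rmse <- function(p) sqrt(mean((simulate(p[1], p[2]) - reference)^2))

lower <- c(0, 100); upper <- c(1, 5000)      # assumed acceptable parameter ranges
mu <- 10; lambda <- 40
sigma <- (upper - lower) / 10                # mutation step per dimension
pop <- replicate(mu, runif(2, lower, upper), simplify = FALSE)

for (gen in 1:100) {
  offspring <- lapply(sample(pop, lambda, replace = TRUE), function(p)
    pmin(pmax(p + rnorm(2, sd = sigma), lower), upper))   # clipped Gaussian mutation
  joint <- c(pop, offspring)
  pop <- joint[order(sapply(joint, rmse))][1:mu]          # truncation selection
}

best <- pop[[1]]
cat("estimated r =", best[1], "K =", best[2], "rmse =", rmse(best), "\n")
```
In a real calibration, simulate() would be replaced by a run of the individual-based model and rmse() by the chosen deviation criterion.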
Related papers
- Sensitivity analysis using the Metamodel of Optimal Prognosis [0.0]
In real-world applications within the virtual prototyping process, it is not always possible to reduce the complexity of the physical models.
We present an automatic approach for selecting the meta-model best suited to the problem at hand.
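As a rough illustration of the approach summarized above: fit several candidate surrogates and keep the one with the best cross-validated R^2, loosely in the spirit of a Coefficient of Prognosis. The candidate set and toy data are illustrative assumptions:
```r
# Hedged sketch of automatic metamodel selection via cross-validated R^2.
set.seed(1)
d <- data.frame(x = runif(200, -3, 3))
d$y <- sin(d$x) + 0.1 * rnorm(200)          # stand-in for an expensive physical model

candidates <- list(linear = y ~ x, quadratic = y ~ poly(x, 2), cubic = y ~ poly(x, 3))

cv_r2 <- function(formula, d, k = 5) {      # k-fold cross-validated R^2
  folds <- sample(rep(1:k, length.out = nrow(d)))
  pred <- numeric(nrow(d))
  for (i in 1:k) {
    fit <- lm(formula, data = d[folds != i, ])
    pred[folds == i] <- predict(fit, newdata = d[folds == i, ])
  }
  1 - sum((d$y - pred)^2) / sum((d$y - mean(d$y))^2)
}

scores <- sapply(candidates, cv_r2, d = d)
cat("selected metamodel:", names(which.max(scores)), "\n")
```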
arXiv Detail & Related papers (2024-08-07T07:09:06Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers and four parameterizations.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
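A heavily simplified sketch of the tall-data idea, using plain rejection ABC instead of the paper's diffusion-based sampler; the Gaussian toy model, uniform prior, and tolerance are assumptions:
```r
# Keep prior draws whose simulations are close to ALL observed data sets.
set.seed(7)
obs <- replicate(5, rnorm(30, mean = 2))    # five observed data sets (columns)
obs_means <- colMeans(obs)

prior <- runif(20000, -5, 5)
eps <- 0.2

accepted <- sapply(prior, function(th) {
  sims <- replicate(5, mean(rnorm(30, mean = th)))  # one simulation per data set
  all(abs(sims - obs_means) < eps)                  # must match every observation
})

posterior <- prior[accepted]
cat("accepted:", length(posterior), " posterior mean:", mean(posterior), "\n")
```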
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Should We Learn Most Likely Functions or Parameters? [51.133793272222874]
We investigate the benefits and drawbacks of directly estimating the most likely function implied by the model and the data.
We find that function-space MAP estimation can lead to flatter minima, better generalization, and improved robustness to overfitting.
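A toy change-of-variables example (ours, not the paper's) showing why the most likely parameter need not give the most likely function:
```r
# With w ~ N(0, 1) and a model that outputs g = exp(w), the parameter-space
# MAP is w = 0, so exp(w_MAP) = 1; but g itself is lognormal, whose mode is
# exp(-1) ~ 0.37. The two "most likely" answers disagree via the Jacobian.
g_grid <- seq(0.01, 3, by = 0.01)
g_map <- g_grid[which.max(dlnorm(g_grid))]   # function-space mode, numerically
stopifnot(abs(g_map - exp(-1)) < 0.02)       # matches the analytic mode exp(-1)
cat("exp(parameter MAP) =", exp(0), "vs function-space MAP =", g_map, "\n")
```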
arXiv Detail & Related papers (2023-11-27T16:39:55Z)
- Adaptive Sparse Gaussian Process [0.0]
We propose the first adaptive sparse Gaussian Process (GP) able to address all these issues.
We first reformulate a variational sparse GP algorithm to make it adaptive through a forgetting factor.
We then propose updating a single inducing point of the sparse GP model together with the remaining model parameters every time a new sample arrives.
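A loose sketch of the forgetting-factor idea only, using recursive least squares on fixed kernel features rather than the paper's variational sparse GP; inducing-point updates are omitted and all names are illustrative:
```r
# Online regression on RBF features with exponential forgetting.
rbf <- function(x, z, ell = 0.2) exp(-outer(x, z, function(a, b) (a - b)^2) / (2 * ell^2))

z <- seq(0, 1, length.out = 10)   # fixed "inducing" inputs
m <- length(z)
A <- diag(1e-3, m)                # decayed feature Gram matrix
b <- rep(0, m)
lambda <- 0.98                    # forgetting factor: old samples fade out

set.seed(2)
for (t in 1:500) {
  x <- runif(1)
  y <- sin(2 * pi * x) + ifelse(t > 250, 2, 0) + 0.1 * rnorm(1)  # drift at t = 250
  phi <- as.vector(rbf(x, z))
  A <- lambda * A + phi %*% t(phi)            # forget, then accumulate
  b <- lambda * b + phi * y
}

w <- solve(A + diag(1e-6, m), b)
cat("prediction at x = 0.5:", rbf(0.5, z) %*% w, "\n")  # tracks the drifted target
```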
arXiv Detail & Related papers (2023-02-20T21:34:36Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
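A minimal sketch of the sparse fine-tuning view: gradient steps are applied only to a small fixed mask of weights while the rest stay frozen. Logistic regression stands in for a network; the mask, data, and step size are illustrative assumptions:
```r
set.seed(3)
d <- 50; n <- 500
X <- matrix(rnorm(n * d), n, d)
y <- rbinom(n, 1, plogis(X %*% rnorm(d, sd = 0.3)))  # "new task" labels
w_pre <- rnorm(d, sd = 0.1)                          # "pretrained" weights

mask <- rep(0, d); mask[sample(d, 5)] <- 1           # tune only 5 of 50 parameters

w <- w_pre
for (step in 1:200) {
  grad <- t(X) %*% (plogis(X %*% w) - y) / n         # logistic-loss gradient
  w <- w - 0.1 * (mask * as.vector(grad))            # masked update
}
cat("parameters changed:", sum(w != w_pre), "of", d, "\n")
```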
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are imposed through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
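For flavor, a compact EMVS-style EM for spike-and-slab linear regression; this illustrates the empirical-Bayes EM idea but is not the probe package's partitioned ECM algorithm, and all settings are assumptions:
```r
set.seed(9)
n <- 200; p <- 50
X <- matrix(rnorm(n * p), n, p)
beta_true <- c(rep(2, 5), rep(0, p - 5))
y <- X %*% beta_true + rnorm(n)

v0 <- 0.01; v1 <- 10; a <- 0.1        # spike/slab variances, prior inclusion
beta <- rep(0, p)

for (iter in 1:50) {
  # E-step: posterior probability that each coefficient is in the slab
  num  <- a * dnorm(beta, 0, sqrt(v1))
  prob <- num / (num + (1 - a) * dnorm(beta, 0, sqrt(v0)))
  # M-step: ridge solve with coordinate-specific penalties
  D <- diag(prob / v1 + (1 - prob) / v0)
  beta <- as.vector(solve(crossprod(X) + D, crossprod(X, y)))
}
cat("selected coefficients:", which(prob > 0.5), "\n")
```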
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
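A bare-bones sketch of iterative column-wise imputation: initialize missing cells with column means, then round-robin regress each incomplete column on the others and re-impute. HyperImpute additionally selects the model per column automatically; plain lm() and the toy data here are assumptions:
```r
set.seed(5)
n <- 100
a <- rnorm(n); b <- a + rnorm(n, sd = 0.5); c <- a + 0.5 * b + rnorm(n, sd = 0.1)
d <- data.frame(a, b, c); truth <- d
d$b[sample(n, 15)] <- NA; d$c[sample(n, 15)] <- NA

miss <- lapply(d, is.na)                     # remember where the holes are
for (col in c("b", "c")) d[[col]][miss[[col]]] <- mean(d[[col]], na.rm = TRUE)

for (iter in 1:10) {                         # round-robin re-imputation
  for (col in c("b", "c")) {
    fit <- lm(reformulate(setdiff(names(d), col), col), data = d[!miss[[col]], ])
    d[[col]][miss[[col]]] <- predict(fit, newdata = d[miss[[col]], ])
  }
}
err <- c(d$b[miss$b] - truth$b[miss$b], d$c[miss$c] - truth$c[miss$c])
cat("RMSE on imputed cells:", sqrt(mean(err^2)), "\n")
```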
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Sparse Bayesian Learning for Complex-Valued Rational Approximations [0.03392423750246091]
Surrogate models are used to alleviate the computational burden in engineering tasks.
These models show a strongly non-linear dependence on their input parameters.
We apply a sparse learning approach to the rational approximation.
arXiv Detail & Related papers (2022-06-06T12:06:13Z)
- Optimizing model-agnostic Random Subspace ensembles [5.680512932725364]
We present a model-agnostic ensemble approach for supervised learning.
The proposed approach alternates between learning an ensemble of models, using a parametric version of the Random Subspace approach, and optimizing the feature-sampling distribution from which the subspaces are drawn.
We show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets.
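A minimal parametric Random Subspace sketch: feature subsets are drawn from Bernoulli probabilities pi_j, base models are averaged, and a crude importance score weights features by how well their models fit. The paper instead optimizes the pi_j; keeping them fixed at 0.5 and the scoring rule here are simplifying assumptions:
```r
set.seed(11)
n <- 300; p <- 10
X <- matrix(rnorm(n * p), n, p); colnames(X) <- paste0("x", 1:p)
d <- data.frame(X, y = 2 * X[, 1] - 3 * X[, 2] + rnorm(n))

pi_j <- rep(0.5, p); B <- 100
subsets <- replicate(B, {
  s <- which(runif(p) < pi_j)                        # Bernoulli feature subset
  if (length(s) == 0) sample(p, 1) else s
}, simplify = FALSE)
models <- lapply(subsets, function(s) lm(reformulate(colnames(X)[s], "y"), data = d))

pred <- rowMeans(sapply(models, predict, newdata = d))       # ensemble average
rss  <- sapply(models, function(m) mean(residuals(m)^2))
wt   <- exp(-rss / min(rss)); wt <- wt / sum(wt)             # favor better models
imp  <- sapply(1:p, function(j) sum(wt[sapply(subsets, function(s) j %in% s)]))
cat("ensemble RMSE:", sqrt(mean((pred - d$y)^2)), "\n")
print(round(setNames(imp, colnames(X)), 2))                  # crude feature ranking
```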
arXiv Detail & Related papers (2021-09-07T13:58:23Z)
- Convex Latent Effect Logit Model via Sparse and Low-rank Decomposition [2.1915057426589746]
We propose a convex parametric formulation for learning a logistic regression model (logit) with latent heterogeneous effects on sub-populations.
Despite its popularity, the mixed logit approach for learning individual heterogeneity has several downsides.
arXiv Detail & Related papers (2021-08-22T22:23:39Z)