A Metaheuristic for Amortized Search in High-Dimensional Parameter
Spaces
- URL: http://arxiv.org/abs/2309.16465v1
- Date: Thu, 28 Sep 2023 14:25:14 GMT
- Title: A Metaheuristic for Amortized Search in High-Dimensional Parameter
Spaces
- Authors: Dominic Boutet and Sylvain Baillet (Montreal Neurological Institute,
McGill University, Montreal QC, Canada)
- Abstract summary: We propose a new metaheuristic that drives dimensionality reductions from feature-informed transformations.
DR-FFIT implements an efficient sampling strategy that facilitates a gradient-free parameter search in high-dimensional spaces.
Our test data show that DR-FFIT boosts the performance of random search and simulated annealing relative to well-established metaheuristics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter inference for dynamical models of (bio)physical systems remains a
challenging problem. Intractable gradients, high-dimensional spaces, and
non-linear model functions are typically problematic without large
computational budgets. A recent body of work in that area has focused on
Bayesian inference methods, which consider parameters under their statistical
distributions and therefore do not derive point estimates of optimal parameter
values. Here we propose a new metaheuristic that drives dimensionality
reductions from feature-informed transformations (DR-FFIT) to address these
bottlenecks. DR-FFIT implements an efficient sampling strategy that facilitates
a gradient-free parameter search in high-dimensional spaces. We use artificial
neural networks to obtain differentiable proxies for the model's features of
interest. The resulting gradients enable the estimation of a local active
subspace of the model within a defined sampling region. This approach enables
efficient dimensionality reductions of highly non-linear search spaces at a low
computational cost. Our test data show that DR-FFIT boosts the performance of
random search and simulated annealing relative to well-established metaheuristics,
and improves the goodness-of-fit of the model, all within contained run-time
costs.
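
As a rough illustration, here is a minimal, hedged sketch of the workflow the abstract describes: an ANN is fit as a differentiable proxy for the model's features, the proxy's gradients define a local active subspace via an SVD, and a gradient-free random search then runs in that subspace. The toy simulator, network sizes, and search loop are illustrative assumptions, not the authors' implementation.

import numpy as np
import torch
import torch.nn as nn

D, K = 20, 3                          # full parameter dim, active-subspace dim
rng = np.random.default_rng(0)

def simulator_features(theta):        # toy stand-in for the model's features
    return np.array([np.sin(theta[:5]).sum(), (theta ** 2).sum()])

# 1) Sample a local region of parameter space and record feature outputs.
theta0 = rng.normal(size=D)
thetas = theta0 + 0.1 * rng.normal(size=(256, D))
feats = np.stack([simulator_features(t) for t in thetas])

# 2) Fit a small ANN as a differentiable proxy for the features.
net = nn.Sequential(nn.Linear(D, 64), nn.Tanh(), nn.Linear(64, feats.shape[1]))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X = torch.tensor(thetas, dtype=torch.float32)
Y = torch.tensor(feats, dtype=torch.float32)
for _ in range(500):
    opt.zero_grad()
    ((net(X) - Y) ** 2).mean().backward()
    opt.step()

# 3) Estimate a local active subspace from the proxy's gradients:
#    SVD of the stacked feature Jacobians over the sampling region.
J = torch.autograd.functional.jacobian(lambda x: net(x).sum(0), X)
G = J.permute(1, 0, 2).reshape(-1, D).detach().numpy()
W = np.linalg.svd(G, full_matrices=False)[2][:K].T    # D x K subspace basis

# 4) Gradient-free search restricted to the active subspace.
target = simulator_features(theta0 + 0.2 * rng.normal(size=D))  # pseudo-observation

def objective(theta):                 # feature misfit to minimize
    return np.linalg.norm(simulator_features(theta) - target)

best_z, best_val = np.zeros(K), objective(theta0)
for _ in range(200):
    z = best_z + 0.05 * rng.normal(size=K)
    val = objective(theta0 + W @ z)
    if val < best_val:
        best_z, best_val = z, val

The random-search loop here could be swapped for simulated annealing or any other gradient-free method; the point is that it operates over K coordinates rather than D.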
Related papers
- Provably Efficient Algorithm for Nonstationary Low-Rank MDPs [48.92657638730582]
We make the first effort to investigate nonstationary RL under episodic low-rank MDPs, where both transition kernels and rewards may vary over time.
We propose a parameter-dependent policy optimization algorithm called PORTAL, and further improve PORTAL to its parameter-free version of Ada-PORTAL.
For both algorithms, we provide upper bounds on the average dynamic suboptimality gap, which show that as long as the nonstationarity is not significantly large, PORTAL and Ada-PORTAL are sample-efficient and can achieve an arbitrarily small average dynamic suboptimality gap with polynomial sample complexity.
arXiv Detail & Related papers (2023-08-10T09:52:44Z)
- On stable wrapper-based parameter selection method for efficient ANN-based data-driven modeling of turbulent flows [2.0731505001992323]
This study aims to analyze and develop a reduced modeling approach based on artificial neural network (ANN) and wrapper methods.
It is found that the gradient-based subset selection to minimize the total derivative loss results in improved consistency over trials.
For the reduced turbulent Prandtl number model, the gradient-based subset selection improves the prediction in the validation case over the other methods.
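
A hedged sketch of the gradient-based selection idea just summarized: after fitting an ANN, inputs are ranked by the average sensitivity of the output to each input and the least influential ones are dropped. This only loosely approximates the wrapper-style selection; the paper's exact total-derivative loss is not reproduced, and the data and sizes are invented.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 8)                        # 8 candidate input features
y = (2 * X[:, 0] + X[:, 3] ** 2).unsqueeze(1)  # only features 0 and 3 matter

net = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    ((net(X) - y) ** 2).mean().backward()
    opt.step()

# Sensitivity of the fitted output w.r.t. each input, averaged over
# the data; inputs with small sensitivity are candidates for removal.
Xg = X.clone().requires_grad_(True)
net(Xg).sum().backward()
sensitivity = Xg.grad.abs().mean(dim=0)
keep = torch.topk(sensitivity, k=2).indices
print("selected features:", keep.tolist())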
arXiv Detail & Related papers (2023-08-04T08:26:56Z)
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow estimation accuracy to be traded off against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- An iterative multi-fidelity approach for model order reduction of multi-dimensional input parametric PDE systems [0.0]
We propose a parametric sampling strategy for the reduction of large-scale PDE systems with multidimensional input parametric spaces.
This is achieved by exploiting low-fidelity models throughout the parametric space to select sample points efficiently.
Since the proposed methodology leverages the use of low-fidelity models to assimilate the solution database, it significantly reduces the computational cost in the offline stage.
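
A simplified numpy sketch of the multi-fidelity idea: the cheap low-fidelity model is evaluated densely across the parametric space, a greedy criterion on those snapshots selects a handful of points, and the expensive high-fidelity model is run only there. The model functions and the greedy residual criterion are illustrative assumptions, not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(1)
params = rng.uniform(-1, 1, size=(200, 4))       # dense parametric samples
x = np.linspace(0, 1, 100)

def low_fidelity(p):    # coarse surrogate of the PDE solution
    return np.sin(2 * np.pi * (x + p[0])) * p[1]

def high_fidelity(p):   # expensive solver (stand-in)
    return low_fidelity(p) + 0.1 * np.cos(4 * np.pi * x * p[2]) * p[3]

# Greedy selection on low-fidelity snapshots: repeatedly add the
# parameter whose snapshot is worst represented by the current basis.
snaps = np.stack([low_fidelity(p) for p in params])        # (200, 100)
selected = [int(np.argmax(np.linalg.norm(snaps, axis=1)))]
for _ in range(7):
    basis = np.linalg.svd(snaps[selected].T, full_matrices=False)[0]
    resid = snaps - (snaps @ basis) @ basis.T
    selected.append(int(np.argmax(np.linalg.norm(resid, axis=1))))

# High-fidelity solves only at the selected points build the ROM basis.
hf_snaps = np.stack([high_fidelity(params[i]) for i in selected])
rom_basis = np.linalg.svd(hf_snaps.T, full_matrices=False)[0]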
arXiv Detail & Related papers (2023-01-23T15:25:58Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity-regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
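
For context, a short sketch of accelerated proximal gradient descent on a sparsity-regularized least-squares problem; this is the classic deterministic (FISTA-style) scheme, not the paper's doubly stochastic ADSGD variant, and the problem data are synthetic.

import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50)
x_true[:5] = rng.normal(size=5)                  # sparse ground truth
b = A @ x_true

lam = 0.1                                        # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad

def prox_l1(v, t):                               # soft-thresholding operator
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, t = np.zeros(50), np.zeros(50), 1.0
for _ in range(300):
    grad = A.T @ (A @ z - b)
    x_new = prox_l1(z - grad / L, lam / L)       # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
    x, t = x_new, t_new
print("nonzeros recovered:", np.flatnonzero(np.abs(x) > 1e-3))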
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels [67.81799703916563]
We introduce new techniques to formulate the problem as solving the Fokker-Planck equation in a lower-dimensional latent space.
Our proposed model consists of latent-distribution morphing, a generator and a parameterized Fokker-Planck kernel function.
arXiv Detail & Related papers (2021-05-10T17:42:01Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
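
The extra-gradient step at the core of such methods, sketched on the classic bilinear saddle problem min_x max_y x*y: plain simultaneous gradient steps spiral outward here, while the extrapolated update converges. This illustrates the generic scheme only, not the paper's DRSL-specific algorithm.

# f(x, y) = x * y, so df/dx = y and df/dy = x.
def grads(x, y):
    return y, x

x, y, lr = 1.0, 1.0, 0.1
for _ in range(200):
    gx, gy = grads(x, y)
    x_half, y_half = x - lr * gx, y + lr * gy    # extrapolation step
    gx, gy = grads(x_half, y_half)               # gradient at the midpoint
    x, y = x - lr * gx, y + lr * gy              # actual update
print(x, y)   # converges toward the saddle point (0, 0)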
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- High-dimensional Bayesian Optimization of Personalized Cardiac Model Parameters via an Embedded Generative Model [7.286540513944084]
We present a novel concept that embeds a generative variational auto-encoder (VAE) into the objective function of Bayesian optimization.
VAE-encoded knowledge about the generative code is used to guide the exploration of the search space.
The presented method is applied to estimating tissue excitability in a cardiac electrophysiological model.
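
A hedged sketch of the embedding idea: a (pre-trained) VAE decoder maps a low-dimensional code to the full parameter field, and Bayesian optimization runs over the code rather than the field. The linear "decoder", objective, and expected-improvement loop below are stand-ins, not the paper's cardiac model or trained VAE.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
W_dec = rng.normal(size=(2, 50))             # stand-in linear "decoder"

def decode(z):                               # latent code (2,) -> parameters (50,)
    return np.tanh(z @ W_dec)

def objective(theta):                        # e.g. model-data misfit
    return np.sum((theta - 0.3) ** 2)

Z = rng.uniform(-2, 2, size=(5, 2))          # initial design in latent space
f = np.array([objective(decode(z)) for z in Z])
for _ in range(20):
    gp = GaussianProcessRegressor().fit(Z, f)
    cand = rng.uniform(-2, 2, size=(500, 2))
    mu, sd = gp.predict(cand, return_std=True)
    gamma = (f.min() - mu) / (sd + 1e-9)     # expected improvement (minimization)
    ei = sd * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
    z_next = cand[np.argmax(ei)]
    Z = np.vstack([Z, z_next])
    f = np.append(f, objective(decode(z_next)))
print("best misfit:", f.min())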
arXiv Detail & Related papers (2020-05-15T22:14:16Z)
- Optimal statistical inference in the presence of systematic uncertainties using neural network optimization based on binned Poisson likelihoods with nuisance parameters [0.0]
This work presents a novel strategy to construct the dimensionality reduction with neural networks for feature engineering.
We discuss how this approach results in an estimate of the parameters of interest that is close to optimal.
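
A hedged sketch of the core ingredient this summary describes: a network soft-assigns events to histogram bins, expected counts depend on a signal-strength parameter, and the binned Poisson negative log-likelihood is differentiable end to end. The data, network, and nuisance-free single parameter are illustrative simplifications; this demonstrates the machinery only, not the paper's optimality construction.

import torch
import torch.nn as nn

torch.manual_seed(0)
n_bins = 5
sig = torch.randn(1000, 3) + 1.0             # "signal" events
bkg = torch.randn(5000, 3) - 0.5             # "background" events
net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, n_bins))

def expected_counts(mu):                     # soft-binned yields per bin
    s = torch.softmax(net(sig), dim=1).sum(0)
    b = torch.softmax(net(bkg), dim=1).sum(0)
    return mu * s + b

with torch.no_grad():                        # pseudo-data at mu_true = 0.5
    observed = torch.poisson(expected_counts(0.5))

mu = torch.tensor(1.0, requires_grad=True)
opt = torch.optim.Adam(list(net.parameters()) + [mu], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    lam = expected_counts(mu)
    nll = (lam - observed * torch.log(lam + 1e-9)).sum()  # Poisson NLL
    nll.backward()
    opt.step()
print("estimated signal strength:", mu.item())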
arXiv Detail & Related papers (2020-03-16T13:27:18Z)
- Misspecification-robust likelihood-free inference in high dimensions [13.934999364767918]
We introduce an extension of the popular Bayesian optimisation-based approach that approximates discrepancy functions in a probabilistic manner.
Our approach achieves computational scalability for higher dimensional parameter spaces by using separate acquisition functions and discrepancies for each parameter.
The method successfully performs computationally efficient inference in a 100-dimensional space on canonical examples and compares favourably to existing modularised ABC methods.
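
A minimal sketch of the "separate discrepancy per parameter" idea, using plain rejection ABC rather than the paper's Bayesian-optimisation surrogates: each parameter gets its own summary statistic and distance, so a misspecified statistic degrades only the parameter it informs. Model, summaries, and thresholds are invented for illustration.

import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(2.0, 0.5, size=500)          # observed data

def simulate(mean, sd):
    return rng.normal(mean, sd, size=500)

def d_mean(x):                                # discrepancy informing the mean
    return abs(x.mean() - obs.mean())

def d_sd(x):                                  # discrepancy informing the spread
    return abs(x.std() - obs.std())

accepted = {"mean": [], "sd": []}
for _ in range(5000):
    m, s = rng.uniform(0, 4), rng.uniform(0.1, 2)
    x = simulate(m, s)
    if d_mean(x) < 0.05:                      # accept per parameter, separately
        accepted["mean"].append(m)
    if d_sd(x) < 0.05:
        accepted["sd"].append(s)
print(np.mean(accepted["mean"]), np.mean(accepted["sd"]))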
arXiv Detail & Related papers (2020-02-21T16:06:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.