Sparse Bayesian Learning for Complex-Valued Rational Approximations
- URL: http://arxiv.org/abs/2206.02523v1
- Date: Mon, 6 Jun 2022 12:06:13 GMT
- Title: Sparse Bayesian Learning for Complex-Valued Rational Approximations
- Authors: Felix Schneider and Iason Papaioannou and Gerhard Müller
- Abstract summary: Surrogate models are used to alleviate the computational burden in engineering tasks.
Standard surrogate techniques struggle for models that show a strongly non-linear dependence on their input parameters.
We apply a sparse Bayesian learning approach to the rational approximation.
- Score: 0.03392423750246091
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Surrogate models are used to alleviate the computational burden in
engineering tasks, which require the repeated evaluation of computationally
demanding models of physical systems, such as the efficient propagation of
uncertainties. For models that show a strongly non-linear dependence on their
input parameters, standard surrogate techniques, such as polynomial chaos
expansion, are not sufficient to obtain an accurate representation of the
original model response. By applying a rational approximation instead, the
approximation error can be efficiently reduced for models whose non-linearity
is accurately described by a rational function. Specifically, our aim is
to approximate complex-valued models. A common approach to obtain the
coefficients in the surrogate is to minimize the sample-based error between
model and surrogate in the least-squares sense. In order to obtain an accurate
representation of the original model and to avoid overfitting, the sample set
has to be two to three times the number of polynomial terms in the expansion. For
models that require a high polynomial degree or are high-dimensional in terms
of their input parameters, this number often exceeds the affordable
computational cost. To overcome this issue, we apply a sparse Bayesian learning
approach to the rational approximation. Through a specific prior distribution
structure, sparsity is induced in the coefficients of the surrogate model. The
denominator polynomial coefficients as well as the hyperparameters of the
problem are determined through a type-II-maximum likelihood approach. We apply
a quasi-Newton gradient-descent algorithm in order to find the optimal
denominator coefficients and derive the required gradients through application
of $\mathbb{CR}$-calculus.
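Below is a minimal, hypothetical sketch (not the authors' implementation) of the two ingredients the abstract describes: a complex-valued rational surrogate fitted from samples, and a sparsity-inducing Bayesian (ARD-type) re-estimation of the expansion coefficients. For brevity, the denominator is obtained from a single linearised least-squares step rather than the paper's type-II-maximum-likelihood optimisation with quasi-Newton updates and $\mathbb{CR}$-calculus gradients; the damped-oscillator model, the polynomial degrees, and the fixed noise precision `beta` are illustrative assumptions.
```python
import numpy as np

def model(w, m=1.0, c=0.05, k=1.0):
    # Complex frequency response of a damped oscillator (stand-in for an expensive model)
    return 1.0 / (k - m * w**2 + 1j * c * w)

w = np.linspace(0.1, 2.0, 40)       # training inputs
y = model(w)                        # complex-valued training outputs

deg_num, deg_den = 2, 2
Phi_p = np.vander(w, deg_num + 1, increasing=True).astype(complex)         # numerator basis
Phi_q = np.vander(w, deg_den + 1, increasing=True)[:, 1:].astype(complex)  # denominator basis (q0 = 1 fixed)

# Linearised least squares (an assumption, replacing the paper's type-II-ML step):
# P(w) - y * (Phi_q @ q) ~ y, solved jointly for numerator p and denominator q.
A = np.hstack([Phi_p, -y[:, None] * Phi_q])
theta = np.linalg.lstsq(A, y, rcond=None)[0]
q = theta[deg_num + 1:]
den = 1.0 + Phi_q @ q

# Sparse Bayesian (ARD-style) re-estimation of the numerator coefficients.
t = y * den                         # regression targets for the numerator
alpha = np.ones(deg_num + 1)        # per-coefficient prior precisions
beta = 1e4                          # noise precision, assumed known here
for _ in range(50):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi_p.conj().T @ Phi_p)
    mu = beta * Sigma @ Phi_p.conj().T @ t
    alpha = 1.0 / (np.abs(mu) ** 2 + np.real(np.diag(Sigma)))  # EM-style ARD update

surrogate = (Phi_p @ mu) / den
print("max abs error :", np.max(np.abs(surrogate - y)))
print("numerator mean:", np.round(mu, 4))   # large alpha prunes unneeded terms
```
In the paper itself the denominator coefficients and hyperparameters are instead found by maximising the type-II likelihood with a quasi-Newton scheme, so the sketch above should only be read as an illustration of the surrogate structure and of how an ARD-type prior induces sparsity in the coefficients.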
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate the approach on intrinsic and robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
arXiv Detail & Related papers (2024-04-23T09:22:35Z) - Gradient-based bilevel optimization for multi-penalty Ridge regression
through matrix differential calculus [0.46040036610482665]
We introduce a gradient-based approach to the problem of linear regression with l2-regularization.
We show that our approach outperforms LASSO, Ridge, and Elastic Net regression.
The analytical computation of the gradient proves to be more efficient in terms of computational time than automatic differentiation.
arXiv Detail & Related papers (2023-11-23T20:03:51Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - GenMod: A generative modeling approach for spectral representation of
PDEs with random inputs [0.0]
We present an approach where we assume the coefficients are close to the range of a generative model that maps from a low to a high dimensional space of coefficients.
Using PDE theory on decay rates, we construct an explicit generative model that predicts the polynomial chaos coefficient magnitudes.
arXiv Detail & Related papers (2022-01-31T02:56:20Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
The characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Variational Inference with NoFAS: Normalizing Flow with Adaptive
Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z) - Surrogate Models for Optimization of Dynamical Systems [0.0]
This paper provides a smart data driven mechanism to construct low dimensional surrogate models.
These surrogate models reduce the computational time for solution of the complex optimization problems.
arXiv Detail & Related papers (2021-01-22T14:09:30Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical objective based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)