Stochastic spectral embedding
- URL: http://arxiv.org/abs/2004.04480v2
- Date: Fri, 26 Jun 2020 08:02:44 GMT
- Title: Stochastic spectral embedding
- Authors: S. Marelli, P.-R. Wagner, C. Lataniotis and B. Sudret
- Abstract summary: We propose a novel sequential adaptive surrogate modeling method based on "stochastic spectral embedding" (SSE)
We show how the method compares favorably against state-of-the-art sparse polynomial chaos expansions on a set of models with different complexity and input dimension.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constructing approximations that can accurately mimic the behavior of complex
models at reduced computational costs is an important aspect of uncertainty
quantification. Despite their flexibility and efficiency, classical surrogate
models such as Kriging or polynomial chaos expansions tend to struggle with
highly non-linear, localized or non-stationary computational models. We hereby
propose a novel sequential adaptive surrogate modeling method based on
recursively embedding locally spectral expansions. It is achieved by means of
disjoint recursive partitioning of the input domain, which consists of
sequentially splitting the latter into smaller subdomains and constructing
simpler local spectral expansions in each, exploiting the trade-off between
complexity and locality. The resulting expansion, which we refer to as "stochastic
spectral embedding" (SSE), is a piece-wise continuous approximation of the
model response that shows promising approximation capabilities, and good
scaling with both the problem dimension and the size of the training set. We
finally show how the method compares favorably against state-of-the-art sparse
polynomial chaos expansions on a set of models with different complexity and
input dimension.
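To make the recursive construction described above concrete, the following is a minimal one-dimensional sketch of the idea: fit a low-degree polynomial expansion on a domain, then recursively split the domain and fit further expansions to the residuals in each subdomain, so that the surrogate is the sum of all embedded local expansions. The function names (`sse_fit`, `sse_eval`), the bisection splitting rule, and the residual-based stopping criterion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_poly(x, y, degree):
    """Least-squares fit of a low-degree polynomial expansion
    (stand-in for a local spectral expansion in 1-D)."""
    return np.polynomial.Polynomial.fit(x, y, degree)

def sse_fit(x, y, lo, hi, degree=2, max_depth=3, tol=1e-4):
    """Recursively embed local expansions on the residual.
    Returns a list of (lo, hi, polynomial) terms whose sum
    approximates y(x) on the half-open interval [lo, hi)."""
    mask = (x >= lo) & (x < hi)
    p = fit_poly(x[mask], y[mask], degree)
    resid = y[mask] - p(x[mask])
    terms = [(lo, hi, p)]
    # Split further only while the local residual is large and
    # the depth budget allows it (complexity vs. locality trade-off).
    if max_depth > 0 and np.sqrt(np.mean(resid ** 2)) > tol:
        mid = 0.5 * (lo + hi)  # bisect into two disjoint subdomains
        y_res = y.copy()
        y_res[mask] = resid    # children approximate the residual
        terms += sse_fit(x, y_res, lo, mid, degree, max_depth - 1, tol)
        terms += sse_fit(x, y_res, mid, hi, degree, max_depth - 1, tol)
    return terms

def sse_eval(terms, x):
    """Evaluate the surrogate: sum every embedded local expansion
    whose subdomain contains each query point."""
    out = np.zeros_like(x, dtype=float)
    for lo, hi, p in terms:
        m = (x >= lo) & (x < hi)
        out[m] += p(x[m])
    return out
```

On a function with a kink such as `abs(x - 0.4)`, the smooth subdomains stop refining almost immediately (a quadratic reproduces linear segments exactly), while the subdomain containing the kink keeps splitting, which is the localization behavior the abstract describes.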
Related papers
- Nonstationary Sparse Spectral Permanental Process [24.10531062895964]
We propose a novel approach utilizing the sparse spectral representation of nonstationary kernels.
This technique relaxes the constraints on kernel types and stationarity, allowing for more flexible modeling.
Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-04T16:40:56Z)
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres.
arXiv Detail & Related papers (2023-08-29T15:06:47Z)
- Multi-Response Heteroscedastic Gaussian Process Models and Their Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
arXiv Detail & Related papers (2023-08-29T15:06:47Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetries.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Active Learning-based Domain Adaptive Localized Polynomial Chaos Expansion [0.0]
The paper presents a novel methodology to build surrogate models of complicated functions by an active learning-based sequential decomposition of the input random space and construction of localized chaos expansions.
The approach utilizes sequential decomposition of the input random space into smaller sub-domains approximated by low-order expansions.
arXiv Detail & Related papers (2023-01-31T13:49:52Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
arXiv Detail & Related papers (2022-06-14T12:30:11Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Heterogeneous Tensor Mixture Models in High Dimensions [5.656785831541303]
We introduce a flexible high-dimensional tensor mixture model with heterogeneous covariances.
We show that our method converges geometrically to a neighborhood of the true parameter within statistical precision.
Our analysis identifies important brain regions for the diagnosis of autism spectrum disorder.
arXiv Detail & Related papers (2021-04-15T21:06:16Z)
- Posterior-Aided Regularization for Likelihood-Free Inference [23.708122045184698]
Posterior-Aided Regularization (PAR) is applicable to learning the density estimator, regardless of the model structure.
We provide a unified estimation method of PAR to estimate both the reverse KL term and the mutual information term with a single neural network.
arXiv Detail & Related papers (2021-02-15T16:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.