Semi-supervised deep learning for high-dimensional uncertainty quantification
- URL: http://arxiv.org/abs/2006.01010v1
- Date: Mon, 1 Jun 2020 15:15:42 GMT
- Title: Semi-supervised deep learning for high-dimensional uncertainty quantification
- Authors: Zequn Wang and Mingyang Li
- Abstract summary: This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis.
An autoencoder is first adopted for mapping the high-dimensional space into a low-dimensional latent space.
A deep feedforward neural network is utilized to learn the mapping relationship and reconstruct the latent space.
- Score: 6.910275451003041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional uncertainty quantification methods usually lack the
capability to deal with high-dimensional problems due to the curse of dimensionality.
This paper presents a semi-supervised learning framework for dimension
reduction and reliability analysis. An autoencoder is first adopted for mapping
the high-dimensional space into a low-dimensional latent space, which contains
a distinguishable failure surface. Then a deep feedforward neural network (DFN)
is utilized to learn the mapping relationship and reconstruct the latent space,
while the Gaussian process (GP) modeling technique is used to build the
surrogate model of the transformed limit state function. During the training
process of the DFN, the discrepancy between the actual and reconstructed latent
space is minimized through semi-supervised learning to ensure accuracy.
Both labeled and unlabeled samples are utilized for defining the loss function
of the DFN. An evolutionary algorithm is adopted to train the DFN, and then the Monte
Carlo simulation method is used for uncertainty quantification and reliability
analysis based on the proposed framework. The effectiveness is demonstrated
through a mathematical example.
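The last stage of the framework, Monte Carlo reliability analysis over a latent-space surrogate, can be sketched in a few lines. Here the linear `encode` map and the analytic `g_surrogate` limit state are illustrative stand-ins for the trained autoencoder and the GP surrogate; the dimensions, seed, and failure threshold are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder: a fixed linear map from a 20-D input space to a
# 2-D latent space (the paper trains an autoencoder for this step).
W = rng.standard_normal((20, 2)) / np.sqrt(20)
encode = lambda x: x @ W

# Stand-in surrogate of the transformed limit state function in latent
# space (the paper fits a Gaussian process here); failure when g(z) < 0.
def g_surrogate(z):
    return 3.0 - np.linalg.norm(z, axis=1)

# Monte Carlo reliability analysis on the surrogate: draw inputs,
# encode them, and estimate the failure probability P_f = P[g(Z) < 0].
x = rng.standard_normal((100_000, 20))
p_f = np.mean(g_surrogate(encode(x)) < 0.0)
print(f"estimated failure probability: {p_f:.4f}")
```

Because every Monte Carlo sample is evaluated on the cheap surrogate rather than the original high-dimensional model, very large sample sizes remain affordable.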
Related papers
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z) - NeurAM: nonlinear dimensionality reduction for uncertainty quantification through neural active manifolds [0.6990493129893112]
We leverage autoencoders to discover a one-dimensional neural active manifold (NeurAM) capturing the model output variability.
We show how NeurAM can be used to obtain multifidelity sampling estimators with reduced variance.
arXiv Detail & Related papers (2024-08-07T04:27:58Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - Distance Preserving Machine Learning for Uncertainty Aware Accelerator Capacitance Predictions [1.1776336798216411]
Deep neural networks and Gaussian process approximation techniques have shown promising results, but dimensionality reduction through standard deep neural network layers is not guaranteed to maintain the distance information necessary for Gaussian process models.
We build on previous work by comparing the use of the singular value decomposition against a spectral-normalized dense layer as a feature extractor for a deep neural Gaussian process approximation model.
Our model shows improved distance preservation and predicts in-distribution capacitance values with less than 1% error.
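The distance-preservation property of a spectral-normalized dense layer can be checked directly: dividing the weight matrix by its largest singular value makes the linear map 1-Lipschitz, so feature-space distances never exceed input distances. The shapes below are arbitrary assumptions, and a real layer's bias and nonlinearity are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Spectral normalization: scale the weights so the largest singular
# value is exactly 1, making the linear map 1-Lipschitz.
W = rng.standard_normal((16, 8))
W_sn = W / np.linalg.svd(W, compute_uv=False)[0]

features = lambda x: x @ W_sn  # linear part of the layer only

a, b = rng.standard_normal(16), rng.standard_normal(16)
d_in = np.linalg.norm(a - b)
d_out = np.linalg.norm(features(a) - features(b))
assert d_out <= d_in + 1e-9  # the Lipschitz bound holds
print(d_out / d_in)
```

An unnormalized dense layer gives no such bound, which is why distances fed to a downstream Gaussian process can become distorted.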
arXiv Detail & Related papers (2023-07-05T15:32:39Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
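The stability benefit of implicit updates can be seen on a toy stiff quadratic loss. This sketch contrasts explicit and implicit gradient steps as an illustration of the general ISGD idea, not of the paper's actual PINN training setup; the Hessian and step size are made-up values chosen to exaggerate the effect.

```python
import numpy as np

# Toy stiff quadratic loss f(x) = 0.5 * x^T A x with an
# ill-conditioned Hessian: one gentle mode, one stiff mode.
A = np.diag([1.0, 50.0])
eta = 0.1  # too large for explicit GD on the stiff mode (|1 - 0.1*50| > 1)
I = np.eye(2)

x_exp = np.array([1.0, 1.0])
x_imp = np.array([1.0, 1.0])
for _ in range(100):
    x_exp = x_exp - eta * A @ x_exp              # explicit step: diverges
    x_imp = np.linalg.solve(I + eta * A, x_imp)  # implicit step: contracts

print(np.linalg.norm(x_exp), np.linalg.norm(x_imp))
```

The implicit update x_{k+1} = (I + eta*A)^{-1} x_k contracts for any positive step size, which is the stability property that motivates implicit training schemes.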
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z) - Bayesian deep learning framework for uncertainty quantification in high dimensions [6.282068591820945]
We develop a novel deep learning method for uncertainty quantification in partial differential equations based on Bayesian neural networks (BNN) and Hamiltonian Monte Carlo (HMC).
A BNN efficiently learns the posterior distribution of the parameters in deep neural networks by performing Bayesian inference on the network parameters.
The posterior distribution is efficiently sampled using HMC to quantify uncertainties in the system.
arXiv Detail & Related papers (2022-10-21T05:20:06Z) - Deep subspace encoders for continuous-time state-space identification [0.0]
Continuous-time (CT) models have shown an improved sample efficiency during learning.
The multifaceted CT state-space model identification problem remains to be solved in full.
This paper presents a novel estimation method that includes these aspects and that is able to obtain state-of-the-art results.
arXiv Detail & Related papers (2022-04-20T11:55:17Z) - Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification [44.598503284186336]
Conditional-Flow NeRF (CF-NeRF) is a novel probabilistic framework to incorporate uncertainty quantification into NeRF-based approaches.
CF-NeRF learns a distribution over all possible radiance fields modelling the scene, which is used to quantify the uncertainty associated with the modelled scene.
arXiv Detail & Related papers (2022-03-18T23:26:20Z) - Robustness Analysis of Neural Networks via Efficient Partitioning with
Applications in Control Systems [45.35808135708894]
Neural networks (NNs) are now routinely implemented on systems that must operate in uncertain environments.
This paper unifies propagation and partition approaches to provide a family of robustness analysis algorithms.
New partitioning techniques are aware of their current bound estimates and desired boundary shape.
arXiv Detail & Related papers (2020-10-01T16:51:36Z) - Higher-order Quasi-Monte Carlo Training of Deep Neural Networks [0.0]
We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design.
arXiv Detail & Related papers (2020-09-06T11:31:42Z)
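The low-discrepancy point sets that Quasi-Monte Carlo training relies on can be illustrated with the classic base-2 van der Corput sequence. This is a simple 1-D construction for intuition only, not the higher-order QMC points analyzed in the last paper above; the test integrand is an arbitrary choice.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence."""
    pts = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            denom *= base          # next digit position
            q, r = divmod(q, base)
            x += r / denom         # reflect the digit about the radix point
        pts[i] = x
    return pts

# Integrate f(x) = x^2 on [0, 1] (exact value 1/3) with QMC vs plain MC.
n = 4096
qmc_est = np.mean(van_der_corput(n) ** 2)
mc_est = np.mean(np.random.default_rng(3).uniform(size=n) ** 2)
print(abs(qmc_est - 1/3), abs(mc_est - 1/3))
```

For smooth integrands the QMC error decays close to O(log n / n), versus the O(n^{-1/2}) rate of plain Monte Carlo, which is what motivates QMC training points for DNN surrogates.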
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.