Deep Capsule Encoder-Decoder Network for Surrogate Modeling and
Uncertainty Quantification
- URL: http://arxiv.org/abs/2201.07753v1
- Date: Wed, 19 Jan 2022 17:45:01 GMT
- Title: Deep Capsule Encoder-Decoder Network for Surrogate Modeling and
Uncertainty Quantification
- Authors: Akshay Thakur and Souvik Chakraborty
- Abstract summary: The proposed framework is developed by adapting the Capsule Network (CapsNet) architecture into an image-to-image regression encoder-decoder network.
The obtained results from performance evaluation indicate that the proposed approach is accurate, efficient, and robust.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a novel \textit{capsule}-based deep encoder-decoder model for
surrogate modeling and uncertainty quantification of systems in mechanics from
sparse data. The proposed framework is developed by adapting the Capsule Network
(CapsNet) architecture into an image-to-image regression encoder-decoder network.
Specifically, the aim is to exploit the benefits of CapsNet over the convolutional
neural network (CNN), such as retaining the pose and position information related to an
entity. The performance of the proposed approach is illustrated by solving an
uncertainty quantification problem, with an input dimensionality of $1024$, based on an
elliptic stochastic partial differential equation (SPDE); this SPDE also governs
systems in mechanics such as steady heat conduction, groundwater flow, and other
diffusion processes. Notably, the problem definition does not restrict the random
diffusion field to a particular covariance structure, and the more strenuous task of
response prediction for an arbitrary diffusion field is solved. The results obtained
from the performance evaluation indicate that the proposed approach is accurate,
efficient, and robust.
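The capsule idea named in the abstract replaces scalar activations with vector-valued "capsules" whose length encodes presence and whose direction encodes pose. The defining CapsNet operation is the "squash" nonlinearity, which rescales a capsule's output to length in [0, 1) while preserving its orientation. A minimal NumPy sketch of that operation (an illustration of the general CapsNet building block, not the authors' released code):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """CapsNet 'squash' nonlinearity: shrinks the vector norm into
    [0, 1) while keeping direction, so length can act as a probability."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

# A batch of three capsules with 4-dimensional pose vectors.
caps = np.array([[1.0, 0.0, 0.0, 0.0],
                 [10.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])
out = squash(caps)
# Output norms stay below 1; longer inputs map closer to 1.
print(np.linalg.norm(out, axis=-1))
```

A vector of norm 1 maps to norm 0.5, a vector of norm 10 maps to roughly 0.99, and the zero vector stays at zero, so the squashed length behaves like a soft presence score.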
Related papers
- Positional Encoder Graph Quantile Neural Networks for Geographic Data [4.277516034244117]
We introduce the Positional Graph Quantile Neural Network (PE-GQNN), a novel method that integrates PE-GNNs, Quantile Neural Networks, and recalibration techniques in a fully nonparametric framework.
Experiments on benchmark datasets demonstrate that PE-GQNN significantly outperforms existing state-of-the-art methods in both predictive accuracy and uncertainty quantification.
arXiv Detail & Related papers (2024-09-27T16:02:12Z)
- A deep neural network framework for dynamic multi-valued mapping estimation and its applications [3.21704928672212]
This paper introduces a deep neural network framework incorporating a generative network and a classification component.
The objective is to model the dynamic multi-valued mapping between the input and output by providing a reliable uncertainty measurement.
Experimental results show that our framework accurately estimates the dynamic multi-valued mapping with uncertainty estimation.
arXiv Detail & Related papers (2024-06-29T03:26:51Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
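The Nadaraya-Watson estimate mentioned above is a kernel-weighted average of training labels. A toy NumPy sketch (an illustration of the general estimator, not the NUQ authors' code) showing how the estimated conditional label distribution becomes uncertain between classes:

```python
import numpy as np

def nw_conditional(x_query, X_train, Y_onehot, h=0.5):
    """Nadaraya-Watson estimate of the conditional label distribution:
    p(y|x) ~ sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h),
    with a Gaussian kernel K of bandwidth h."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    return w @ Y_onehot / np.sum(w)

# Two well-separated classes in 1-D.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(nw_conditional(np.array([0.05]), X, Y))  # ~[1, 0]: confident class 0
print(nw_conditional(np.array([2.55]), X, Y))  # [0.5, 0.5]: maximal uncertainty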
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- A deep learning based surrogate model for stochastic simulators [0.0]
We propose a deep learning-based surrogate model for stochastic simulators.
We utilize conditional maximum mean discrepancy (CMMD) as the loss-function.
Results obtained indicate the excellent performance of the proposed approach.
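The CMMD loss used in that paper compares distributions rather than pointwise predictions. As a hedged illustration of the underlying idea, here is a plain (unconditional) maximum mean discrepancy with a Gaussian kernel; CMMD extends this to conditional distributions:

```python
import numpy as np

def mmd2(X, Y, h=1.0):
    """Biased estimate of squared MMD between samples X and Y under a
    Gaussian kernel: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * h ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 1)), rng.normal(0, 1, (200, 1)))
diff = mmd2(rng.normal(0, 1, (200, 1)), rng.normal(3, 1, (200, 1)))
print(same, diff)  # near 0 for matched distributions, large when they differ
```

Because the statistic is near zero when two samples come from the same distribution and grows with the mismatch, it can be minimized as a training loss to make a stochastic surrogate's output distribution match the simulator's.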
arXiv Detail & Related papers (2021-10-24T11:38:47Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art attention in accuracy, uncertainty estimation, generalization across domains, and adversarial attacks.
arXiv Detail & Related papers (2021-06-09T17:46:22Z)
- Fixed Point Networks: Implicit Depth Models with Jacobian-Free Backprop [21.00060644438722]
A growing trend in deep learning replaces fixed depth models by approximations of the limit as network depth approaches infinity.
In particular, backpropagation through implicit depth models requires solving a Jacobian-based equation arising from the implicit function theorem.
We propose fixed point networks (FPNs) that guarantee convergence of forward propagation to a unique limit defined by network weights and input data.
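The implicit-depth idea in that entry can be shown in a few lines: instead of stacking a finite number of layers, iterate one contractive layer until its output stops changing. A toy NumPy sketch with hypothetical weights (not the FPN authors' code):

```python
import numpy as np

def fixed_point_forward(W, U, x, tol=1e-10, max_iter=500):
    """Iterate z <- tanh(W z + U x) to convergence. If the map is a
    contraction (e.g. spectral norm of W below 1), the limit z* exists
    and is unique, playing the role of an infinite-depth activation."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = 0.25 * rng.standard_normal((8, 8)) / np.sqrt(8)  # scaled to contract
U = rng.standard_normal((8, 3))
x = np.array([1.0, -0.5, 2.0])
z_star = fixed_point_forward(W, U, x)
# z* satisfies the implicit equation z* = tanh(W z* + U x).
print(np.max(np.abs(z_star - np.tanh(W @ z_star + U @ x))))  # ~0
```

The residual of the implicit equation at the returned point is essentially zero, which is the property backpropagation through such models exploits via the implicit function theorem.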
arXiv Detail & Related papers (2021-03-23T19:20:33Z)
- AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks [0.7378164273177589]
Existing uniform and non-uniform quantization methods exhibit an inherent conflict between the representing range and representing resolution.
We propose a novel method to quantize weights and activations.
The key idea is to Approximate the Uniform quantization by Adaptively Superposing multiple Non-uniform quantized values, namely AUSN.
arXiv Detail & Related papers (2020-07-08T05:10:53Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models.
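The single-forward-pass rejection described there can be caricatured with class centroids: accept a point if it lies close to some class centroid in feature space, reject it otherwise. The actual method learns RBF centroids end to end; this NumPy sketch is a heavily simplified, hypothetical stand-in:

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    """Mean feature vector per class."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict_or_reject(x, centroids, threshold=2.0):
    """One 'forward pass': nearest centroid wins; far from all -> reject."""
    d = np.linalg.norm(centroids - x, axis=1)
    return int(np.argmin(d)) if d.min() < threshold else None  # None = OOD

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
C = fit_centroids(X, y, 2)
print(predict_or_reject(np.array([0.1, 0.0]), C))    # 0: in-distribution
print(predict_or_reject(np.array([20.0, 20.0]), C))  # None: rejected as OOD
```

A single distance computation both classifies and flags out-of-distribution inputs, which is the efficiency argument relative to ensemble- or sampling-based uncertainty estimates.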
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences arising from its use.