Multifidelity data fusion in convolutional encoder/decoder networks
- URL: http://arxiv.org/abs/2205.05187v1
- Date: Tue, 10 May 2022 21:51:22 GMT
- Title: Multifidelity data fusion in convolutional encoder/decoder networks
- Authors: Lauren Partin, Gianluca Geraci, Ahmad Rushdi, Michael S. Eldred and Daniele E. Schiavazzi
- Abstract summary: We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders and skip connections.
We demonstrate their accuracy when trained on a few high-fidelity and many low-fidelity data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyze the regression accuracy of convolutional neural networks assembled
from encoders, decoders and skip connections and trained with multifidelity
data. Besides requiring significantly fewer trainable parameters than equivalent
fully connected networks, encoder, decoder, encoder-decoder or decoder-encoder
architectures can learn the mapping from inputs to outputs of arbitrary
dimensionality. We demonstrate their accuracy when trained on a few
high-fidelity and many low-fidelity data generated from models ranging from
one-dimensional functions to Poisson equation solvers in two dimensions. We
finally discuss a number of implementation choices that improve the reliability
of the uncertainty estimates generated by Monte Carlo DropBlocks, and compare
uncertainty estimates among low-, high- and multifidelity approaches.
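A minimal sketch of the kind of network the abstract describes (not the authors' exact architecture): a small convolutional encoder/decoder with a skip connection, mapping a low-fidelity 2D field to a high-fidelity one, plus a Monte Carlo prediction routine that keeps the stochastic layers active at test time. The paper uses Monte Carlo DropBlock; here torch.nn.Dropout2d stands in as a simpler spatial-dropout layer, and all channel counts, shapes and layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """Toy encoder/decoder with one skip connection for field-to-field regression.
    Assumes even spatial dimensions so the up-sampled grid matches the input grid."""

    def __init__(self, in_ch=1, out_ch=1, width=16, p_drop=0.1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU())
        self.drop = nn.Dropout2d(p_drop)  # stochastic layer, re-sampled at prediction time
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(2 * width, width, 4, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Conv2d(2 * width, out_ch, 3, padding=1)  # 2*width after skip concat

    def forward(self, x):
        e1 = self.enc1(x)                         # (B, w, H, W)
        e2 = self.drop(self.enc2(e1))             # (B, 2w, H/2, W/2)
        d1 = self.dec1(e2)                        # back to (B, w, H, W)
        return self.dec2(torch.cat([d1, e1], 1))  # skip connection from the encoder


@torch.no_grad()
def mc_predict(model, x, n_samples=50):
    """Draw repeated stochastic forward passes and return (mean, std) as a
    heuristic predictive-uncertainty estimate."""
    model.eval()
    for m in model.modules():          # re-enable only the dropout layers
        if isinstance(m, nn.Dropout2d):
            m.train()
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)
```

In a multifidelity setting like the one in the abstract, a network of this kind could first be trained on the abundant low-fidelity fields and then fine-tuned on the few high-fidelity samples; the mean/std pair from mc_predict is the kind of quantity the low-, high- and multifidelity uncertainty comparisons would be based on. This training strategy is a common choice, not necessarily the one used in the paper.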
Related papers
- Multi-Fidelity Bayesian Neural Network for Uncertainty Quantification in Transonic Aerodynamic Loads [0.0]
This paper implements a multi-fidelity Bayesian neural network model that applies transfer learning to fuse data generated by models at different fidelities.
The results demonstrate that the multi-fidelity Bayesian model outperforms the state-of-the-art Co-Kriging in terms of overall accuracy and robustness on unseen data.
arXiv Detail & Related papers (2024-07-08T07:34:35Z)
- Multi-Fidelity Residual Neural Processes for Scalable Surrogate Modeling [19.60087366873302]
Multi-fidelity surrogate modeling aims to learn an accurate surrogate at the highest fidelity level.
Deep learning approaches utilize neural network based encoders and decoders to improve scalability.
We propose Multi-fidelity Residual Neural Processes (MFRNP), a novel multi-fidelity surrogate modeling framework.
arXiv Detail & Related papers (2024-02-29T04:40:25Z)
- Residual Multi-Fidelity Neural Network Computing [0.0]
We present a residual multi-fidelity computational framework that formulates the correlation between models as a residual function (see the generic sketch after this list).
We show that dramatic savings in computational cost may be achieved when the output predictions are desired to be accurate within small tolerances.
arXiv Detail & Related papers (2023-10-05T14:43:16Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Neural network decoder for near-term surface-code experiments [0.7100520098029438]
Neural-network decoders can achieve a lower logical error rate compared to conventional decoders.
These decoders require no prior information about the physical error rates, making them highly adaptable.
arXiv Detail & Related papers (2023-07-06T20:31:25Z)
- The END: An Equivariant Neural Decoder for Quantum Error Correction [73.4384623973809]
We introduce a data efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state of the art accuracy compared to previous neural decoders.
arXiv Detail & Related papers (2023-04-14T19:46:39Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Variational Autoencoders: A Harmonic Perspective [79.49579654743341]
We study Variational Autoencoders (VAEs) from the perspective of harmonic analysis.
We show that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
arXiv Detail & Related papers (2021-05-31T10:39:25Z)
- A Learning-Based Approach to Address Complexity-Reliability Tradeoff in OS Decoders [32.35297363281744]
We show that using artificial neural networks to predict the required order of an ordered statistics based decoder helps in reducing the average complexity and hence the latency of the decoder.
arXiv Detail & Related papers (2021-03-05T18:22:20Z)
- Suppress and Balance: A Simple Gated Network for Salient Object Detection [89.88222217065858]
We propose a simple gated network (GateNet) to solve both issues at once.
With the help of multilevel gate units, the valuable context information from the encoder can be optimally transmitted to the decoder.
In addition, we adopt the atrous spatial pyramid pooling based on the proposed "Fold" operation (Fold-ASPP) to accurately localize salient objects of various scales.
arXiv Detail & Related papers (2020-07-16T02:00:53Z)
- On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
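The Residual Multi-Fidelity Neural Network Computing entry above formulates the link between fidelities as a residual function. A generic, hedged sketch of that residual-correction idea (not the code of that paper; names, layer sizes and the training loop are purely illustrative) is:

```python
import torch
import torch.nn as nn


def fit_residual_correction(lf_model, x_hf, y_hf, epochs=500, lr=1e-3):
    """Learn delta(x) ~ y_HF(x) - y_LF(x) from scarce high-fidelity data,
    then return a corrected surrogate lf_model(x) + delta(x)."""
    delta = nn.Sequential(nn.Linear(x_hf.shape[1], 64), nn.Tanh(),
                          nn.Linear(64, y_hf.shape[1]))
    opt = torch.optim.Adam(delta.parameters(), lr=lr)
    with torch.no_grad():
        target = y_hf - lf_model(x_hf)            # residual the network must capture
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(delta(x_hf), target)
        loss.backward()
        opt.step()
    return lambda x: lf_model(x) + delta(x)       # cheap LF call plus learned correction
```

Because the residual is often smoother and smaller in magnitude than the high-fidelity response itself, it can frequently be fitted from far fewer high-fidelity samples, which is the usual source of the computational savings such methods report.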
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.