Multi-fidelity regression using artificial neural networks: efficient
approximation of parameter-dependent output quantities
- URL: http://arxiv.org/abs/2102.13403v1
- Date: Fri, 26 Feb 2021 11:29:00 GMT
- Title: Multi-fidelity regression using artificial neural networks: efficient
approximation of parameter-dependent output quantities
- Authors: Mengwu Guo, Andrea Manzoni, Maurice Amendt, Paolo Conti, Jan S.
Hesthaven
- Abstract summary: We present the use of artificial neural networks applied to multi-fidelity regression problems.
The introduced models are compared against a traditional multi-fidelity scheme, co-kriging.
We also show an application of multi-fidelity regression to an engineering problem.
- Score: 0.17499351967216337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Highly accurate numerical or physical experiments are often time-consuming or
expensive to obtain. When time or budget restrictions prohibit the generation
of additional data, the amount of available samples may be too limited to
provide satisfactory model results. Multi-fidelity methods deal with such
problems by incorporating information from other sources, which are ideally
well-correlated with the high-fidelity data, but can be obtained at a lower
cost. By leveraging correlations between different data sets, multi-fidelity
methods often yield superior generalization when compared to models based
solely on a small amount of high-fidelity data. In this work, we present the
use of artificial neural networks applied to multi-fidelity regression
problems. By elaborating a few existing approaches, we propose new neural
network architectures for multi-fidelity regression. The introduced models are
compared against a traditional multi-fidelity scheme, co-kriging. A collection
of artificial benchmarks is presented to measure the performance of the
analyzed models. The results show that cross-validation in combination with
Bayesian optimization consistently leads to neural network models that
outperform the co-kriging scheme. Additionally, we show an application of
multi-fidelity regression to an engineering problem. The propagation of a
pressure wave into an acoustic horn with parametrized shape and frequency is
considered, and the index of reflection intensity is approximated using the
multi-fidelity models. A finite element model and a reduced basis model are
adopted as the high- and low-fidelity models, respectively. It is shown that
the multi-fidelity neural network returns outputs with accuracy comparable to
that of the expensive, full-order model, using only a few full-order
evaluations combined with a larger number of inaccurate but cheap evaluations
of the reduced-order model.
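The multi-fidelity idea in the abstract can be illustrated with a minimal sketch: a small network is given both the input parameter and the cheap low-fidelity output, and learns the map to the high-fidelity quantity from only a handful of expensive samples. Everything below is a hypothetical toy setup (the fidelity pair, architecture, and training loop are not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark pair (illustrative, not from the paper): the
# low-fidelity model is a cheap, biased version of the high-fidelity one.
def f_high(x):
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def f_low(x):
    return 0.5 * f_high(x) + 10.0 * (x - 0.5) - 5.0

# Only a handful of expensive high-fidelity samples are available.
x_hf = np.array([0.0, 0.3, 0.6, 1.0])

# A common multi-fidelity architecture: the network takes (x, y_LF(x))
# as input and learns the map to y_HF(x).
X = np.column_stack([x_hf, f_low(x_hf)])
y = f_high(x_hf)[:, None]

# Standardize inputs and outputs so the tanh units stay well-conditioned.
X_mu, X_sd = X.mean(0), X.std(0)
y_mu, y_sd = y.mean(), y.std()
Xn, yn = (X - X_mu) / X_sd, (y - y_mu) / y_sd

# One-hidden-layer MLP trained with full-batch gradient descent.
n_h = 16
W1 = rng.normal(0.0, 0.5, (2, n_h)); b1 = np.zeros(n_h)
W2 = rng.normal(0.0, 0.5, (n_h, 1)); b2 = np.zeros(1)

def forward(Z):
    h = np.tanh(Z @ W1 + b1)
    return h, h @ W2 + b2

losses, lr = [], 0.05
for _ in range(5000):
    h, pred = forward(Xn)
    err = pred - yn
    losses.append(float(np.mean(err ** 2)))
    g = 2.0 * err / len(Xn)              # d(loss)/d(pred)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1, gb1 = Xn.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(x):
    """Multi-fidelity prediction: evaluate the cheap model, then the net."""
    Z = (np.column_stack([x, f_low(x)]) - X_mu) / X_sd
    return (forward(Z)[1] * y_sd + y_mu).ravel()

print(f"training MSE: {losses[0]:.3f} -> {losses[-1]:.5f}")
```

Because the low-fidelity output enters as an input feature, the network only has to learn the (typically simpler) map from low- to high-fidelity, which is what makes the very small high-fidelity budget workable.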
Related papers
- Practical multi-fidelity machine learning: fusion of deterministic and Bayesian models [0.34592277400656235]
Multi-fidelity machine learning methods integrate scarce, resource-intensive high-fidelity data with abundant but less accurate low-fidelity data.
We propose a practical multi-fidelity strategy for problems spanning low- and high-dimensional domains.
arXiv Detail & Related papers (2024-07-21T10:40:50Z)
- Multi-Fidelity Bayesian Neural Network for Uncertainty Quantification in Transonic Aerodynamic Loads [0.0]
This paper implements a multi-fidelity Bayesian neural network model that applies transfer learning to fuse data generated by models at different fidelities.
The results demonstrate that the multi-fidelity Bayesian model outperforms the state-of-the-art Co-Kriging in terms of overall accuracy and robustness on unseen data.
arXiv Detail & Related papers (2024-07-08T07:34:35Z)
- Multi-Fidelity Residual Neural Processes for Scalable Surrogate Modeling [19.60087366873302]
Multi-fidelity surrogate modeling aims to learn an accurate surrogate at the highest fidelity level.
Deep learning approaches utilize neural network based encoders and decoders to improve scalability.
We propose Multi-fidelity Residual Neural Processes (MFRNP), a novel multi-fidelity surrogate modeling framework.
arXiv Detail & Related papers (2024-02-29T04:40:25Z)
- Residual Multi-Fidelity Neural Network Computing [0.0]
We present a residual multi-fidelity computational framework that formulates the correlation between models as a residual function.
We show that dramatic savings in computational cost may be achieved when the output predictions are desired to be accurate within small tolerances.
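The residual formulation can be sketched in a few lines: fit a cheap correction model to the observed discrepancy between the two fidelities, then add it back to the low-fidelity output. The function pair below is hypothetical, and a low-order polynomial stands in for the paper's network:

```python
import numpy as np

def f_high(x): return np.sin(8.0 * x) + x   # hypothetical expensive model
def f_low(x):  return np.sin(8.0 * x)       # cheap, biased approximation

x_hf = np.linspace(0.0, 1.0, 5)             # few expensive samples
residual = f_high(x_hf) - f_low(x_hf)       # observed model discrepancy

# Fit the residual with a low-order polynomial (stand-in for the network).
coeffs = np.polyfit(x_hf, residual, deg=2)

def f_mf(x):
    """Multi-fidelity surrogate: low-fidelity output + learned residual."""
    return f_low(x) + np.polyval(coeffs, x)

x_test = np.linspace(0.0, 1.0, 101)
max_err = np.max(np.abs(f_mf(x_test) - f_high(x_test)))
print(f"max surrogate error on test grid: {max_err:.2e}")
```

The appeal of the residual view is that the discrepancy is often far smoother than either model's output, so a very small correction model suffices.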
arXiv Detail & Related papers (2023-10-05T14:43:16Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF)
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- Multi-fidelity surrogate modeling using long short-term memory networks [0.0]
We introduce a novel data-driven framework of multi-fidelity surrogate modeling for parametrized, time-dependent problems.
We show that the proposed multi-fidelity LSTM networks not only improve single-fidelity regression significantly, but also outperform the multi-fidelity models based on feed-forward neural networks.
arXiv Detail & Related papers (2022-08-05T12:05:02Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.