Efficient Characterization of Dynamic Response Variation Using
Multi-Fidelity Data Fusion through Composite Neural Network
- URL: http://arxiv.org/abs/2005.03213v1
- Date: Thu, 7 May 2020 02:44:03 GMT
- Authors: Kai Zhou, Jiong Tang
- Abstract summary: We take advantage of the multi-level response prediction opportunity in structural dynamic analysis.
We formulate a composite neural network fusion approach that can fully utilize the multi-level, heterogeneous datasets obtained.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainties in a structure are inevitable and generally lead to
variation in dynamic response predictions. For a complex structure, brute-force Monte
Carlo simulation for response variation analysis is infeasible since one single
run may already be computationally costly. Data-driven meta-modeling approaches
have thus been explored to facilitate efficient emulation and statistical
inference. The performance of a meta-model hinges upon both the quality and
quantity of the training dataset. In actual practice, however, high-fidelity data
acquired from high-dimensional finite element simulation or experiments are
generally scarce, which poses a significant challenge to meta-model
establishment. In this research, we take advantage of the multi-level response
prediction opportunity in structural dynamic analysis, i.e., acquiring rapidly
a large amount of low-fidelity data from reduced-order modeling, and acquiring
accurately a small amount of high-fidelity data from full-scale finite element
analysis. Specifically, we formulate a composite neural network fusion approach
that can fully utilize the multi-level, heterogeneous datasets obtained. It
implicitly identifies the correlation of the low- and high-fidelity datasets,
which yields improved accuracy when compared with the state-of-the-art.
Comprehensive investigations using frequency response variation
characterization as a case example are carried out to demonstrate the
performance.
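The fusion scheme described in the abstract can be pictured as two chained subnetworks: a low-fidelity network trained on many cheap reduced-order-model samples, and a high-fidelity network that takes both the original input and the low-fidelity prediction, so the cross-fidelity correlation is learned implicitly. The following is a minimal forward-pass sketch of that composite architecture, not the authors' code; the layer sizes, variable names, and random weights are all illustrative assumptions.

```python
# Sketch of a composite multi-fidelity network (illustrative, untrained):
# a low-fidelity subnetwork maps x -> y_lo, and a high-fidelity subnetwork
# maps the concatenation [x, y_lo] -> y_hi, implicitly capturing the
# correlation between the two fidelity levels.
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """One randomly initialized dense layer, returned as (weights, bias)."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

def forward(layers, x):
    """Tanh MLP forward pass; the last layer is kept linear."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x

n_x, n_y = 4, 2                                  # e.g. structural parameters -> responses
lo_net = [dense(n_x, 16), dense(16, n_y)]        # fit on many cheap ROM samples
hi_net = [dense(n_x + n_y, 16), dense(16, n_y)]  # fit on few full-scale FE samples

def predict_high_fidelity(x):
    y_lo = forward(lo_net, x)                    # fast low-fidelity prediction
    return forward(hi_net, np.hstack([x, y_lo])) # corrected via learned correlation

x = rng.standard_normal((8, n_x))                # 8 hypothetical parameter samples
y_hi = predict_high_fidelity(x)
print(y_hi.shape)                                # (8, 2)
```

In a trained version, `lo_net` would be fit first on the large reduced-order dataset, then `hi_net` on the scarce finite-element data with `lo_net` frozen (or jointly fine-tuned), which is what lets a small high-fidelity set suffice.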
Related papers
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Learning Latent Dynamics via Invariant Decomposition and
(Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z) - Iterative self-transfer learning: A general methodology for response
time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks based on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Variational Hierarchical Mixtures for Probabilistic Learning of Inverse
Dynamics [20.953728061894044]
Well-calibrated probabilistic regression models are a crucial learning component in robotics applications as datasets grow rapidly and tasks become more complex.
We consider a probabilistic hierarchical modeling paradigm that combines the benefits of both worlds to deliver computationally efficient representations with inherent complexity regularization.
We derive two efficient variational inference techniques to learn these representations and highlight the advantages of hierarchical infinite local regression models.
arXiv Detail & Related papers (2022-11-02T13:54:07Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Complexity Measures for Multi-objective Symbolic Regression [2.4087148947930634]
Multi-objective symbolic regression has the advantage that while the accuracy of the learned models is maximized, the complexity is automatically adapted.
We study which complexity measures are most appropriately used in symbolic regression when performing multi-objective optimization with NSGA-II.
arXiv Detail & Related papers (2021-09-01T08:22:41Z) - Model-data-driven constitutive responses: application to a multiscale
computational framework [0.0]
A hybrid methodology is presented which combines classical laws (model-based), a data-driven correction component, and computational multiscale approaches.
A model-based material representation is locally improved with data from lower scales obtained by means of a nonlinear numerical homogenization procedure.
In the proposed approach, both model and data play a fundamental role allowing for the synergistic integration between a physics-based response and a machine learning black-box.
arXiv Detail & Related papers (2021-04-06T16:34:46Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of stochasticity in its success is still unclear.
We show that heavy tails commonly arise in the parameters of discrete-time optimization due to multiplicative noise and variance.
A detailed analysis is conducted describing key factors, including step size and data, with similar results exhibited across state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all generated summaries) and is not responsible for any consequences of its use.