Application of probabilistic modeling and automated machine learning
framework for high-dimensional stress field
- URL: http://arxiv.org/abs/2303.16869v2
- Date: Tue, 11 Apr 2023 19:50:48 GMT
- Title: Application of probabilistic modeling and automated machine learning
framework for high-dimensional stress field
- Authors: Lele Luan, Nesar Ramachandra, Sandipp Krishnan Ravi, Anindya Bhaduri,
Piyush Pandita, Prasanna Balaprakash, Mihai Anitescu, Changjie Sun, Liping
Wang
- Abstract summary: We propose an end-to-end approach that maps a high-dimensional, image-like input to an output of high dimensionality or its key statistics.
Our approach uses two main frameworks that perform three steps: a) reduce the input and output from a high-dimensional space to a reduced or low-dimensional space, b) model the input-output relationship in the low-dimensional space, and c) enable the incorporation of domain-specific physical constraints as masks.
- Score: 1.073039474000799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern computational methods, involving highly sophisticated
mathematical formulations, enable several tasks such as modeling complex
physical phenomena, predicting key properties, and design optimization. The
higher fidelity of these computer models makes it computationally intensive
to query them hundreds of times for optimization, so one usually relies on a
simplified model, albeit at the cost of losing predictive accuracy and
precision. To this end, data-driven surrogate modeling methods have shown
considerable promise in emulating the behavior of such expensive computer
models. However, a major bottleneck in these methods is their inability to
deal with high input dimensionality and their need for relatively large
datasets. In such problems, the input and the output quantity of interest are
tensors of high dimensionality. Commonly used surrogate modeling methods for
such problems suffer from requirements such as a high number of computational
evaluations, which precludes one from performing other numerical tasks like
uncertainty quantification and statistical analysis. In this work, we propose
an end-to-end approach that maps a high-dimensional, image-like input to an
output of high dimensionality or its key statistics. Our approach uses two
main frameworks that perform three steps: a) reduce the input and output from
a high-dimensional space to a reduced or low-dimensional space, b) model the
input-output relationship in the low-dimensional space, and c) enable the
incorporation of domain-specific physical constraints as masks. To reduce the
input dimensionality, we leverage principal component analysis, which is
coupled with two surrogate modeling methods, namely: a) Bayesian hybrid
modeling and b) DeepHyper's deep neural networks. We demonstrate the
applicability of the approach on a problem involving linear elastic stress
field data.
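
As a rough illustration of the three-step pipeline described in the abstract, the sketch below uses scikit-learn's PCA for the dimensionality reduction and a Gaussian process regressor as a stand-in surrogate in the reduced space. The paper itself couples PCA with Bayesian hybrid modeling and DeepHyper's deep neural networks; the array shapes, component counts, synthetic data, and the geometry mask used here are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the three-step surrogate pipeline (assumptions noted above).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: N image-like inputs and N stress-field outputs,
# each flattened from a 64x64 grid (shapes are hypothetical).
N, H, W = 200, 64, 64
X = rng.normal(size=(N, H * W))   # high-dimensional image-like inputs
Y = rng.normal(size=(N, H * W))   # high-dimensional stress fields

# Step (a): reduce input and output to low-dimensional latent spaces.
pca_in, pca_out = PCA(n_components=10), PCA(n_components=10)
Z_in = pca_in.fit_transform(X)
Z_out = pca_out.fit_transform(Y)

# Step (b): model the input-output relationship in the reduced space
# (Gaussian process used here in place of the paper's surrogates).
surrogate = GaussianProcessRegressor().fit(Z_in, Z_out)

# Step (c): predict, map back to full dimension, and apply a
# domain-specific physical constraint as a mask (hypothetical geometry).
mask = np.ones((H, W))
mask[:, : W // 4] = 0.0           # zero out a region outside the part
Z_pred = surrogate.predict(pca_in.transform(X[:5]))
Y_pred = pca_out.inverse_transform(Z_pred).reshape(-1, H, W) * mask
print(Y_pred.shape)               # (5, 64, 64)
```

In a real application of step (c), the mask would encode the actual part geometry so that predicted stresses outside the material domain are suppressed before computing key statistics.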
Related papers
- Sensitivity analysis using the Metamodel of Optimal Prognosis [0.0]
In real-world applications within the virtual prototyping process, it is not always possible to reduce the complexity of the physical models.
We present an automatic approach for selecting the most suitable meta-model for the problem at hand.
arXiv Detail & Related papers (2024-08-07T07:09:06Z)
- Multi-GPU Approach for Training of Graph ML Models on large CFD Meshes [0.0]
Mesh-based numerical solvers are an important part in many design tool chains.
Machine Learning based surrogate models are fast in predicting approximate solutions but often lack accuracy.
This paper scales a state-of-the-art surrogate model from the domain of graph-based machine learning to industry-relevant mesh sizes.
arXiv Detail & Related papers (2023-07-25T15:49:25Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning Controllable Adaptive Simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Low-dimensional Data-based Surrogate Model of a Continuum-mechanical
Musculoskeletal System Based on Non-intrusive Model Order Reduction [0.0]
Non-traditional approaches, such as surrogate modeling using data-driven model order reduction, can nevertheless make high-fidelity models more widely available.
We demonstrate the benefits of the surrogate modeling approach on a complex finite element model of a human upper-arm.
arXiv Detail & Related papers (2023-02-13T17:14:34Z)
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Data-driven Uncertainty Quantification in Computational Human Head
Models [0.6745502291821954]
Modern biofidelic head model simulations are associated with very high computational cost and high-dimensional inputs and outputs.
In this study, a two-stage, data-driven manifold learning-based framework is proposed for uncertainty quantification (UQ) of computational head models.
It is demonstrated that the surrogate models provide highly accurate approximations of the computational model while significantly reducing the computational cost.
arXiv Detail & Related papers (2021-10-29T05:42:31Z)
- Conservative Objective Models for Effective Offline Model-Based
Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Model-data-driven constitutive responses: application to a multiscale
computational framework [0.0]
A hybrid methodology is presented which combines classical laws (model-based), a data-driven correction component, and computational multiscale approaches.
A model-based material representation is locally improved with data from lower scales obtained by means of a nonlinear numerical homogenization procedure.
In the proposed approach, both model and data play a fundamental role allowing for the synergistic integration between a physics-based response and a machine learning black-box.
arXiv Detail & Related papers (2021-04-06T16:34:46Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
But, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)