Attentive Neural Processes and Batch Bayesian Optimization for Scalable
Calibration of Physics-Informed Digital Twins
- URL: http://arxiv.org/abs/2106.15502v1
- Date: Tue, 29 Jun 2021 15:30:55 GMT
- Title: Attentive Neural Processes and Batch Bayesian Optimization for Scalable
Calibration of Physics-Informed Digital Twins
- Authors: Ankush Chakrabarty, Gordon Wichern, Christopher Laughman
- Abstract summary: Physics-informed dynamical system models form critical components of digital twins of the built environment.
We propose ANP-BBO: a scalable and parallelizable batch-wise Bayesian optimization (BBO) methodology.
- Score: 10.555398506346291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed dynamical system models form critical components of digital
twins of the built environment. These digital twins enable the design of
energy-efficient infrastructure, but must be properly calibrated to accurately
reflect system behavior for downstream prediction and analysis. Dynamical
system models of modern buildings are typically described by a large number of
parameters and incur significant computational expenditure during simulations.
To handle large-scale calibration of digital twins without exorbitant
simulations, we propose ANP-BBO: a scalable and parallelizable batch-wise
Bayesian optimization (BBO) methodology that leverages attentive neural
processes (ANPs).
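The ANP-BBO idea — fit a surrogate to past simulations, then propose a whole batch of parameter candidates to evaluate in parallel — can be sketched minimally. In this illustration a small Gaussian process stands in for the attentive neural process, and an upper-confidence-bound rule with "hallucinated" observations (the constant-liar heuristic) selects the batch; all function names and the toy calibration loss are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential kernel between two sets of points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # Standard GP posterior mean/variance at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf_kernel(Xq, Xq)) - (v**2).sum(0)
    return mu, np.maximum(var, 1e-12)

def propose_batch(X, y, batch_size=4, n_cand=256, beta=2.0, rng=None):
    # Greedy batch selection: pick a point by lower confidence bound,
    # "lie" with the posterior mean, refit, and repeat.
    rng = np.random.default_rng(rng)
    Xb, yb = X.copy(), y.copy()
    batch = []
    for _ in range(batch_size):
        cand = rng.uniform(0, 1, size=(n_cand, X.shape[1]))
        mu, var = gp_posterior(Xb, yb, cand)
        best = cand[np.argmin(mu - beta * np.sqrt(var))]  # minimize loss
        batch.append(best)
        mu_b, _ = gp_posterior(Xb, yb, best[None, :])
        Xb = np.vstack([Xb, best[None, :]])
        yb = np.append(yb, mu_b[0])  # hallucinated observation
    return np.array(batch)

def loss(theta):
    # Toy calibration objective: distance to a hypothetical "true" parameter.
    return ((theta - np.array([0.3, 0.7])) ** 2).sum(-1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 2))
y = loss(X)
for _ in range(5):
    B = propose_batch(X, y, rng=rng)      # each batch could run in parallel
    X, y = np.vstack([X, B]), np.append(y, loss(B))
print(round(float(y.min()), 3))
```

The point of the batch formulation is that the `batch_size` expensive simulations in each round are independent, so a cluster can run them concurrently while the surrogate amortizes their cost.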
Related papers
- Automatically Learning Hybrid Digital Twins of Dynamical Systems [56.69628749813084]
Digital Twins (DTs) simulate the states and temporal dynamics of real-world systems.
DTs often struggle to generalize to unseen conditions in data-scarce settings.
In this paper, we propose HDTwinGen, an evolutionary algorithm that autonomously proposes, evaluates, and optimizes HDTwins.
arXiv Detail & Related papers (2024-10-31T07:28:22Z)
- A parametric framework for kernel-based dynamic mode decomposition using deep learning [0.0]
The proposed framework consists of two stages, offline and online.
The online stage leverages those LANDO models to generate new data at a desired time instant.
A dimensionality reduction technique is applied to high-dimensional dynamical systems to reduce the computational cost of training.
arXiv Detail & Related papers (2024-09-25T11:13:50Z)
- Digital Twin Calibration for Biological System-of-Systems: Cell Culture Manufacturing Process [3.0790370651488983]
We consider a multi-scale mechanistic model of the cell culture process, known as a Biological System-of-Systems (Bio-SoS).
Its modular design, composed of sub-models, allows us to integrate data across various production processes.
To calibrate the Bio-SoS digital twin, we evaluate the mean squared error of model predictions and develop a computational approach to quantify how parameter estimation errors in individual sub-models affect the prediction accuracy of the digital twin.
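The sub-model error-propagation analysis can be sketched with finite differences: perturb each sub-model's parameters slightly and measure how the composed twin's prediction MSE moves. The two toy sub-models and their composition below are illustrative stand-ins, not the paper's actual Bio-SoS model.

```python
import numpy as np

def submodel_growth(theta, x):
    # Hypothetical "cell growth" stage: linear response.
    return theta[0] * x + theta[1]

def submodel_production(phi, z):
    # Hypothetical "production" stage: saturating response.
    return phi[0] * np.tanh(phi[1] * z)

def twin(theta, phi, x):
    # Composed digital twin: growth output feeds the production stage.
    return submodel_production(phi, submodel_growth(theta, x))

def prediction_mse(theta, phi, x, y_obs):
    return float(np.mean((twin(theta, phi, x) - y_obs) ** 2))

def submodel_influence(theta, phi, x, y_obs, eps=1e-5):
    # Sum of |d MSE / d parameter| over each sub-model's parameters:
    # a crude measure of how much each sub-model's estimation error
    # propagates into the twin's prediction accuracy.
    base = prediction_mse(theta, phi, x, y_obs)
    influence = {}
    for name in ("growth", "production"):
        p = theta if name == "growth" else phi
        total = 0.0
        for i in range(len(p)):
            q = p.copy()
            q[i] += eps
            m = (prediction_mse(q, phi, x, y_obs) if name == "growth"
                 else prediction_mse(theta, q, x, y_obs))
            total += abs(m - base) / eps
        influence[name] = total
    return influence

# Demo: data generated by "true" parameters, evaluated at perturbed estimates.
x = np.linspace(0, 1, 50)
theta_true, phi_true = np.array([1.0, 0.1]), np.array([2.0, 0.5])
y_obs = twin(theta_true, phi_true, x)
infl = submodel_influence(theta_true + 0.05, phi_true - 0.05, x, y_obs)
print(sorted(infl))
```

Comparing the influence scores tells you which sub-model's calibration deserves the most data or simulation budget.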
arXiv Detail & Related papers (2024-05-07T00:22:13Z)
- A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics [74.93549765488103]
In drug discovery, molecular dynamics simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding.
We show the efficiency and effectiveness of NeuralMD, with a 2000× speedup over standard numerical MD simulation, outperforming all other ML approaches by up to 80% under the stability metric.
arXiv Detail & Related papers (2024-01-26T09:35:17Z)
- Accelerating the analysis of optical quantum systems using the Koopman operator [1.2499537119440245]
The prediction of photon echoes is a crucial technique for understanding optical quantum systems.
This article investigates the use of data-driven surrogate models based on the Koopman operator to accelerate this process.
arXiv Detail & Related papers (2023-10-25T12:02:04Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning Controllable Adaptive Simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Quantum simulation costs for Suzuki-Trotter decomposition of quantum many-body lattice models [0.0]
We develop a formalism to compute bounds on the number of Trotter steps needed to accurately simulate the time evolution of fermionic lattice models.
We find that, while a naive comparison of the Trotter depth first seems to favor the Hubbard model, careful consideration of the model parameters leads to a substantial advantage in favor of the t-J model.
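The Trotter-step trade-off above can be illustrated on a toy two-term Hamiltonian. Pauli X and Z stand in here for the paper's fermionic lattice terms; this is not the paper's formalism, only a minimal first-order Suzuki-Trotter error check.

```python
import numpy as np

# Two non-commuting Hamiltonian terms (illustrative stand-ins).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expmi(H, t):
    # exp(-i H t) for a Hermitian matrix H, via eigendecomposition.
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def trotter_error(t=1.0, steps=10):
    # Spectral-norm distance between the exact evolution under H = X + Z
    # and the first-order Trotter product with the given step count.
    exact = expmi(X + Z, t)
    step = expmi(X, t / steps) @ expmi(Z, t / steps)
    approx = np.linalg.matrix_power(step, steps)
    return np.linalg.norm(approx - exact, 2)

# First-order Trotter error shrinks roughly like O(1/steps).
print(trotter_error(steps=10) > trotter_error(steps=100))
```

Bounding how many such steps a target accuracy requires, as a function of the model parameters, is exactly what drives the Hubbard-versus-t-J comparison.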
arXiv Detail & Related papers (2023-02-09T15:32:43Z)
- Generic Lithography Modeling with Dual-band Optics-Inspired Neural Networks [52.200624127512874]
We introduce a dual-band optics-inspired neural network design that considers the optical physics underlying lithography.
Our approach yields the first published via/metal layer contour simulation at 1nm²/pixel resolution with any tile size.
We also achieve an 85× simulation speedup over a traditional lithography simulator with 1% accuracy loss.
arXiv Detail & Related papers (2022-03-12T08:08:50Z)
- Evidence-based Prescriptive Analytics, CAUSAL Digital Twin and a Learning Estimation Algorithm [0.0]
We describe the basics of Causality and Causal Graphs and develop a Learning Causal Digital Twin (LCDT) solution.
Since the LCDT is a learning digital twin whose parameters are learned online in real time with minimal pre-configuration, the work of deploying digital twins is significantly simplified.
arXiv Detail & Related papers (2021-04-12T21:30:53Z)
- Machine learning for rapid discovery of laminar flow channel wall modifications that enhance heat transfer [56.34005280792013]
We present a combination of accurate numerical simulations of arbitrary, flat, and non-flat channels and machine learning models predicting drag coefficient and Stanton number.
We show that convolutional neural networks (CNNs) can accurately predict the target properties in a fraction of the time of numerical simulations.
arXiv Detail & Related papers (2021-01-19T16:14:02Z)
- On the Sparsity of Neural Machine Translation Models [65.49762428553345]
We investigate whether redundant parameters can be reused to achieve better performance.
Experiments and analyses are systematically conducted on different datasets and NMT architectures.
arXiv Detail & Related papers (2020-10-06T11:47:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.