Active-Learning-Driven Surrogate Modeling for Efficient Simulation of
Parametric Nonlinear Systems
- URL: http://arxiv.org/abs/2306.06174v1
- Date: Fri, 9 Jun 2023 18:01:14 GMT
- Title: Active-Learning-Driven Surrogate Modeling for Efficient Simulation of
Parametric Nonlinear Systems
- Authors: Harshit Kapadia, Lihong Feng, Peter Benner
- Abstract summary: In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When repeated evaluations for varying parameter configurations of a
high-fidelity physical model are required, surrogate modeling techniques based
on model order reduction are desired. In the absence of governing equations
describing the dynamics, we need to construct the parametric reduced-order
surrogate model in a non-intrusive fashion. In this setting, the usual
residual-based error estimate for optimal parameter sampling associated with
the reduced basis method is not directly available. Our work provides a
non-intrusive optimality criterion to efficiently populate the parameter
snapshots, thereby enabling us to effectively construct a parametric surrogate
model. We consider separate parameter-specific proper orthogonal decomposition
(POD) subspaces and propose an active-learning-driven surrogate model using
kernel-based shallow neural networks, abbreviated as ActLearn-POD-KSNN
surrogate model. To demonstrate the validity of our proposed ideas, we present
numerical experiments using two physical models, namely Burgers' equation and
shallow water equations. Both models have mixed -- convective and diffusive
-- effects within their respective parameter domains, with each of them
dominating in certain regions. The proposed ActLearn-POD-KSNN surrogate model
efficiently predicts the solution at new parameter locations, even for a
setting with multiple interacting shock profiles.
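The two ingredients of such a surrogate -- a POD basis extracted from solution snapshots, and a kernel-based model that interpolates the reduced coefficients across the parameter domain -- can be illustrated with a minimal NumPy sketch. This is not the authors' ActLearn-POD-KSNN implementation; all names and the ridge-regularized kernel solve are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, rank):
    """Proper orthogonal decomposition: the leading left singular vectors
    of the snapshot matrix (columns = solution states) are the POD modes."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]

def gaussian_kernel(p, q, width=1.0):
    """Gaussian (RBF) kernel between two parameter vectors."""
    return np.exp(-np.sum((p - q) ** 2) / (2.0 * width ** 2))

class KernelSurrogate:
    """Interpolates POD coefficients over the parameter domain with a
    shallow layer of kernel units (one per sampled parameter)."""

    def __init__(self, params, coeffs, width=1.0):
        self.params = params                    # (m, d) sampled parameters
        self.width = width
        K = np.array([[gaussian_kernel(pi, pj, width)
                       for pj in params] for pi in params])
        # Solve K w = coeffs for the kernel weights; a tiny ridge term
        # keeps the system well conditioned.
        self.weights = np.linalg.solve(K + 1e-10 * np.eye(len(params)), coeffs)

    def predict(self, p_new):
        """Reduced coefficients at a new parameter location."""
        k = np.array([gaussian_kernel(p_new, pj, self.width)
                      for pj in self.params])
        return k @ self.weights
```

A predicted full-order state is then recovered as `U @ sur.predict(p_new)`; an active-learning loop would repeatedly add the parameter location where an error indicator over candidate points is largest, retraining the surrogate after each new snapshot.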
Related papers
- A parametric framework for kernel-based dynamic mode decomposition using deep learning [0.0]
The proposed framework consists of two stages, offline and online.
The online stage leverages those LANDO models to generate new data at a desired time instant.
A dimensionality reduction technique is applied to high-dimensional dynamical systems to reduce the computational cost of training.
arXiv Detail & Related papers (2024-09-25T11:13:50Z) - SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z) - Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models [73.88009808326387]
We propose a novel spectrum-aware adaptation framework for generative models.
Our method adjusts both singular values and their basis vectors of pretrained weights.
We introduce Spectral Ortho Decomposition Adaptation (SODA), which balances computational efficiency and representation capacity.
arXiv Detail & Related papers (2024-05-31T17:43:35Z) - Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated
Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
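The delay-coordinate encoding mentioned above can be sketched in a few lines (an illustrative construction, not the paper's code): lagged copies of a measured output sequence are stacked into an augmented Hankel-style state, on which a linear Koopman-style model can then be fit.

```python
import numpy as np

def delay_embed(y, delays):
    """Delay-coordinate encoding: stack `delays` lagged copies of a
    scalar measurement sequence; each row is an augmented state
    [y_t, y_{t+1}, ..., y_{t+delays}]."""
    n = len(y) - delays
    return np.column_stack([y[i:i + n] for i in range(delays + 1)])
```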
arXiv Detail & Related papers (2024-01-09T11:54:54Z) - Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared
Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
arXiv Detail & Related papers (2023-10-19T15:13:58Z) - Conditional Karhunen-Loève regression model with Basis Adaptation
for high-dimensional problems: uncertainty quantification and inverse
modeling [62.997667081978825]
We propose a methodology for improving the accuracy of surrogate models of the observable response of physical systems.
We apply the proposed methodology to construct surrogate models of the stationary hydraulic head response via the Basis Adaptation (BA) method.
arXiv Detail & Related papers (2023-07-05T18:14:38Z) - An iterative multi-fidelity approach for model order reduction of
multi-dimensional input parametric PDE systems [0.0]
We propose a parametric sampling strategy for the reduction of large-scale PDE systems with multidimensional input parametric spaces.
It is achieved by exploiting low-fidelity models throughout the parametric space to sample points using an efficient sampling strategy.
Since the proposed methodology leverages the use of low-fidelity models to assimilate the solution database, it significantly reduces the computational cost in the offline stage.
arXiv Detail & Related papers (2023-01-23T15:25:58Z) - On the Influence of Enforcing Model Identifiability on Learning dynamics
of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z) - gLaSDI: Parametric Physics-informed Greedy Latent Space Dynamics
Identification [0.5249805590164902]
A physics-informed greedy Latent Space Dynamics Identification (gLaSDI) method is proposed for accurate, efficient, and robust data-driven reduced-order modeling.
An interactive training algorithm is adopted for the autoencoder and local DI models, which enables identification of simple latent-space dynamics.
The effectiveness of the proposed framework is demonstrated by modeling various nonlinear dynamical problems.
arXiv Detail & Related papers (2022-04-26T00:15:46Z) - On the Parameter Combinations That Matter and on Those That do Not [0.0]
We present a data-driven approach to characterizing nonidentifiability of a model's parameters.
By employing Diffusion Maps and their extensions, we discover the minimal combinations of parameters required to characterize the dynamic output behavior.
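The Diffusion Maps step referenced here -- recovering a few intrinsic coordinates from sampled parameter vectors -- admits a compact sketch. This is a generic textbook construction under assumed names, not the cited paper's pipeline: Gaussian affinities, a row-normalized Markov matrix, and its leading non-trivial eigenvectors as coordinates.

```python
import numpy as np

def diffusion_map(points, eps, n_coords=2):
    """Minimal diffusion-map embedding of a point cloud.

    points : (n, d) array of samples; eps : kernel bandwidth;
    returns (n, n_coords) diffusion coordinates."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                      # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)       # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return vecs.real[:, order[1:n_coords + 1]]
```

For parameters sampled on a low-dimensional manifold, the leading diffusion coordinates expose the few parameter combinations that actually drive the output.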
arXiv Detail & Related papers (2021-10-13T13:46:23Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.