Active learning for data-driven reduced models of parametric differential systems with Bayesian operator inference
- URL: http://arxiv.org/abs/2601.00038v1
- Date: Tue, 30 Dec 2025 19:34:26 GMT
- Title: Active learning for data-driven reduced models of parametric differential systems with Bayesian operator inference
- Authors: Shane A. McQuarrie, Mengwu Guo, Anirban Chaudhuri,
- Abstract summary: This work develops an active learning framework to intelligently enrich data-driven reduced-order models (ROMs) of parametric dynamical systems.
Data-driven ROMs are explainable, computationally efficient scientific machine learning models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work develops an active learning framework to intelligently enrich data-driven reduced-order models (ROMs) of parametric dynamical systems, which can serve as the foundation of virtual assets in a digital twin. Data-driven ROMs are explainable, computationally efficient scientific machine learning models that aim to preserve the underlying physics of complex dynamical simulations. Since the quality of data-driven ROMs is sensitive to the quality of the limited training data, we seek to identify training parameters for which using the associated training data results in the best possible parametric ROM. Our approach uses the operator inference methodology, a regression-based strategy which can be tailored to particular parametric structure for a large class of problems. We establish a probabilistic version of parametric operator inference, casting the learning problem as a Bayesian linear regression. Prediction uncertainties stemming from the resulting probabilistic ROM solutions are used to design a sequential adaptive sampling scheme to select new training parameter vectors that promote ROM stability and accuracy globally in the parameter domain. We conduct numerical experiments for several nonlinear parametric systems of partial differential equations and compare the results to ROMs trained on random parameter samples. The results demonstrate that the proposed adaptive sampling strategy consistently yields more stable and accurate ROMs than random sampling does under the same computational budget.
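The abstract casts operator inference as a Bayesian linear regression whose posterior predictive uncertainty drives the selection of new training parameters. A minimal sketch of that ingredient, assuming a conjugate Gaussian prior and synthetic toy data (the variable names, the toy regression, and the maximum-variance selection rule below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "snapshot" regression: rows of D are reduced-state features,
# entries of R are the corresponding time-derivative targets.
n_samples, n_features = 50, 4
D = rng.standard_normal((n_samples, n_features))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
R = D @ true_w + 0.1 * rng.standard_normal(n_samples)

# Conjugate Gaussian posterior: prior w ~ N(0, alpha^-1 I),
# Gaussian noise with precision beta.
alpha, beta = 1.0, 100.0
S_inv = alpha * np.eye(n_features) + beta * D.T @ D  # posterior precision
S = np.linalg.inv(S_inv)                             # posterior covariance
m = beta * S @ D.T @ R                               # posterior mean

# Predictive variance at candidate parameter points: high variance flags
# candidates where new training data would be most informative.
candidates = rng.standard_normal((8, n_features))
pred_var = 1.0 / beta + np.einsum("ij,jk,ik->i", candidates, S, candidates)
next_idx = int(np.argmax(pred_var))
print("posterior mean:", np.round(m, 2))
print("selected candidate index:", next_idx)
```

In this toy version the next training point is simply the candidate with the largest predictive variance; the paper's sampling criterion additionally accounts for ROM stability and accuracy across the whole parameter domain.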
Related papers
- Profiling systematic uncertainties in Simulation-Based Inference with Factorizable Normalizing Flows [0.0]
We propose a general framework for Simulation-Based Inference that efficiently profiles nuisance parameters.
We introduce Factorizable Normalizing Flows to model systematic variations as parametric transformations of a nominal density.
We develop an amortized training strategy that learns the conditional dependence of the DoI on nuisance parameters in a single optimization process.
This allows for the simultaneous extraction of the underlying distribution and the robust profiling of nuisances.
arXiv Detail & Related papers (2026-02-13T18:48:12Z) - Data-driven stochastic reduced-order modeling of parametrized dynamical systems [3.5684665108045377]
We introduce a data-driven framework for learning continuous-time ROMs that generalize across parameter spaces and forcing conditions.
We demonstrate excellent generalization to unseen parameter combinations and forcings, and significant efficiency gains compared to existing approaches.
arXiv Detail & Related papers (2026-01-15T18:50:18Z) - Disordered Dynamics in High Dimensions: Connections to Random Matrices and Machine Learning [52.26396748560348]
We provide an overview of high dimensional dynamical systems driven by random matrices.
We focus on applications to simple models of learning and generalization in machine learning theory.
arXiv Detail & Related papers (2026-01-03T00:12:32Z) - Weak Form Scientific Machine Learning: Test Function Construction for System Identification [0.0]
Weak form Scientific Machine Learning (WSciML) is a recently developed framework for data-driven modeling and scientific discovery.
We mathematically motivate a novel data-driven method for constructing Single-scale-Local reference functions for creating the set of test functions.
Our approach numerically approximates the integration error introduced by the quadrature and identifies the support size for which the error is minimal.
arXiv Detail & Related papers (2025-07-03T22:36:34Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Parametric Learning of Time-Advancement Operators for Unstable Flame
Evolution [0.0]
This study investigates the application of machine learning to learn time-advancement operators for parametric partial differential equations (PDEs).
Our focus is on extending existing operator learning methods to handle additional inputs representing PDE parameters.
The goal is to create a unified learning approach that accurately predicts short-term solutions and provides robust long-term statistics.
arXiv Detail & Related papers (2024-02-14T18:12:42Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - A Causality-Based Learning Approach for Discovering the Underlying
Dynamics of Complex Systems from Partial Observations with Stochastic
Parameterization [1.2882319878552302]
This paper develops a new iterative learning algorithm for complex turbulent systems with partial observations.
It alternates between identifying model structures, recovering unobserved variables, and estimating parameters.
Numerical experiments show that the new algorithm succeeds in identifying the model structure and providing suitable parameterizations for many complex nonlinear systems.
arXiv Detail & Related papers (2022-08-19T00:35:03Z) - Mixed Effects Neural ODE: A Variational Approximation for Analyzing the
Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - A Variational Infinite Mixture for Probabilistic Inverse Dynamics
Learning [34.90240171916858]
We develop an efficient variational Bayes inference technique for infinite mixtures of probabilistic local models.
We highlight the model's power in combining data-driven adaptation, fast prediction and the ability to deal with discontinuous functions and heteroscedastic noise.
We use the learned models for online dynamics control of a Barrett-WAM manipulator, significantly improving the trajectory tracking performance.
arXiv Detail & Related papers (2020-11-10T16:15:13Z) - DISCO: Double Likelihood-free Inference Stochastic Control [29.84276469617019]
We propose to leverage the power of modern simulators and recent techniques in Bayesian statistics for likelihood-free inference.
The posterior distribution over simulation parameters is propagated through a potentially non-analytical model of the system.
Experiments show that the controller proposed attained superior performance and robustness on classical control and robotics tasks.
arXiv Detail & Related papers (2020-02-18T05:29:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.