A note on the error analysis of data-driven closure models for large eddy simulations of turbulence
- URL: http://arxiv.org/abs/2405.17612v2
- Date: Wed, 29 May 2024 19:39:12 GMT
- Title: A note on the error analysis of data-driven closure models for large eddy simulations of turbulence
- Authors: Dibyajyoti Chakraborty, Shivam Barwey, Hong Zhang, Romit Maulik
- Abstract summary: We provide a mathematical formulation for error propagation in flow trajectory prediction using data-driven turbulence closure modeling.
We derive an upper bound for the prediction error when utilizing a data-driven closure model.
Our analysis also shows that the error propagates exponentially with rollout time and the upper bound of the system Jacobian.
- Score: 2.4548283109365436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we provide a mathematical formulation for error propagation in flow trajectory prediction using data-driven turbulence closure modeling. Under the assumption that the predicted state of a large eddy simulation must remain close to that of a subsampled direct numerical simulation, we derive an upper bound for the prediction error when utilizing a data-driven closure model. We also demonstrate that this error is significantly affected by the time step size and the system Jacobian, which together amplify the initial one-step error introduced by the closure. Our analysis further shows that the error propagates exponentially with rollout time and with the upper bound of the system Jacobian, which is itself influenced by the Jacobian of the closure formulation. These findings could enable the development of new regularization techniques for ML models based on the identified error-bound terms, improving their robustness and reducing error propagation.
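As a rough illustration of how such a bound behaves (a minimal sketch only, not the paper's actual derivation: the recursion $e_{k+1} \le (1 + \Delta t\, L)\, e_k + \delta$, the symbols $L$, $\Delta t$, $\delta$, and the helper function below are assumptions introduced here), one can propagate a worst-case one-step amplification over a rollout:

```python
import numpy as np

# Minimal sketch, not the paper's derivation: we assume a generic worst-case
# recursion e_{k+1} <= (1 + dt * L) * e_k + delta, where L bounds the norm of
# the system Jacobian (including the closure's contribution), dt is the time
# step, and delta is the one-step error introduced by the data-driven closure.
def rollout_error_bound(e0, delta, L, dt, n_steps):
    """Propagate the assumed worst-case error bound over an n-step rollout."""
    growth = 1.0 + dt * L                      # per-step amplification factor
    bounds = np.empty(n_steps + 1)
    bounds[0] = e0
    for k in range(n_steps):
        bounds[k + 1] = growth * bounds[k] + delta
    return bounds

# A tighter Jacobian bound or a smaller time step slows the exponential growth.
bounds = rollout_error_bound(e0=1e-4, delta=1e-5, L=5.0, dt=1e-2, n_steps=200)
print(bounds[-1])                              # worst-case error after 200 steps
print(np.exp(200 * 1e-2 * 5.0) * 1e-4)         # ~ e^{n dt L} * e0, the exponential envelope
```

The geometric factor $(1 + \Delta t\, L)^n \le e^{n \Delta t L}$ is what makes such a bound grow exponentially with rollout time and with the Jacobian bound.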
Related papers
- Embedded Nonlocal Operator Regression (ENOR): Quantifying model error in learning nonlocal operators [8.585650361148558]
We propose a new framework to learn a nonlocal homogenized surrogate model and its structural model error.
This framework provides discrepancy-adaptive uncertainty quantification for homogenized material response predictions in long-term simulations.
arXiv Detail & Related papers (2024-10-27T04:17:27Z) - Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\, r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
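To make the target density $p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\, r(\mathbf{x})$ concrete, here is a minimal self-normalized importance-sampling sketch; it is not the paper's relative trajectory balance objective, and the Gaussian prior sampler and constraint below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholders (not from the paper): a Gaussian stand-in for the
# diffusion prior p(x) and a simple black-box constraint r(x).
def sample_prior(n, dim=2):
    return rng.normal(size=(n, dim))

def constraint_r(x):
    return np.exp(-np.sum((x - 1.0) ** 2, axis=1))   # favors x near 1

# Self-normalized importance sampling: draw from the prior, weight by r(x).
# This approximates expectations under p_post without knowing its normalizer.
samples = sample_prior(10_000)
weights = constraint_r(samples)
weights /= weights.sum()
posterior_mean = weights @ samples                    # estimate of E_{p_post}[x]
print(posterior_mean)
```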
arXiv Detail & Related papers (2024-05-31T16:18:46Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z) - Parameter uncertainties for imperfect surrogate models in the low-noise regime [0.3069335774032178]
We analyze the generalization error of misspecified, near-deterministic surrogate models.
We show that posterior distributions must cover every training point to avoid a divergent generalization error.
This is demonstrated on model problems before application to thousand-dimensional datasets in atomistic machine learning.
arXiv Detail & Related papers (2024-02-02T11:41:21Z) - On Error Propagation of Diffusion Models [77.91480554418048]
We develop a theoretical framework to mathematically formulate error propagation in the architecture of diffusion models (DMs).
We apply the cumulative error as a regularization term to reduce error propagation.
Our proposed regularization reduces error propagation, significantly improves vanilla DMs, and outperforms previous baselines.
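As a schematic sketch of the "cumulative error as a regularizer" idea (not the paper's actual objective; the per-step error estimates and the weight `lam` below are illustrative assumptions):

```python
import numpy as np

# Schematic sketch only: add an accumulated per-step error estimate to the
# training loss so that error growth along the chain/rollout is penalized.
def regularized_loss(base_loss, step_errors, lam=0.1):
    cumulative_error = float(np.sum(step_errors))   # accumulate over all steps
    return base_loss + lam * cumulative_error

step_errors = np.array([0.02, 0.03, 0.05, 0.08])    # illustrative per-step errors
print(regularized_loss(base_loss=1.0, step_errors=step_errors))
```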
arXiv Detail & Related papers (2023-08-09T15:31:17Z) - Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors [10.627020714408445]
We develop an episodic approach for learning GP models, such that an arbitrary tracking accuracy can be guaranteed.
The effectiveness of the derived theory is demonstrated in several simulations.
arXiv Detail & Related papers (2023-07-10T08:43:28Z) - Online machine-learning forecast uncertainty estimation for sequential data assimilation [0.0]
Quantifying forecast uncertainty is a key aspect of state-of-the-art numerical weather prediction and data assimilation systems.
In this work, a machine learning method based on convolutional neural networks is presented that estimates the state-dependent forecast uncertainty.
The hybrid data assimilation method shows performance similar to the ensemble Kalman filter, outperforming it when the ensembles are relatively small.
arXiv Detail & Related papers (2023-05-12T19:23:21Z) - Deep Learning to advance the Eigenspace Perturbation Method for Turbulence Model Uncertainty Quantification [0.0]
We outline a machine learning approach to aid the use of the Eigenspace Perturbation Method in predicting the uncertainty of turbulence model predictions.
We use a trained neural network to predict the discrepancy in the shape of the RANS-predicted Reynolds stress ellipsoid.
arXiv Detail & Related papers (2022-02-11T08:06:52Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)