Continuous Methods : Adaptively intrusive reduced order model closure
- URL: http://arxiv.org/abs/2211.16999v1
- Date: Wed, 30 Nov 2022 13:55:34 GMT
- Title: Continuous Methods : Adaptively intrusive reduced order model closure
- Authors: Emmanuel Menier (LISN, TAU), Michele Alessandro Bucci (TAU), Mouadh
Yagoubi, Lionel Mathelin (LISN), Thibault Dairay, Raphael Meunier, Marc
Schoenauer (TAU)
- Abstract summary: We propose a novel ROM correction approach based on a time-continuous memory formulation.
Our proposed method provides a high level of accuracy while retaining the low computational costs inherent to reduced models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reduced order modeling methods are often used as a means to reduce simulation
costs in industrial applications. Despite their computational advantages,
reduced order models (ROMs) often fail to accurately reproduce complex dynamics
encountered in real life applications. To address this challenge, we leverage
NeuralODEs to propose a novel ROM correction approach based on a
time-continuous memory formulation. Finally, experimental results show that our
proposed method provides a high level of accuracy while retaining the low
computational costs inherent to reduced models.
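The abstract only sketches the mechanism, so a minimal illustration may help. The sketch below is a hedged reading of the idea rather than the authors' implementation: a projection-based ROM right-hand side is augmented with a learned correction driven by a time-continuous memory variable, and the coupled system is integrated like a NeuralODE. All names (rom_rhs, MemoryClosure, latent_dim, memory_dim) and the explicit-Euler roll-out are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


def rom_rhs(a):
    # Placeholder for the intrusive ROM dynamics da/dt = f_ROM(a),
    # e.g. the operators obtained from a POD-Galerkin projection.
    return -a  # stand-in linear decay


class MemoryClosure(nn.Module):
    """Corrects the ROM with a term g(a, m) driven by a memory state m
    that evolves continuously via dm/dt = h(a, m)."""

    def __init__(self, latent_dim=8, memory_dim=16, width=64):
        super().__init__()
        self.memory_dim = memory_dim
        self.h = nn.Sequential(  # memory dynamics
            nn.Linear(latent_dim + memory_dim, width), nn.Tanh(),
            nn.Linear(width, memory_dim))
        self.g = nn.Sequential(  # correction to the ROM right-hand side
            nn.Linear(latent_dim + memory_dim, width), nn.Tanh(),
            nn.Linear(width, latent_dim))

    def rhs(self, a, m):
        am = torch.cat([a, m], dim=-1)
        return rom_rhs(a) + self.g(am), self.h(am)

    def integrate(self, a0, t_grid):
        # Explicit-Euler roll-out; an adaptive ODE solver could be used instead.
        a = a0
        m = torch.zeros(a0.shape[0], self.memory_dim)
        traj = [a]
        for k in range(len(t_grid) - 1):
            dt = t_grid[k + 1] - t_grid[k]
            da, dm = self.rhs(a, m)
            a, m = a + dt * da, m + dt * dm
            traj.append(a)
        return torch.stack(traj)  # (time, batch, latent_dim)


model = MemoryClosure()
t = torch.linspace(0.0, 1.0, 50)
a0 = torch.randn(4, 8)
pred = model.integrate(a0, t)       # corrected ROM trajectory
target = torch.zeros(50, 4, 8)      # stand-in for reference full-order data
loss = ((pred - target) ** 2).mean()
loss.backward()                     # gradients flow through the roll-out
```

In practice the correction would be trained against projected full-order snapshots, and the Euler loop could be replaced by an adaptive ODE solver; the memory state is what gives the closure its non-Markovian, time-continuous character.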
Related papers
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Multi-fidelity reduced-order surrogate modeling [5.346062841242067]
We present a new data-driven strategy that combines dimensionality reduction with multi-fidelity neural network surrogates.
We show that the onset of instabilities and transients are well captured by this surrogate technique.
arXiv Detail & Related papers (2023-09-01T08:16:53Z)
- Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize [57.22851616806617]
We show that our method achieves state-of-the-art results in four domains from the literature.
Our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
arXiv Detail & Related papers (2023-05-26T11:17:45Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have notable drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- Principled Pruning of Bayesian Neural Networks through Variational Free Energy Minimization [2.3999111269325266]
We formulate and apply Bayesian model reduction to perform principled pruning of Bayesian neural networks.
A novel iterative pruning algorithm is presented to alleviate the problems arising with naive Bayesian model reduction.
Our experiments indicate better model performance in comparison to state-of-the-art pruning schemes.
arXiv Detail & Related papers (2022-10-17T14:34:42Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Predictive machine learning for prescriptive applications: a coupled training-validating approach [77.34726150561087]
We propose a new method for training predictive machine learning models for prescriptive applications.
This approach is based on tweaking the validation step in the standard training-validating-testing scheme.
Several experiments with synthetic data demonstrate promising results in reducing the prescription costs in both deterministic and real models.
arXiv Detail & Related papers (2021-10-22T15:03:20Z)
- Non-intrusive Nonlinear Model Reduction via Machine Learning Approximations to Low-dimensional Operators [0.0]
We propose a method that enables traditionally intrusive reduced-order models to be accurately approximated in a non-intrusive manner.
The approach approximates the low-dimensional operators associated with projection-based reduced-order models (ROMs) using modern machine-learning regression techniques.
In addition to enabling nonintrusivity, we demonstrate that the approach also leads to very low computational complexity, achieving up to $1000\times$ reduction in run time.
arXiv Detail & Related papers (2021-06-17T17:04:42Z)
- Neural Closure Models for Dynamical Systems [35.000303827255024]
We develop a novel methodology to learn non-Markovian closure parameterizations for low-fidelity models.
New "neural closure models" augment low-fidelity models with neural delay differential equations (nDDEs)
We show that using non-Markovian over Markovian closures improves long-term accuracy and requires smaller networks.
arXiv Detail & Related papers (2020-12-27T05:55:33Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.