Assessment of machine learning methods for state-to-state approaches
- URL: http://arxiv.org/abs/2104.01042v1
- Date: Fri, 2 Apr 2021 13:27:23 GMT
- Title: Assessment of machine learning methods for state-to-state approaches
- Authors: Lorenzo Campoli, Elena Kustova, Polina Maltseva
- Abstract summary: We investigate the possibilities offered by the use of machine learning methods for state-to-state approaches.
Deep neural networks appear to be a viable technology also for these tasks.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: It is well known that numerical simulations of high-speed reacting flows, in
the framework of state-to-state formulations, are the most detailed but also
often prohibitively computationally expensive. In this work, we begin to
investigate the possibilities offered by machine learning methods for
state-to-state approaches in order to alleviate this burden.
In this regard, several tasks have been identified. Firstly, we assessed the
potential of state-of-the-art data-driven regression models based on machine
learning to predict the relaxation source terms which appear in the right-hand
side of the state-to-state Euler system of equations for a one-dimensional
reacting flow of a N$_2$/N binary mixture behind a plane shock wave. It is
found that, by appropriately choosing the regressor and suitably tuning its
hyperparameters, accurate predictions relative to the full-scale
state-to-state simulation can be achieved in significantly shorter times.
Secondly, we investigated different strategies to speed up our in-house
state-to-state solver by coupling it with the best-performing pre-trained
machine learning algorithm. Embedding machine learning methods into ordinary
differential equation solvers may offer a speed-up of several orders of
magnitude, but care must be taken in how and where such coupling is realized.
Performance is found to depend strongly on the mutual nature of the
interfaced codes.
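A toy version of the second task, coupling a pre-trained surrogate into an ODE integrator: the right-hand side evaluated by the solver calls the ML model instead of the expensive kinetic routine. The linear relaxation model and the small network here are illustrative assumptions, not the in-house solver.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the state-to-state source terms: linear relaxation of
# five level populations toward an equilibrium value of 1.0.
def exact_rhs(t, n):
    return -0.5 * (n - 1.0)

# Hypothetical offline step: train a surrogate on sampled states.
rng = np.random.default_rng(0)
states = rng.uniform(0.0, 2.0, (4000, 5))
targets = -0.5 * (states - 1.0)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(states, targets)

# Coupling: the integrator now queries the ML model inside its RHS.
def ml_rhs(t, n):
    return surrogate.predict(n.reshape(1, -1)).ravel()

n0 = np.full(5, 1.8)
ref = solve_ivp(exact_rhs, (0.0, 2.0), n0, rtol=1e-8)
sur = solve_ivp(ml_rhs, (0.0, 2.0), n0, rtol=1e-4)
print("max end-state error:", np.abs(ref.y[:, -1] - sur.y[:, -1]).max())
```

The sketch also shows why the interface matters: each RHS evaluation pays the surrogate's per-call overhead, so batching predictions or calling the model only in the stiff part of the system changes the achievable speed-up.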
Finally, we aimed at inferring the full solution of the state-to-state Euler
system of equations by means of a deep neural network, completely bypassing
the state-to-state solver and relying only on data. Promising results suggest
that deep neural networks are a viable technology for these tasks as well.
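The third task, replacing the solver altogether, can be sketched as a network that maps (distance behind the shock, free-stream condition) directly to flow variables, trained only on previously computed profiles. The analytic profiles below are stand-ins for real post-shock data, and the two-input parameterization is an assumption for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: post-shock profiles from earlier
# state-to-state runs, parameterized by free-stream Mach number.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, (5000, 1))        # distance behind the shock
mach = rng.uniform(8.0, 12.0, (5000, 1))    # free-stream condition
X = np.hstack([x, mach])
T = np.exp(-3.0 * x) * mach / 10.0          # relaxing temperature-like profile
rho = 1.0 - 0.5 * np.exp(-3.0 * x)          # rising density-like profile
Y = np.hstack([T, rho])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(X, Y)

# Query the trained network directly, with no call to the flow solver.
probe = np.array([[0.25, 10.0]])
print("predicted (T, rho):", net.predict(probe))
```

Inference is a single forward pass, so the whole profile for a new free-stream condition can be evaluated at negligible cost compared to integrating the Euler system.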
Related papers
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that combines quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Mixed formulation of physics-informed neural networks for
thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
arXiv Detail & Related papers (2023-02-09T21:56:59Z) - On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z) - Learning neural state-space models: do we need a state estimator? [0.0]
We provide insights for calibration of neural state-space training algorithms based on extensive experimentation and analyses.
Specific focus is given to the choice and the role of the initial state estimation.
We demonstrate that advanced initial state estimation techniques are indeed required to achieve high performance on certain classes of dynamical systems.
arXiv Detail & Related papers (2022-06-26T17:15:35Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - DeepPhysics: a physics aware deep learning framework for real-time
simulation [0.0]
We propose a solution to simulate hyper-elastic materials using a data-driven approach.
A neural network is trained to learn the non-linear relationship between boundary conditions and the resulting displacement field.
The results show that our network architecture trained with a limited amount of data can predict the displacement field in less than a millisecond.
arXiv Detail & Related papers (2021-09-17T12:15:47Z) - Long-time integration of parametric evolution equations with
physics-informed DeepONets [0.0]
We introduce an effective framework for learning infinite-dimensional operators that map random initial conditions to associated PDE solutions within a short time interval.
Global long-time predictions across a range of initial conditions can be then obtained by iteratively evaluating the trained model.
This introduces a new approach to temporal domain decomposition that is shown to be effective in performing accurate long-time simulations.
arXiv Detail & Related papers (2021-06-09T20:46:17Z) - Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data is noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z) - Fast Modeling and Understanding Fluid Dynamics Systems with
Encoder-Decoder Networks [0.0]
We show that an accurate deep-learning-based proxy model can be taught efficiently by a finite-volume-based simulator.
Compared to traditional simulation, the proposed deep learning approach enables much faster forward computation.
We quantify the sensitivity of the deep learning model to key physical parameters and show that inverse problems can be solved with significant acceleration.
arXiv Detail & Related papers (2020-06-09T17:14:08Z) - One-step regression and classification with crosspoint resistive memory
arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations by predicting Boston house prices and by training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.