A TVD neural network closure and application to turbulent combustion
- URL: http://arxiv.org/abs/2408.03413v2
- Date: Thu, 21 Nov 2024 16:52:11 GMT
- Title: A TVD neural network closure and application to turbulent combustion
- Authors: Seung Won Suh, Jonathan F. MacArt, Luke N. Olson, Jonathan B. Freund
- Abstract summary: Trained neural networks (NN) have attractive features for closing governing equations.
An NN formulation is introduced to preclude spurious oscillations that violate solution boundedness or positivity.
It is embedded in the discretized equations as a machine learning closure and strictly constrained.
- Abstract: Trained neural networks (NN) have attractive features for closing governing equations. There are many methods that are showing promise, but all can fail in cases when small errors consequentially violate physical reality, such as a solution boundedness condition. An NN formulation is introduced to preclude spurious oscillations that violate solution boundedness or positivity. It is embedded in the discretized equations as a machine learning closure and strictly constrained, inspired by total variation diminishing (TVD) methods for hyperbolic conservation laws. The constraint is exactly enforced during gradient-descent training by rescaling the NN parameters, which maps them onto an explicit feasible set. Demonstrations show that the constrained NN closure model usefully recovers linear and nonlinear hyperbolic phenomena and anti-diffusion while enforcing the non-oscillatory property. Finally, the model is applied to subgrid-scale (SGS) modeling of a turbulent reacting flow, for which it suppresses spurious oscillations in scalar fields that otherwise violate the solution boundedness. It outperforms a simple penalization of oscillations in the loss function.
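The TVD property that motivates the constrained closure can be illustrated with a classical minmod-limited upwind scheme for linear advection. The sketch below is not the paper's neural-network model; it only demonstrates the total variation diminishing, non-oscillatory behavior that the constrained closure is designed to preserve (scheme choice and names are illustrative):

```python
import numpy as np

def minmod(a, b):
    # Minmod slope limiter: zero at extrema, otherwise the smaller-magnitude slope.
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advection_step(u, c):
    # One step of a MUSCL-type TVD scheme for u_t + a u_x = 0 (a > 0),
    # with CFL number c = a*dt/dx in (0, 1]. Periodic boundaries.
    du_left = u - np.roll(u, 1)        # backward differences u_i - u_{i-1}
    du_right = np.roll(u, -1) - u      # forward differences u_{i+1} - u_i
    slope = minmod(du_left, du_right)  # limited (undivided) slope per cell
    # Second-order reconstructed value at the right face of each cell
    u_face = u + 0.5 * (1.0 - c) * slope
    flux = c * u_face                  # upwind flux through face i+1/2
    return u - (flux - np.roll(flux, 1))

def total_variation(u):
    return np.sum(np.abs(u - np.roll(u, 1)))

u = np.where(np.arange(100) < 50, 1.0, 0.0)  # step initial condition
tv0 = total_variation(u)
for _ in range(40):
    u = tvd_advection_step(u, 0.5)
# Total variation never increases, and no over/undershoots appear:
assert total_variation(u) <= tv0 + 1e-12
assert u.min() >= -1e-12 and u.max() <= 1.0 + 1e-12
```

An unlimited second-order scheme (e.g., Lax-Wendroff) would produce oscillations at the step; the limiter suppresses them, which is the same boundedness requirement the paper enforces on its NN closure by rescaling the network parameters onto a feasible set.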
Related papers
- Coupled Integral PINN for conservation law [1.9720482348156743]
The Physics-Informed Neural Network (PINN) is an innovative approach to solve a diverse array of partial differential equations.
This paper introduces a novel Coupled Integrated PINN methodology that fits the integral form of the equations using additional neural networks.
arXiv Detail & Related papers (2024-11-18T04:32:42Z) - FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
Numerical simulation and optimization of technical systems described by partial differential equations are expensive.
A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method.
In this paper, we extend this approach to saddle-point and nonlinear fluid dynamics problems.
arXiv Detail & Related papers (2024-09-06T07:17:01Z) - Beyond Closure Models: Learning Chaotic-Systems via Physics-Informed Neural Operators [78.64101336150419]
Predicting the long-term behavior of chaotic systems is crucial for various applications such as climate modeling.
An alternative approach to such a fully resolved simulation is to use a coarse grid and then correct its errors with a learned temporal model.
We propose an alternative end-to-end learning approach using a physics-informed neural operator (PINO) that overcomes this limitation.
arXiv Detail & Related papers (2024-08-09T17:05:45Z) - Exact dynamics of quantum dissipative $XX$ models: Wannier-Stark localization in the fragmented operator space [49.1574468325115]
We find an exceptional point at a critical dissipation strength that separates oscillating and non-oscillating decay.
We also describe a different type of dissipation that leads to a single decay mode in the whole operator subspace.
arXiv Detail & Related papers (2024-05-27T16:11:39Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - Physics-Informed Neural Network Method for Parabolic Differential
Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the advection-dispersion equation (ADE) solution make the (common in PINN) Latin hypercube sampling of the equation's residual highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z) - On Robust Classification using Contractive Hamiltonian Neural ODEs [8.049462923912902]
We employ contraction theory to improve the robustness of neural ODEs (NODEs).
In NODEs, the input data corresponds to the initial condition of dynamical systems.
We propose a class of contractive Hamiltonian NODEs (CH-NODEs).
arXiv Detail & Related papers (2022-03-22T15:16:36Z) - Decimation technique for open quantum systems: a case study with
driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z) - Least-Squares Neural Network (LSNN) Method For Scalar Nonlinear
Hyperbolic Conservation Laws: Discrete Divergence Operator [4.3226069572849966]
A least-squares neural network (LSNN) method was introduced for solving scalar linear hyperbolic conservation laws.
This paper rewrites HCLs in their divergence form in space-time and introduces a new discrete divergence operator.
arXiv Detail & Related papers (2021-10-21T04:50:57Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Least-Squares ReLU Neural Network (LSNN) Method For Scalar Nonlinear
Hyperbolic Conservation Law [3.6525914200522656]
We introduce the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction problem with discontinuous solution.
We show that the method outperforms mesh-based numerical methods in terms of the number of degrees of freedom.
arXiv Detail & Related papers (2021-05-25T02:59:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.