A TVD neural network closure and application to turbulent combustion
- URL: http://arxiv.org/abs/2408.03413v1
- Date: Tue, 6 Aug 2024 19:22:13 GMT
- Title: A TVD neural network closure and application to turbulent combustion
- Authors: Seung Won Suh, Jonathan F MacArt, Luke N Olson, Jonathan B Freund
- Abstract summary: Trained neural networks (NN) have attractive features for closing governing equations, but they can stray from physical reality.
A NN formulation is introduced to preclude spurious oscillations that violate solution boundedness or positivity.
It is embedded in the discretized equations as a machine learning closure and strictly constrained.
- Score: 1.374949083138427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trained neural networks (NN) have attractive features for closing governing equations, but in the absence of additional constraints, they can stray from physical reality. A NN formulation is introduced to preclude spurious oscillations that violate solution boundedness or positivity. It is embedded in the discretized equations as a machine learning closure and strictly constrained, inspired by total variation diminishing (TVD) methods for hyperbolic conservation laws. The constraint is exactly enforced during gradient-descent training by rescaling the NN parameters, which maps them onto an explicit feasible set. Demonstrations show that the constrained NN closure model usefully recovers linear and nonlinear hyperbolic phenomena and anti-diffusion while enforcing the non-oscillatory property. Finally, the model is applied to subgrid-scale (SGS) modeling of a turbulent reacting flow, for which it suppresses spurious oscillations in scalar fields that otherwise violate the solution boundedness. It outperforms a simple penalization of oscillations in the loss function.
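The mechanism described in the abstract, enforcing the constraint exactly by rescaling the NN parameters onto a feasible set after each gradient-descent update, can be illustrated with a minimal sketch. Everything below (the ClosureNet architecture, the l1-norm surrogate feasible set, the bound c_max, and the helper rescale_to_feasible) is a hypothetical stand-in for illustration; the paper's actual closure form and TVD-derived feasible set are not reproduced here.
```python
import torch
import torch.nn as nn

class ClosureNet(nn.Module):
    """Small MLP standing in for the learned flux-correction closure (hypothetical)."""
    def __init__(self, n_in: int = 5, n_hidden: int = 32):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, stencil: torch.Tensor) -> torch.Tensor:
        return self.out(self.hidden(stencil))

def rescale_to_feasible(layer: nn.Linear, c_max: float = 1.0) -> None:
    """Project the output layer onto a surrogate feasible set
    {||W||_1 + ||b||_1 <= c_max} by uniformly rescaling its parameters.
    The paper's feasible set comes from TVD conditions; this l1 bound is only
    an assumption used to illustrate the rescaling mechanism."""
    with torch.no_grad():
        total = layer.weight.abs().sum() + layer.bias.abs().sum()
        if total > c_max:
            scale = c_max / total
            layer.weight.mul_(scale)
            layer.bias.mul_(scale)

model = ClosureNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy training data: local solution stencils -> target closure values (illustration only).
x = torch.randn(256, 5)
y = torch.randn(256, 1)

for step in range(100):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)
    loss.backward()
    opt.step()
    rescale_to_feasible(model.out)  # constraint holds exactly after every update
```
Because the projection is a closed-form rescaling, it slots into an ordinary gradient-descent loop without changing the optimizer, which mirrors the role the abstract assigns to the exactly enforced constraint.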
Related papers
- Exact dynamics of quantum dissipative $XX$ models: Wannier-Stark localization in the fragmented operator space [49.1574468325115]
We find an exceptional point at a critical dissipation strength that separates oscillating and non-oscillating decay.
We also describe a different type of dissipation that leads to a single decay mode in the whole operator subspace.
arXiv Detail & Related papers (2024-05-27T16:11:39Z) - Stabilized Neural Differential Equations for Learning Dynamics with Explicit Constraints [4.656302602746229]
We propose stabilized neural differential equations (SNDEs) to enforce arbitrary manifold constraints for neural differential equations.
Our approach is based on a stabilization term that, when added to the original dynamics, renders the constraint manifold provably stable.
Due to its simplicity, our method is compatible with all common neural differential equation (NDE) models and broadly applicable.
arXiv Detail & Related papers (2023-06-16T10:16:59Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the MF limit that demonstrate distinctive behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z) - Optimization-Induced Graph Implicit Nonlinear Diffusion [64.39772634635273]
We propose a new graph convolution variant, called Graph Implicit Nonlinear Diffusion (GIND).
GIND implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing.
We show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective.
arXiv Detail & Related papers (2022-06-29T06:26:42Z) - On Robust Classification using Contractive Hamiltonian Neural ODEs [8.049462923912902]
We employ contraction theory to improve the robustness of neural ODEs (NODEs).
In NODEs, the input data corresponds to the initial condition of a dynamical system.
We propose a class of contractive Hamiltonian NODEs (CH-NODEs).
arXiv Detail & Related papers (2022-03-22T15:16:36Z) - Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z) - Least-Squares Neural Network (LSNN) Method For Scalar Nonlinear Hyperbolic Conservation Laws: Discrete Divergence Operator [4.3226069572849966]
A least-squares neural network (LSNN) method was introduced for solving scalar linear hyperbolic conservation laws.
This paper rewrites hyperbolic conservation laws (HCLs) in their space-time divergence form and introduces a new discrete divergence operator.
arXiv Detail & Related papers (2021-10-21T04:50:57Z) - The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion [29.489737359897312]
We study the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD).
We show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space.
arXiv Detail & Related papers (2021-07-19T20:18:57Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Least-Squares ReLU Neural Network (LSNN) Method For Scalar Nonlinear Hyperbolic Conservation Law [3.6525914200522656]
We introduce the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction problem with a discontinuous solution.
We show that the method outperforms mesh-based numerical methods in terms of the number of degrees of freedom.
arXiv Detail & Related papers (2021-05-25T02:59:48Z)