Conservative approximation-based feedforward neural network for WENO schemes
- URL: http://arxiv.org/abs/2507.06190v1
- Date: Tue, 08 Jul 2025 17:19:48 GMT
- Title: Conservative approximation-based feedforward neural network for WENO schemes
- Authors: Kwanghyuk Park, Jiaxi Gu, Jae-Hun Jung
- Abstract summary: We present a feedforward neural network based on the conservative approximation to the derivative from point values. The resulting schemes, WENO3-CADNNs, outperform WENO3-Z and achieve accuracy comparable to WENO5-JS.
- Score: 4.867849275247251
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we present a feedforward neural network based on the conservative approximation to the derivative from point values, for use in weighted essentially non-oscillatory (WENO) schemes for solving hyperbolic conservation laws. The network, whose inputs are point values from a three-point stencil and whose outputs are two nonlinear weights, replaces the classical WENO weighting procedure. For the training phase, we employ supervised learning and create a new labeled dataset for one-dimensional conservative approximation, constructing a numerical flux function from the given point values such that the flux difference approximates the derivative to high-order accuracy. A symmetric-balancing term is introduced into the loss function so that the network both matches the conservative approximation to the derivative and satisfies the symmetry property that WENO3-JS and WENO3-Z share. The resulting schemes, WENO3-CADNNs, generalize robustly across various benchmark scenarios and resolutions, outperforming WENO3-Z and achieving accuracy comparable to WENO5-JS.
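The following minimal sketch (not the authors' released code) illustrates the architecture the abstract describes: a small feedforward network maps the three-point stencil to two nonlinear weights, which then combine the two classical WENO3 candidate fluxes. The width, depth, and activation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WENO3WeightNet(nn.Module):
    """Maps a three-point stencil (f_{i-1}, f_i, f_{i+1}) to two WENO weights."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, stencil: torch.Tensor) -> torch.Tensor:
        # softmax keeps the two weights positive and summing to one,
        # as WENO weights must
        return torch.softmax(self.net(stencil), dim=-1)

def weno3_flux(f: torch.Tensor, weight_net: WENO3WeightNet) -> torch.Tensor:
    """Numerical flux at i+1/2 for interior points of the 1-D array f."""
    fm, f0, fp = f[:-2], f[1:-1], f[2:]        # f_{i-1}, f_i, f_{i+1}
    h0 = -0.5 * fm + 1.5 * f0                  # candidate flux, stencil {i-1, i}
    h1 = 0.5 * f0 + 0.5 * fp                   # candidate flux, stencil {i, i+1}
    w = weight_net(torch.stack([fm, f0, fp], dim=-1))
    return w[:, 0] * h0 + w[:, 1] * h1

# The conservative derivative approximation is the flux difference
# (h_{i+1/2} - h_{i-1/2}) / dx, which is what the training labels target.
```

The classical linear weights for this reconstruction are d = (1/3, 2/3); training with the conservative-approximation dataset and the symmetric-balancing loss would drive the network toward nonlinear weights near these values in smooth regions.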
Related papers
- Convolution-weighting method for the physics-informed neural network: A Primal-Dual Optimization Perspective [14.65008276932511]
Physics-informed neural networks (PINNs) are widely employed to solve partial differential equations (PDEs). PINNs are typically optimized over a finite set of points, which makes it challenging to guarantee their convergence and accuracy. We propose a new weighting scheme that adaptively shifts the loss weights from isolated points to their continuous neighborhood regions.
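As a rough illustration of the summarized idea (isolated-point weights replaced by neighborhood weights), the sketch below smooths pointwise PDE residuals with an averaging convolution and uses the result as loss weights; the kernel size and normalization are assumptions, not the paper's exact primal-dual scheme.

```python
import torch
import torch.nn.functional as F

def neighborhood_weights(residual_grid: torch.Tensor, k: int = 5) -> torch.Tensor:
    # residual_grid: (1, 1, H, W) pointwise PDE residuals on a 2-D grid
    kernel = torch.ones(1, 1, k, k) / (k * k)          # local averaging kernel
    smoothed = F.conv2d(residual_grid.abs().detach(), kernel, padding=k // 2)
    return smoothed / smoothed.mean()                  # normalized adaptive weights

# weighted PINN loss: (neighborhood_weights(r) * r**2).mean()
```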
arXiv Detail & Related papers (2025-06-24T17:13:51Z) - Rational-WENO: A lightweight, physically-consistent three-point weighted essentially non-oscillatory scheme [14.120671138290104]
We employ a rational neural network to accurately estimate the local smoothness of the solution.
This approach achieves a granular reconstruction with significantly reduced dissipation.
We demonstrate the effectiveness of our approach on several one-, two-, and three-dimensional fluid flow problems.
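A rational neural network replaces fixed activations with trainable ratios of polynomials. The sketch below shows one such activation of type (3, 2); a small network of these units could play the smoothness-estimation role described above. The initial coefficients and the denominator guard are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Trainable rational activation p(x)/q(x) of type (3, 2)."""
    def __init__(self):
        super().__init__()
        self.p = nn.Parameter(torch.tensor([0.0, 1.0, 0.0, 0.1]))  # numerator coeffs
        self.q = nn.Parameter(torch.tensor([1.0, 0.0, 0.1]))       # denominator coeffs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num = sum(c * x**i for i, c in enumerate(self.p))
        den = sum(c * x**i for i, c in enumerate(self.q))
        return num / (den.abs() + 1e-6)  # guard against a vanishing denominator
```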
arXiv Detail & Related papers (2024-09-13T22:11:03Z) - A third-order finite difference weighted essentially non-oscillatory scheme with shallow neural network [9.652617666391926]
We introduce a finite difference weighted essentially non-oscillatory (WENO) scheme based on a shallow neural network for hyperbolic conservation laws.
We employ supervised learning and design two loss functions, one with the mean squared error and the other with the mean squared logarithmic error, where the WENO3-JS weights are computed as the labels.
The constructed WENO3-SNN schemes outperform WENO3-JS and WENO3-Z in one-dimensional examples and show improved behavior in two-dimensional examples.
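Since the WENO3-JS weights serve as labels, dataset generation reduces to evaluating the classical Jiang-Shu formulas on sampled stencils. A sketch under that reading:

```python
import numpy as np

def weno3_js_weights(fm, f0, fp, eps: float = 1e-6):
    """Nonlinear WENO3-JS weights from a three-point stencil (f_{i-1}, f_i, f_{i+1})."""
    beta0 = (f0 - fm) ** 2            # smoothness of stencil {i-1, i}
    beta1 = (fp - f0) ** 2            # smoothness of stencil {i, i+1}
    a0 = (1.0 / 3.0) / (eps + beta0) ** 2   # ideal weights d = (1/3, 2/3)
    a1 = (2.0 / 3.0) / (eps + beta1) ** 2
    s = a0 + a1
    return a0 / s, a1 / s             # (w0, w1), the training labels

# The mean-squared-logarithmic-error variant would use the same labels,
# with the loss taken on log-transformed values.
```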
arXiv Detail & Related papers (2024-07-08T18:55:57Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way toward practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
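For context only: a generic (and deliberately naive) way to obtain lattice equivariance is to average a network over the lattice point group, as sketched below for the square-lattice group D4; the actual LENN construction achieves the same symmetry more efficiently through weight-tying.

```python
import torch

def symmetrize(f, x: torch.Tensor) -> torch.Tensor:
    """Average f over the 8 point-group symmetries of the square lattice,
    yielding an equivariant map: f_sym(g x) = g f_sym(x) for all g in D4."""
    outs = []
    for flip in (False, True):
        for k in range(4):
            y = torch.flip(x, dims=(-1,)) if flip else x
            y = torch.rot90(y, k, dims=(-2, -1))       # apply g to the input
            z = f(y)
            z = torch.rot90(z, -k, dims=(-2, -1))      # apply g^{-1} to the output
            if flip:
                z = torch.flip(z, dims=(-1,))
            outs.append(z)
    return torch.stack(outs).mean(dim=0)
```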
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Learning WENO for entropy stable schemes to solve conservation laws [0.0]
TeCNO schemes form a class of arbitrarily high-order entropy stable finite difference solvers. Third-order weighted essentially non-oscillatory (WENO) schemes have been designed to satisfy the sign property. We propose a variant of the SP-WENO, termed Deep Sign-Preserving WENO (DSP-WENO), in which a neural network is trained to learn the WENO weighting strategy.
arXiv Detail & Related papers (2024-03-21T21:39:05Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
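The memory and throughput gains come from running parts of the computation in reduced precision. A generic PyTorch mixed-precision training step, illustrating the mechanism (not the paper's operator-specific scheme), is shown below; model, data, and loss are placeholders.

```python
import torch

def train_step(model, batch, target, optimizer, scaler, loss_fn):
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(batch), target)      # forward pass in half precision
    scaler.scale(loss).backward()                 # scaled to avoid fp16 underflow
    scaler.step(optimizer)                        # unscales, then applies the update
    scaler.update()
    return loss.item()

# scaler = torch.cuda.amp.GradScaler()
```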
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
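A sketch of the pipeline as summarized: snapshots of PDE solutions yield a POD basis via an SVD, and a network maps PDE parameters to coefficients in that basis. Snapshot dimensions, the energy cutoff, and the placeholder regression are assumptions.

```python
import numpy as np

snapshots = np.random.rand(200, 4096)        # placeholder: 200 solutions on 4096 dofs
U, S, Vt = np.linalg.svd(snapshots - snapshots.mean(0), full_matrices=False)
r = np.searchsorted(np.cumsum(S**2) / np.sum(S**2), 0.999) + 1
basis = Vt[:r]                               # (r, 4096) reduced POD basis

def reduced_solution(coeffs: np.ndarray) -> np.ndarray:
    # The branch network would output `coeffs` from the PDE parameters;
    # the reduced-order approximation combines them with the POD basis.
    return coeffs @ basis
```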
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates [7.094295642076582]
The mean-field regime is a theoretically attractive alternative to the NTK (lazy training) regime.
We establish a new linear convergence result for two-layer neural networks trained by continuous-time noisy gradient descent in the mean-field regime.
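Discretized, continuous-time noisy gradient descent is a Langevin-type update: a gradient step plus Gaussian noise scaled like sqrt(2 * temperature * lr). A minimal sketch, with illustrative step size and temperature:

```python
import torch

def noisy_gd_step(params, loss, lr: float = 1e-2, temperature: float = 1e-4):
    # params: list of tensors with requires_grad=True that produced `loss`
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            # gradient step plus Langevin noise
            p -= lr * g + (2 * temperature * lr) ** 0.5 * torch.randn_like(p)
```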
arXiv Detail & Related papers (2022-05-19T21:05:40Z) - Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
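The knots in question are the breakpoints of the network's piecewise-linear output: a hidden unit relu(w*x + b) kinks at x = -b/w, which need not coincide with any data point. A tiny numeric illustration:

```python
import numpy as np

w = np.array([1.5, -0.7, 2.0])     # hidden-layer weights (1-D input)
b = np.array([0.3, 0.2, -1.0])     # hidden-layer biases
knots = -b / w                     # breakpoints of the piecewise-linear output
print(knots)                       # [-0.2  0.28571429  0.5]
```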
arXiv Detail & Related papers (2021-11-03T15:14:20Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks offer improved accuracy and a significant reduction in memory consumption.
However, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
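An implicit network defines its hidden state as the solution of a fixed-point equation rather than by a feedforward pass. The sketch below solves x = tanh(W x + U u + b) by simple iteration, with a crude infinity-norm contraction check standing in for the paper's non-Euclidean well-posedness conditions.

```python
import numpy as np

def implicit_layer(W, U, b, u, tol=1e-8, max_iter=500):
    # Since tanh is 1-Lipschitz, ||W||_inf < 1 makes the map a contraction
    # in the infinity norm, guaranteeing a unique fixed point.
    assert np.abs(W).sum(axis=1).max() < 1.0
    x = np.zeros(W.shape[0])
    for _ in range(max_iter):
        x_new = np.tanh(W @ x + U @ u + b)
        if np.abs(x_new - x).max() < tol:
            break
        x = x_new
    return x
```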
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain, using a combination of sine functions, for training BNNs.
Experiments on several benchmark datasets and neural architectures show that binary networks learned with our method achieve state-of-the-art accuracy.
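The underlying observation is that the sign function is a square wave, so it admits a Fourier expansion in odd sine harmonics; truncating the series gives a smooth surrogate whose derivative can be used in the backward pass. A sketch (the period and number of terms are illustrative):

```python
import numpy as np

def fourier_sign(x: np.ndarray, n_terms: int = 10, period: float = 2.0) -> np.ndarray:
    """Truncated Fourier series of the square wave equal to sign(x)
    on (-period/2, period/2)."""
    k = 2 * np.arange(n_terms) + 1                  # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(
        np.sin(2 * np.pi * np.outer(x, k) / period) / k, axis=1
    )

# The backward pass would use the (smooth) derivative of this truncated
# series in place of the zero-almost-everywhere derivative of sign.
```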
arXiv Detail & Related papers (2021-03-01T08:25:26Z)