From Initial Data to Boundary Layers: Neural Networks for Nonlinear Hyperbolic Conservation Laws
- URL: http://arxiv.org/abs/2506.01453v1
- Date: Mon, 02 Jun 2025 09:12:13 GMT
- Title: From Initial Data to Boundary Layers: Neural Networks for Nonlinear Hyperbolic Conservation Laws
- Authors: Igor Ciril, Khalil Haddaoui, Yohann Tendero
- Abstract summary: We address the approximation of entropy solutions to initial-boundary value problems for nonlinear strictly hyperbolic conservation laws using neural networks. A general and systematic framework is introduced for the design of efficient and reliable learning algorithms, combining fast convergence during training with accurate predictions.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We address the approximation of entropy solutions to initial-boundary value problems for nonlinear strictly hyperbolic conservation laws using neural networks. A general and systematic framework is introduced for the design of efficient and reliable learning algorithms, combining fast convergence during training with accurate predictions. The methodology is assessed through a series of one-dimensional scalar test cases, highlighting its potential applicability to more complex industrial scenarios.
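The abstract does not spell out the network architecture or the training objective, so the following is only a rough, minimal sketch of one common way to fit a neural surrogate to a one-dimensional scalar initial-boundary value problem: a physics-informed residual loss for the inviscid Burgers flux f(u) = u^2/2 on (0,1) x (0,1), with an initial datum and an inflow datum at x = 0. Everything here (the `u_net` architecture, the flux, the sampling, and the loss weights) is an illustrative assumption rather than the authors' method, and the strong-form residual ignores the entropy and shock treatment that the paper is actually concerned with.

```python
import math
import torch

torch.manual_seed(0)

# Hypothetical surrogate u_theta(t, x) on [0, 1] x [0, 1]; not the authors' architecture.
u_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def flux(u):
    # Inviscid Burgers flux f(u) = u^2 / 2, a standard strictly convex test flux (illustrative choice).
    return 0.5 * u ** 2

def pde_residual(t, x):
    # Strong-form residual u_t + f(u)_x; valid away from shocks, no entropy condition enforced here.
    u = u_net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    f_x = torch.autograd.grad(flux(u).sum(), x, create_graph=True)[0]
    return u_t + f_x

u0 = lambda x: torch.sin(2.0 * math.pi * x)   # illustrative initial datum
g0 = lambda t: torch.zeros_like(t)            # illustrative inflow datum at x = 0

opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
for step in range(2000):
    t_c = torch.rand(256, 1, requires_grad=True)   # interior collocation points
    x_c = torch.rand(256, 1, requires_grad=True)
    x_i = torch.rand(64, 1)                        # points on the initial line t = 0
    t_b = torch.rand(64, 1)                        # points on the inflow boundary x = 0

    loss_pde = (pde_residual(t_c, x_c) ** 2).mean()
    loss_ic = ((u_net(torch.cat([torch.zeros_like(x_i), x_i], dim=1)) - u0(x_i)) ** 2).mean()
    loss_bc = ((u_net(torch.cat([t_b, torch.zeros_like(t_b)], dim=1)) - g0(t_b)) ** 2).mean()

    loss = loss_pde + loss_ic + loss_bc
    opt.zero_grad()
    loss.backward()
    opt.step()
```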
Related papers
- Neural Entropy-stable conservative flux form neural networks for learning hyperbolic conservation laws [2.8680286413498903]
We propose a neural entropy-stable conservative flux form neural network (NESCFN) for learning hyperbolic conservation laws. Our approach removes this dependency by embedding entropy-stable design principles into the learning process itself. (A schematic conservative flux-form update is sketched after this list.)
arXiv Detail & Related papers (2025-07-02T15:18:04Z)
- Certified Neural Approximations of Nonlinear Dynamics [52.79163248326912]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
- Neural Contraction Metrics with Formal Guarantees for Discrete-Time Nonlinear Dynamical Systems [17.905596843865705]
Contraction metrics provide a powerful framework for analyzing stability, robustness, and convergence of various dynamical systems. However, identifying these metrics for complex nonlinear systems remains an open challenge due to the lack of effective tools. This paper develops scalable, verifiable contraction metrics for discrete-time nonlinear systems.
arXiv Detail & Related papers (2025-04-23T21:27:32Z)
- Fréchet Cumulative Covariance Net for Deep Nonlinear Sufficient Dimension Reduction with Random Objects [22.156257535146004]
We introduce a new statistical dependence measure termed Fréchet Cumulative Covariance (FCCov) and develop a novel nonlinear SDR framework based on FCCov. Our approach is not only applicable to complex non-Euclidean data, but also exhibits robustness against outliers. We prove that our method with squared Frobenius norm regularization achieves unbiasedness at the $\sigma$-field level.
arXiv Detail & Related papers (2025-02-21T10:55:50Z)
- ENFORCE: Nonlinear Constrained Learning with Adaptive-depth Neural Projection [0.0]
We introduce ENFORCE, a neural network architecture that uses an adaptive projection module (AdaNP) to enforce nonlinear equality constraints in the predictions. We prove that our projection mapping is 1-Lipschitz, making it well-suited for stable training. The predictions of our new architecture satisfy $N_C$ equality constraints that are nonlinear in both the inputs and outputs of the neural network.
arXiv Detail & Related papers (2025-02-10T18:52:22Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used in machine learning.
In this paper we examine the use of convex neural network recovery models.
We show that all stationary points of the nonconvex training objective can be characterized as global optima of a subsampled convex program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- On the generalization of learning algorithms that do not converge [54.122745736433856]
Generalization analyses of deep learning typically assume that the training converges to a fixed point.
Recent results indicate that in practice, the weights of deep neural networks optimized with gradient descent often oscillate indefinitely.
arXiv Detail & Related papers (2022-08-16T21:22:34Z)
- Subquadratic Overparameterization for Shallow Neural Networks [60.721751363271146]
We provide an analytical framework that allows us to adopt standard neural training strategies.
We achieve the desiderata via Polyak-Łojasiewicz, smoothness, and standard assumptions.
arXiv Detail & Related papers (2021-11-02T20:24:01Z)
- The Neural Network shifted-Proper Orthogonal Decomposition: a Machine Learning Approach for Non-linear Reduction of Hyperbolic Equations [0.0]
In this work we approach the problem of automatically detecting the correct pre-processing transformation in a statistical learning framework.
The purely data-driven method allowed us to generalise the existing approaches of linear subspace manipulation to non-linear hyperbolic problems with unknown advection fields.
The proposed algorithm has been validated on simple test cases to benchmark its performance and later successfully applied to a multiphase simulation.
arXiv Detail & Related papers (2021-08-14T15:13:35Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage-Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
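The NLISTA entry above describes a learned iterative shrinkage-thresholding scheme for sparse nonlinear regression. As a rough illustration of the general idea of unrolling such an algorithm with learned weights and thresholds, here is a small LISTA-style sketch for a linear sparse model; NLISTA's nonlinear measurement model and exact parameterization are not reproduced, and the class name `LearnedISTA` and the toy data are assumptions introduced for this example.

```python
import torch

class LearnedISTA(torch.nn.Module):
    """Unrolled iterative shrinkage-thresholding with learned weights and thresholds.

    Illustrative LISTA-style sketch for y = A x + noise with sparse x; the NLISTA paper
    targets a nonlinear measurement model that this toy version does not reproduce.
    """
    def __init__(self, m, n, n_layers=8):
        super().__init__()
        self.W_y = torch.nn.ModuleList([torch.nn.Linear(m, n, bias=False) for _ in range(n_layers)])
        self.W_x = torch.nn.ModuleList([torch.nn.Linear(n, n, bias=False) for _ in range(n_layers)])
        self.theta = torch.nn.Parameter(0.1 * torch.ones(n_layers))  # learned soft thresholds

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.W_x[0].in_features, device=y.device)
        for k, (wy, wx) in enumerate(zip(self.W_y, self.W_x)):
            z = wy(y) + wx(x)                                          # learned gradient-like step
            x = torch.sign(z) * torch.relu(z.abs() - self.theta[k])    # soft thresholding
        return x

# Toy usage: recover sparse codes from linear measurements (synthetic data, not the paper's setup).
m, n = 32, 64
A = torch.randn(m, n) / m ** 0.5
x_true = torch.randn(512, n) * (torch.rand(512, n) < 0.1)             # roughly 10% non-zeros
y = x_true @ A.T + 0.01 * torch.randn(512, m)

model = LearnedISTA(m, n)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    loss = ((model(y) - x_true) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```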
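For the entropy-stable conservative flux form network (the first related entry, flagged above), the key idea of a conservative flux form is that a learned two-point numerical flux is plugged into a finite-volume style update, so the discrete total of u is conserved regardless of what the network has learned. The sketch below shows only that skeleton; `flux_net`, the grid, and the time step are illustrative assumptions, and the entropy-stability constraints that the paper embeds into training are not enforced here.

```python
import math
import torch

# Hypothetical learned two-point numerical flux F_theta(u_left, u_right); not the authors'
# architecture, and without the entropy-stable design principles of the NESCFN paper.
flux_net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def conservative_step(u, dt, dx):
    """One explicit update in conservative flux form:
    u_i^{n+1} = u_i^n - (dt/dx) * (F(u_i, u_{i+1}) - F(u_{i-1}, u_i)),
    with periodic boundaries for simplicity, so the total of u is conserved exactly
    (the interface fluxes telescope) no matter what flux_net computes."""
    u_l = torch.roll(u, shifts=1)    # u_{i-1}
    u_r = torch.roll(u, shifts=-1)   # u_{i+1}
    F_right = flux_net(torch.stack([u, u_r], dim=-1)).squeeze(-1)   # F(u_i, u_{i+1})
    F_left = flux_net(torch.stack([u_l, u], dim=-1)).squeeze(-1)    # F(u_{i-1}, u_i)
    return u - dt / dx * (F_right - F_left)

# Toy rollout on a uniform grid with a smooth initial profile.
x = torch.linspace(0.0, 1.0, 200)
u = torch.sin(2.0 * math.pi * x)
for _ in range(50):
    u = conservative_step(u, dt=1e-3, dx=float(x[1] - x[0]))
```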
This list is automatically generated from the titles and abstracts of the papers in this site.