Application of Machine Learning and Convex Limiting to Subgrid Flux Modeling in the Shallow-Water Equations
- URL: http://arxiv.org/abs/2407.17214v1
- Date: Wed, 24 Jul 2024 12:14:19 GMT
- Title: Application of Machine Learning and Convex Limiting to Subgrid Flux Modeling in the Shallow-Water Equations
- Authors: Ilya Timofeyev, Alexey Schwarzmann, Dmitri Kuzmin
- Abstract summary: We propose a combination of machine learning and flux limiting for property-preserving subgrid scale modeling.
The results of our numerical studies confirm that the proposed combination of machine learning with monolithic convex limiting produces meaningful closures even in scenarios for which the network was not trained.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a combination of machine learning and flux limiting for property-preserving subgrid scale modeling in the context of flux-limited finite volume methods for the one-dimensional shallow-water equations. The numerical fluxes of a conservative target scheme are fitted to the coarse-mesh averages of a monotone fine-grid discretization using a neural network to parametrize the subgrid scale components. To ensure positivity preservation and the validity of local maximum principles, we use a flux limiter that constrains the intermediate states of an equivalent fluctuation form to stay in a convex admissible set. The results of our numerical studies confirm that the proposed combination of machine learning with monolithic convex limiting produces meaningful closures even in scenarios for which the network was not trained.
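To make the mechanism concrete, here is a minimal Python sketch of the ingredients the abstract combines: a monotone low-order flux, a neural-network subgrid correction, and a convex limiter that scales the correction so the height of the interface bar state stays within local bounds. The stencil, the untrained stand-in network `subgrid_net`, and the height-only limiting are illustrative assumptions, not the authors' exact monolithic convex limiting scheme.
```python
import numpy as np

g = 9.81  # gravitational constant

def flux(U):
    """Exact shallow-water flux for U = (h, hu)."""
    h, hu = U
    return np.array([hu, hu * hu / h + 0.5 * g * h * h])

def wave_speed(UL, UR):
    """Upper bound on the local wave speed (Rusanov viscosity)."""
    return max(abs(UL[1] / UL[0]) + np.sqrt(g * UL[0]),
               abs(UR[1] / UR[0]) + np.sqrt(g * UR[0]))

_rng = np.random.default_rng(0)
_W1 = 0.1 * _rng.normal(size=(8, 4))   # untrained stand-in for the closure net
_W2 = 0.1 * _rng.normal(size=(2, 8))

def subgrid_net(UL, UR):
    """Placeholder for the trained network: local stencil -> flux correction."""
    return _W2 @ np.tanh(_W1 @ np.concatenate([UL, UR]))

def limited_interface_flux(UL, UR):
    """Rusanov flux plus an NN correction scaled by alpha in [0, 1] so the
    height of the interface bar state stays in [min(hL, hR), max(hL, hR)]."""
    lam = wave_speed(UL, UR)
    F_lo = 0.5 * (flux(UL) + flux(UR)) - 0.5 * lam * (UR - UL)
    dF = subgrid_net(UL, UR)
    hbar = 0.5 * (UL[0] + UR[0]) - (flux(UR)[0] - flux(UL)[0]) / (2 * lam)
    hmin, hmax = min(UL[0], UR[0]), max(UL[0], UR[0])
    if dF[0] > 1e-14:
        alpha = min(1.0, 2 * lam * (hmax - hbar) / dF[0])
    elif dF[0] < -1e-14:
        alpha = min(1.0, 2 * lam * (hmin - hbar) / dF[0])
    else:
        alpha = 1.0
    return F_lo + max(0.0, alpha) * dF   # clip alpha to [0, 1]

def step(U, dt, dx):
    """One forward-Euler step on a periodic grid; U has shape (2, N)."""
    N = U.shape[1]
    F = np.array([limited_interface_flux(U[:, i], U[:, (i + 1) % N])
                  for i in range(N)]).T        # F[:, i] = flux at i + 1/2
    return U - dt / dx * (F - np.roll(F, 1, axis=1))

# usage: dam-break-like initial data; positivity of h is preserved
N, dx, dt = 100, 0.01, 0.001
U = np.vstack([np.where(np.arange(N) < N // 2, 2.0, 1.0), np.zeros(N)])
U = step(U, dt, dx)
print("min water height:", U[0].min())
```
In the paper's setting the network is trained so that the corrected coarse-grid fluxes reproduce coarse-mesh averages of a fine-grid reference solution; the limiter above only enforces the admissibility constraints, it does not supply the closure.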
Related papers
- Probabilistic Flux Limiters [0.873811641236639]
A popular method to virtually eliminate Gibbs oscillations in under-resolved simulations is to use a flux limiter.
Here, we introduce a conceptually distinct type of flux limiter that is designed to handle the effects of randomness in the model.
We show that a machine learned, probabilistic flux limiter may be used in a shock capturing code to more accurately capture shock profiles.
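The blending mechanism a flux limiter implements is easy to sketch. Below is a hedged Python illustration (hypothetical names, not the paper's code): a conventional limiter fixes one function phi(r) of the local smoothness ratio, while a probabilistic limiter draws phi from a distribution over admissible limiter curves; here that distribution is a random convex combination of minmod and superbee rather than a learned one.
```python
import numpy as np

def smoothness_ratio(u, i):
    """r_i = (u_i - u_{i-1}) / (u_{i+1} - u_i), guarded against division by zero."""
    denom = u[i + 1] - u[i]
    return (u[i] - u[i - 1]) / denom if abs(denom) > 1e-12 else 2.0

def minmod(r):
    """A classical deterministic limiter, for reference."""
    return max(0.0, min(1.0, r))

def superbee(r):
    """Another classical limiter; more aggressive near discontinuities."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def sampled_limiter(r, rng):
    """Hypothetical probabilistic limiter: draw one curve from a family of
    admissible limiters (here a random convex combination of two classical
    ones; in the paper the distribution would be learned from data)."""
    w = rng.uniform()
    return w * minmod(r) + (1.0 - w) * superbee(r)

def limited_flux(F_lo, F_hi, u, i, rng):
    """Blend a monotone low-order flux with a high-order flux at interface i."""
    phi = sampled_limiter(smoothness_ratio(u, i), rng)
    return F_lo + phi * (F_hi - F_lo)

# usage on a step profile: near the discontinuity phi -> 0, so the blended
# flux falls back to the non-oscillatory low-order value
rng = np.random.default_rng(0)
u = np.array([1.0, 1.0, 1.0, 0.2, 0.2])
print(limited_flux(F_lo=0.5, F_hi=0.8, u=u, i=2, rng=rng))
```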
arXiv Detail & Related papers (2024-05-13T21:06:53Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, the convergence guarantees and generalizability of unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
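As a concrete illustration of deep unrolling (a generic LISTA-style example, not this paper's architecture): each "layer" executes one ISTA iteration for sparse recovery, with the step size and soft threshold promoted to trainable per-layer parameters.
```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unrolled_ista(y, A, steps, thresholds):
    """Run len(steps) unrolled ISTA iterations for min ||Ax - y||^2 + lam*||x||_1.
    `steps` and `thresholds` play the role of learned per-layer weights."""
    x = np.zeros(A.shape[1])
    for eta, tau in zip(steps, thresholds):
        x = soft_threshold(x - eta * A.T @ (A @ x - y), tau)
    return x

# usage with hand-set (untrained) per-layer parameters
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x_hat = unrolled_ista(y, A, steps=[1.0 / L] * 10, thresholds=[0.01] * 10)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```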
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models have non-convex training objectives that are difficult to analyze.
In this paper we examine the use of convex recovery models for neural network training.
We show that the stationary points of the non-convex training objective can be characterized as global optima of subsampled convex (Lasso-type) programs.
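Roughly, the construction can be sketched as follows (a simplified illustration under assumed notation, not the paper's exact program): fixing a subsample of ReLU activation patterns D_k = diag(1{X u_k >= 0}) turns two-layer network training into a Lasso-type convex problem over the pattern-masked features; the paper's actual formulation is a constrained group-Lasso rather than the plain Lasso used here.
```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.maximum(X @ rng.normal(size=10), 0.0)   # data generated by one ReLU neuron

# subsample activation patterns from random directions u_k
P = 50
U = rng.normal(size=(10, P))
masks = (X @ U >= 0).astype(float)             # (200, P) pattern indicators

# features: one pattern-masked copy of X per sampled pattern
Phi = np.hstack([masks[:, [k]] * X for k in range(P)])   # (200, 10 * P)

# the convex surrogate: an l1-regularized linear model over masked features
model = Lasso(alpha=1e-3, max_iter=50000).fit(Phi, y)
print("train MSE of the convex surrogate:",
      np.mean((model.predict(Phi) - y) ** 2))
```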
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- A variational neural network approach for glacier modelling with nonlinear rheology [1.4438155481047366]
We first formulate the solution of the non-Newtonian ice flow model as the minimizer of a variational integral with boundary constraints.
The solution is then approximated by a deep neural network whose loss function is the variational integral plus soft constraint from the mixed boundary conditions.
To address instability in real-world scaling, we re-normalize the input of the network at the first layer and balance the regularizing factors for each individual boundary.
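A minimal deep-Ritz-style sketch of this construction (with a 1D p-Laplacian standing in for the nonlinear ice rheology; the exponent, source term, and penalty weight are illustrative assumptions, not the paper's setup):
```python
import torch

# approximate the minimizer of J(u) = integral of (1/p)|u'|^p - f*u over (0, 1)
# with a small network, penalizing the Dirichlet conditions u(0) = u(1) = 0
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
p = 4.0 / 3.0   # Glen-law-like exponent p = 1 + 1/n with n = 3, illustrative

for it in range(2000):
    x = torch.rand(256, 1, requires_grad=True)      # Monte Carlo quadrature nodes
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    interior = ((du.abs() ** p) / p - u).mean()     # variational integrand, f = 1
    xb = torch.tensor([[0.0], [1.0]])
    boundary = (net(xb) ** 2).mean()                # soft boundary-condition penalty
    loss = interior + 100.0 * boundary              # balanced regularizing factor
    opt.zero_grad()
    loss.backward()
    opt.step()
```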
arXiv Detail & Related papers (2022-09-05T18:23:59Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot x = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
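A numeric sketch of this construction (with simple hand-chosen vector fields and controls rather than learned ones): integrate the system with piecewise-constant controls and apply the same flow to every point of the ensemble.
```python
import numpy as np

def F1(x):
    """Constant translation field."""
    return np.array([1.0, 0.0])

def F2(x):
    """Rotation field about the origin."""
    return np.array([-x[1], x[0]])

def flow(x0, controls, dt=0.01):
    """Explicit Euler integration of dot(x) = u1*F1(x) + u2*F2(x);
    `controls` is a (T, 2) array of values held constant on each subinterval."""
    x = x0.astype(float).copy()
    for u1, u2 in controls:
        x = x + dt * (u1 * F1(x) + u2 * F2(x))
    return x

# transport a small ensemble of points through one shared flow
rng = np.random.default_rng(0)
points = rng.normal(size=(5, 2))
controls = np.tile([0.5, 1.0], (100, 1))   # one unit of time
mapped = np.array([flow(p, controls) for p in points])
print(mapped)
```
Because every point is pushed through the same flow map, the result is (an approximation of) a diffeomorphism evaluated on the ensemble.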
arXiv Detail & Related papers (2021-10-24T08:57:46Z)
- Neural UpFlow: A Scene Flow Learning Approach to Increase the Apparent Resolution of Particle-Based Liquids [0.6882042556551611]
We present a novel up-resing technique for generating high-resolution liquids based on scene flow estimation using deep neural networks.
Our approach infers and synthesizes small- and large-scale details solely from a low-resolution particle-based liquid simulation.
arXiv Detail & Related papers (2021-06-09T15:36:23Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks offer improved accuracy and a significant reduction in memory consumption, but they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
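The well-posedness issue and its fix are easy to demonstrate in a generic fixed-point setting (the paper's non-Euclidean contraction conditions are sharper than the spectral-norm bound used here): if the layer's update map is a contraction, the implicit layer has a unique output that Picard iteration computes stably.
```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 8
W = rng.normal(size=(n, n))
W *= 0.9 / np.linalg.norm(W, 2)   # enforce ||W||_2 < 1: since tanh is
                                   # 1-Lipschitz, the update is a contraction
U = rng.normal(size=(n, m))
b = rng.normal(size=n)

def implicit_layer(x, tol=1e-10, max_iter=500):
    """Layer output is the unique solution z* of z = tanh(W z + U x + b),
    computed by fixed-point (Picard) iteration."""
    z = np.zeros(n)
    for _ in range(max_iter):
        z_new = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

print(implicit_layer(rng.normal(size=m))[:4])
```
Without the rescaling of W, the iteration may diverge or converge to different fixed points, which is exactly the ill-posedness the paper's framework rules out.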
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Solving frustrated Ising models using tensor networks [0.0]
We develop a framework to study frustrated Ising models in terms of infinite tensor networks.
We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks.
We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest neighbour interactions.
arXiv Detail & Related papers (2020-06-25T12:39:42Z)