Learning second-order TVD flux limiters using differentiable solvers
- URL: http://arxiv.org/abs/2503.09625v1
- Date: Tue, 11 Mar 2025 01:19:39 GMT
- Title: Learning second-order TVD flux limiters using differentiable solvers
- Authors: Chenyang Huang, Amal S. Sebastian, Venkatasubramanian Viswanathan,
- Abstract summary: This paper presents a data-driven framework for learning optimal second-order total variation diminishing (TVD) flux limiters via differentiable simulations. In our fully differentiable finite volume solvers, the limiter functions are replaced by neural networks. We show that a limiter trained solely on linear advection exhibits strong generalizability, surpassing the accuracy of most classical flux limiters.
- Score: 2.4746157841644267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a data-driven framework for learning optimal second-order total variation diminishing (TVD) flux limiters via differentiable simulations. In our fully differentiable finite volume solvers, the limiter functions are replaced by neural networks. By representing the limiter as a pointwise convex linear combination of the Minmod and Superbee limiters, we enforce both second-order accuracy and TVD constraints at all stages of training. Our approach leverages gradient-based optimization through automatic differentiation, allowing direct backpropagation of errors from numerical solutions to the limiter parameters. We demonstrate the effectiveness of this method on various hyperbolic conservation laws, including the linear advection equation, Burgers' equation, and the one-dimensional Euler equations. Remarkably, a limiter trained solely on linear advection exhibits strong generalizability, surpassing the accuracy of most classical flux limiters across a range of problems with shocks and discontinuities. The learned flux limiters can be readily integrated into existing computational fluid dynamics codes, and the proposed methodology also offers a flexible pathway to systematically develop and optimize flux limiters for complex flow problems.
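The parameterization described in the abstract is straightforward to sketch. Below is a minimal JAX illustration (my sketch, not the authors' code) of a limiter expressed as a pointwise convex combination of Minmod and Superbee, with the blending weight produced by a small neural network; `solver_loss` at the end is a hypothetical placeholder for the differentiable finite volume solver's error.

```python
# Minimal sketch: learned TVD limiter as a convex combination
#   phi_theta(r) = alpha(r) * minmod(r) + (1 - alpha(r)) * superbee(r).
# Since Minmod and Superbee bound Sweby's second-order TVD region from
# below and above, any pointwise convex combination stays inside it.
import jax
import jax.numpy as jnp

def minmod(r):
    return jnp.maximum(0.0, jnp.minimum(1.0, r))

def superbee(r):
    return jnp.maximum(0.0, jnp.maximum(jnp.minimum(2.0 * r, 1.0),
                                        jnp.minimum(r, 2.0)))

def init_params(key, width=16):
    k1, k2 = jax.random.split(key)
    return {"w1": jax.random.normal(k1, (1, width)) * 0.1,
            "b1": jnp.zeros(width),
            "w2": jax.random.normal(k2, (width, 1)) * 0.1,
            "b2": jnp.zeros(1)}

def alpha(params, r):
    # Tiny MLP with a sigmoid output keeps the blending weight in [0, 1].
    h = jnp.tanh(r[:, None] @ params["w1"] + params["b1"])
    return jax.nn.sigmoid(h @ params["w2"] + params["b2"])[:, 0]

def learned_limiter(params, r):
    a = alpha(params, r)
    return a * minmod(r) + (1.0 - a) * superbee(r)

# Because the limiter is differentiable in `params`, errors of the numerical
# solution can be backpropagated to the limiter, e.g. (hypothetical loss):
# grads = jax.grad(lambda p: solver_loss(p))(params)
```

The sigmoid guarantees the constraint alpha in [0, 1] by construction, which is what makes the TVD and second-order properties hold at every stage of training rather than only at convergence.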
Related papers
- Extremization to Fine Tune Physics Informed Neural Networks for Solving Boundary Value Problems [0.1874930567916036]
Theory of Functional Connections (TFC) is used to exactly impose initial and boundary conditions (IBCs) of (I)BVPs on PINNs.
We propose a modification to the TFC framework named Reduced TFC and show a significant improvement in the training and inference time of PINNs.
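As a hedged sketch of the core TFC idea this summary references: a "constrained expression" satisfies Dirichlet boundary conditions identically, for any choice of the free function. The code below shows the standard univariate TFC construction (not the paper's Reduced TFC variant); the function `N` is a stand-in for a PINN.

```python
# TFC-style constrained expression for a 1D Dirichlet BVP u(0)=a, u(1)=b.
# The boundary conditions hold exactly regardless of N, so a PINN loss only
# needs the residual of the differential equation.
import numpy as np

def constrained_expression(N, x, a, b):
    """u(x) = N(x) + (1-x)*(a - N(0)) + x*(b - N(1))  =>  u(0)=a, u(1)=b."""
    return N(x) + (1.0 - x) * (a - N(0.0)) + x * (b - N(1.0))

N = lambda x: np.sin(3.0 * x)          # arbitrary stand-in for a neural net
x = np.linspace(0.0, 1.0, 5)
u = constrained_expression(N, x, a=2.0, b=-1.0)
print(u[0], u[-1])                      # exactly 2.0 and -1.0
```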
arXiv Detail & Related papers (2024-06-07T23:25:13Z)
- Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as the loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
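To illustrate the mechanism only (this is my toy stand-in, not the paper's architecture), the sketch below shows why ascending the dual objective pushes the primal estimate toward feasibility on an equality-constrained QP; in the paper an ML model predicts the dual variables, whereas here plain gradient ascent on them plays that role.

```python
# For min_x 0.5 x'Qx + c'x  s.t.  Ax = b, the Lagrangian minimizer is
# x*(lam) = -Q^{-1}(c + A'lam), and the gradient of the dual objective is
# the primal residual A x*(lam) - b, so dual ascent restores feasibility.
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)             # positive definite
c, A, b = rng.normal(size=n), rng.normal(size=(m, n)), rng.normal(size=m)

def x_star(lam):
    return -np.linalg.solve(Q, c + A.T @ lam)

lam = np.zeros(m)
for _ in range(500):
    residual = A @ x_star(lam) - b      # gradient of the dual objective
    lam += 0.1 * residual               # dual ascent step
print(np.linalg.norm(A @ x_star(lam) - b))  # -> small: near primal feasible
```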
arXiv Detail & Related papers (2024-03-06T04:43:22Z)
- Further improving quantum algorithms for nonlinear differential equations via higher-order methods and rescaling [0.0]
We present three main improvements to existing quantum algorithms based on the Carleman linearisation technique.
By using a high-precision technique for the solution of the linearised differential equations, we achieve logarithmic dependence of the complexity on the error and near-linear dependence on time.
A rescaling technique can considerably reduce the cost, which would otherwise be exponential in the Carleman order for a system of ODEs.
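For concreteness, here is a small classical worked example (my own, with assumed coefficients) of the Carleman linearisation step underlying these algorithms: a scalar quadratic ODE is lifted to a truncated linear system that a linear-ODE solver, quantum or classical, can handle. It assumes SciPy is available for the matrix exponential.

```python
# Carleman linearisation of dx/dt = a*x + b*x^2. With y_k = x^k,
#   dy_k/dt = k*a*y_k + k*b*y_{k+1},
# and truncating at order N (y_{N+1} ~ 0) gives a linear system dy/dt = C y.
import numpy as np
from scipy.linalg import expm

def carleman_matrix(a, b, N):
    C = np.zeros((N, N))
    for k in range(1, N + 1):
        C[k - 1, k - 1] = k * a        # diagonal: k*a
        if k < N:
            C[k - 1, k] = k * b        # superdiagonal: k*b
    return C

a, b, N, x0, t = -1.0, 0.2, 8, 0.5, 1.0
C = carleman_matrix(a, b, N)
y0 = np.array([x0 ** k for k in range(1, N + 1)])

x_carleman = (expm(t * C) @ y0)[0]     # y_1 component approximates x(t)
# Closed-form solution of this Bernoulli-type ODE, for comparison:
x_exact = a * x0 * np.exp(a * t) / (a + b * x0 * (1.0 - np.exp(a * t)))
print(x_carleman, x_exact)
```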
arXiv Detail & Related papers (2023-12-15T03:52:44Z)
- Capturing the Diffusive Behavior of the Multiscale Linear Transport Equations by Asymptotic-Preserving Convolutional DeepONets [31.88833218777623]
We introduce two types of novel Asymptotic-Preserving Convolutional Deep Operator Networks (APCONs). The architecture employs multiple local convolution operations in place of a global heat kernel.
Our APCON methods possess a parameter count that is independent of the grid size and are capable of capturing the diffusive behavior of the linear transport problem.
arXiv Detail & Related papers (2023-06-28T03:16:45Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the MF limit that demonstrate distinctive behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z)
- Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence on the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
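A minimal sketch of this setup, under my own assumed vector fields: an ensemble of points is pushed through the flow of $\dot{x} = \sum_i F_i(x)\,u_i$ with piecewise-constant controls. Training the controls to match a target diffeomorphism on point pairs is omitted here.

```python
# Flow of a linear-control system applied to an ensemble of points,
# integrated with forward Euler under piecewise-constant controls.
import numpy as np

def flow(points, fields, controls, dt=0.01, steps_per_segment=10):
    x = points.copy()
    for u in controls:                  # u: one constant control vector
        for _ in range(steps_per_segment):
            v = sum(ui * F(x) for ui, F in zip(u, fields))
            x = x + dt * v
    return x

# Two simple vector fields on R^2: a rotation field and a translation field.
F1 = lambda x: np.stack([-x[:, 1], x[:, 0]], axis=1)           # rotation
F2 = lambda x: np.tile(np.array([1.0, 0.0]), (x.shape[0], 1))  # x_0 shift

pts = np.random.default_rng(1).normal(size=(100, 2))  # compact ensemble
controls = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
out = flow(pts, [F1, F2], controls)
```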
arXiv Detail & Related papers (2021-10-24T08:57:46Z)
- Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
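A simplified single-layer numpy sketch (mine, not the paper's code) of the self-normalizing idea: the exact $W^{-T}$ term in the gradient of $\log|\det W|$ is replaced by the transpose of a learned approximate inverse $R$, which is kept close to $W^{-1}$ by a self-supervised reconstruction loss.

```python
# One linear flow layer z = W x with a standard-normal base density.
# Exact gradient of log p(x) in W is  -z x^T + W^{-T};  the O(D^3) term
# W^{-T} is replaced by R^T, so each update costs only O(D^2).
import numpy as np

rng = np.random.default_rng(0)
D, lr = 16, 1e-3
W = np.eye(D) + 0.01 * rng.normal(size=(D, D))   # forward weight
R = np.eye(D) - 0.01 * rng.normal(size=(D, D))   # learned approximate inverse

for _ in range(200):
    x = rng.normal(size=D)
    z = W @ x
    grad_W = np.outer(-z, x) + R.T               # approx. d log p(x) / dW
    W += lr * grad_W                             # ascent on log-likelihood
    # Descent on the reconstruction loss ||R z - x||^2 keeps R ~ W^{-1};
    # its gradient in R is 2 (R z - x) z^T.
    R -= lr * 2.0 * np.outer(R @ z - x, z)
```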
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
- Limiting Behaviors of Nonconvex-Nonconcave Minimax Optimization via Continuous-Time Systems [10.112779201155005]
We study the limiting behaviors of three classic minimax algorithms: gradient descent ascent (GDA), alternating GDA (AGDA), and the extragradient method (EGM).
Numerically, we observe that all limiting behaviors can arise in Generative Adversarial Networks (GANs) and are easily demonstrated for a range of GAN problems.
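The contrast between these dynamics is easy to reproduce on the classic bilinear game $f(x, y) = xy$ (my own minimal experiment with an assumed step size, not one of the paper's examples): GDA spirals outward, AGDA cycles at constant distance, and EGM converges to the equilibrium.

```python
# GDA, AGDA, and EGM on min_x max_y f(x, y) = x*y, equilibrium at (0, 0).
import numpy as np

eta, steps = 0.1, 500

def gda(x, y):                           # simultaneous updates: diverges
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def agda(x, y):                          # alternating updates: cycles
    for _ in range(steps):
        x = x - eta * y
        y = y + eta * x                  # uses the already-updated x
    return x, y

def egm(x, y):                           # extragradient: converges
    for _ in range(steps):
        xh, yh = x - eta * y, y + eta * x        # extrapolation step
        x, y = x - eta * yh, y + eta * xh        # update with midpoint grads
    return x, y

for name, alg in [("GDA", gda), ("AGDA", agda), ("EGM", egm)]:
    print(name, np.hypot(*alg(1.0, 1.0)))        # distance from equilibrium
```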
arXiv Detail & Related papers (2020-10-20T21:14:51Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
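A heavily schematic sketch of that key feature (my stand-in; the paper's smoothing and variance reduction machinery are omitted): a conditional-gradient iteration that touches only one randomly sampled constraint per step, folded in through a quadratic penalty, over the probability simplex.

```python
# Stochastic Frank-Wolfe sketch: minimize f(x) = 0.5*||x - c||^2 over the
# simplex, with side constraints a_i' x <= b_i penalized one sample at a time.
import numpy as np

rng = np.random.default_rng(0)
n, m, rho = 10, 50, 50.0
c = rng.normal(size=n)
A, b = rng.normal(size=(m, n)), 0.5 * np.ones(m)

x = np.ones(n) / n                      # start at the simplex barycenter
for t in range(1, 2001):
    i = rng.integers(m)                 # process a single sampled constraint
    viol = max(A[i] @ x - b[i], 0.0)
    grad = (x - c) + rho * m * viol * A[i]   # m * E[...] = full penalty grad
    s = np.zeros(n)
    s[np.argmin(grad)] = 1.0            # simplex LMO: best vertex
    x += (2.0 / (t + 2.0)) * (s - x)    # classic Frank-Wolfe step size
print(f"max constraint violation: {np.max(A @ x - b):.3f}")
```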
arXiv Detail & Related papers (2020-07-07T21:26:35Z)