Lagrangian Flow Networks for Conservation Laws
- URL: http://arxiv.org/abs/2305.16846v2
- Date: Wed, 13 Dec 2023 21:22:14 GMT
- Title: Lagrangian Flow Networks for Conservation Laws
- Authors: F. Arend Torres, Marcello Massimo Negri, Marco Inversi, Jonathan
Aellen, Volker Roth
- Abstract summary: We introduce Lagrangian Flow Networks (LFlows) for modeling fluid densities and velocities continuously in space and time.
LFlows show higher predictive accuracy in density modeling tasks compared to competing models in 2D and 3D.
As a real-world application, we model bird migration based on sparse weather radar measurements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Lagrangian Flow Networks (LFlows) for modeling fluid densities
and velocities continuously in space and time. By construction, the proposed
LFlows satisfy the continuity equation, a PDE describing mass conservation in
its differentiable form. Our model is based on the insight that solutions to
the continuity equation can be expressed as time-dependent density
transformations via differentiable and invertible maps. This follows from
classical theory of the existence and uniqueness of Lagrangian flows for smooth
vector fields. Hence, we model fluid densities by transforming a base density
with parameterized diffeomorphisms conditioned on time. The key benefit
compared to methods relying on numerical ODE solvers or PINNs is that the
analytic expression of the velocity is always consistent with changes in
density. Furthermore, we require neither expensive numerical solvers, nor
additional penalties to enforce the PDE. LFlows show higher predictive accuracy
in density modeling tasks compared to competing models in 2D and 3D, while
being computationally efficient. As a real-world application, we model bird
migration based on sparse weather radar measurements.
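The abstract's core construction (a base density pushed through a time-conditioned diffeomorphism, with the velocity read off analytically from the same map) can be illustrated with a toy 1D example. The affine map, Gaussian base density, and finite-difference check below are illustrative stand-ins chosen for this sketch, not the paper's actual architecture:

```python
import numpy as np

def base_density(z):
    # Standard normal base density rho_0(z)
    return np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)

def phi_inv(x, t):
    # Inverse of the time-conditioned diffeomorphism phi_t(z) = e^t * z + t^2
    return (x - t**2) * np.exp(-t)

def density(x, t):
    # Change of variables: rho(x,t) = rho_0(phi_t^{-1}(x)) * |d phi_t^{-1} / dx|
    return base_density(phi_inv(x, t)) * np.exp(-t)

def velocity(x, t):
    # v(x,t) = (d/dt phi_t)(phi_t^{-1}(x)); for this affine map,
    # v = (x - t^2) + 2t, with no ODE solver involved.
    return (x - t**2) + 2 * t

# Numerically verify the continuity equation: d_t rho + d_x (rho * v) = 0
x, t, h = 0.7, 0.3, 1e-4
d_t = (density(x, t + h) - density(x, t - h)) / (2 * h)
flux = lambda xx: density(xx, t) * velocity(xx, t)
d_x = (flux(x + h) - flux(x - h)) / (2 * h)
residual = d_t + d_x
print(abs(residual))  # close to zero (only finite-difference truncation error)
```

Because the density and the velocity are both derived from the same invertible map, the continuity equation holds by construction; the residual printed above measures only the discretization error of the central differences.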
Related papers
- Diffusion Density Estimators [0.0]
We introduce a new, highly parallelizable method that computes log densities without the need to solve a flow.
Our approach is based on estimating a path integral by Monte Carlo, in a manner identical to the simulation-free training of diffusion models.
arXiv Detail & Related papers (2024-10-09T15:21:53Z)
- PINF: Continuous Normalizing Flows for Physics-Constrained Deep Learning [8.000355537589224]
In this paper, we introduce Physics-Informed Normalizing Flows (PINF), a novel extension of continuous normalizing flows.
Our method, which is mesh-free and causality-free, can efficiently solve high dimensional time-dependent and steady-state Fokker-Planck equations.
arXiv Detail & Related papers (2023-09-26T15:38:57Z)
- Tensor network reduced order models for wall-bounded flows [0.0]
We introduce a widely applicable tensor network-based framework for developing reduced order models.
We consider the incompressible Navier-Stokes equations and the lid-driven cavity in two spatial dimensions.
arXiv Detail & Related papers (2023-03-06T10:33:00Z)
- Deep Learning Closure Models for Large-Eddy Simulation of Flows around Bluff Bodies [0.0]
A deep learning model for large-eddy simulation (LES) is developed and evaluated for incompressible flows around a rectangular cylinder at moderate Reynolds numbers.
The DL-LES model is trained using adjoint PDE optimization methods to match, as closely as possible, direct numerical simulation (DNS) data.
We study the accuracy of the DL-LES model for predicting the drag coefficient, mean flow, and Reynolds stress.
arXiv Detail & Related papers (2022-08-06T11:25:50Z)
- Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [64.94020639760026]
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow).
MIOFlow learns continuous population dynamics from static snapshot samples taken at sporadic timepoints.
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
arXiv Detail & Related papers (2022-06-29T22:19:03Z)
- Self-Consistency of the Fokker-Planck Equation [117.17004717792344]
The Fokker-Planck equation governs the density evolution of the Itô process.
The ground-truth velocity field can be shown to be the solution of a fixed-point equation.
In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields.
arXiv Detail & Related papers (2022-06-02T03:44:23Z)
- Neural Flows: Efficient Alternative to Neural ODEs [8.01886971335823]
We propose an alternative by directly modeling the solution curves - the flow of an ODE - with a neural network.
This immediately eliminates the need for expensive numerical solvers while still maintaining the modeling capability of neural ODEs.
arXiv Detail & Related papers (2021-10-25T15:24:45Z)
- Moser Flow: Divergence-based Generative Modeling on Manifolds [49.04974733536027]
Moser Flow (MF) is a new class of generative models within the family of continuous normalizing flows (CNF).
MF does not require invoking or backpropagating through an ODE solver during training.
We demonstrate for the first time the use of flow models for sampling from general curved surfaces.
arXiv Detail & Related papers (2021-08-18T09:00:24Z)
- Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs).
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient diffusion and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
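Several entries above (Neural Flows in particular, and LFlows itself) share the idea of parameterizing the solution curve of an ODE directly with a network, so no numerical solver is ever invoked. A minimal sketch of that idea, with a tiny fixed random network standing in for a trained model and a common trick (multiplying the correction term by t) to enforce the initial condition:

```python
import numpy as np

# A tiny fixed-weight MLP standing in for a learned network
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2 = rng.normal(size=8)

def net(x0, t):
    h = np.tanh(W1 @ np.array([x0, t]) + b1)
    return W2 @ h

def flow(x0, t):
    # Solution curve modeled directly: F(x0, 0) = x0 holds exactly,
    # because the network's correction term is multiplied by t.
    return x0 + t * net(x0, t)

def velocity_along_curve(x0, t, h=1e-5):
    # The implied velocity is the time derivative of the curve,
    # approximated here by central differences instead of autodiff.
    return (flow(x0, t + h) - flow(x0, t - h)) / (2 * h)

x0 = 1.5
assert flow(x0, 0.0) == x0  # initial condition satisfied without any solver
print(flow(x0, 1.0), velocity_along_curve(x0, 0.5))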
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.