A DeepParticle method for learning and generating aggregation patterns
in multi-dimensional Keller-Segel chemotaxis systems
- URL: http://arxiv.org/abs/2209.00109v2
- Date: Mon, 29 Jan 2024 07:25:36 GMT
- Title: A DeepParticle method for learning and generating aggregation patterns
in multi-dimensional Keller-Segel chemotaxis systems
- Authors: Zhongjian Wang, Jack Xin, Zhiwen Zhang
- Abstract summary: We study a regularized interacting particle method for computing aggregation patterns and near-singular solutions of a Keller-Segel (KS) chemotaxis system in two and three space dimensions.
We further develop the DeepParticle (DP) method to learn and generate solutions under variations of physical parameters.
- Score: 3.6184545598911724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a regularized interacting particle method for computing aggregation
patterns and near-singular solutions of a Keller-Segel (KS) chemotaxis system
in two and three space dimensions, then further develop the DeepParticle (DP)
method to learn and generate solutions under variations of physical parameters.
The KS solutions are approximated as empirical measures of particles which
self-adapt to the high gradient part of solutions. We utilize the
expressiveness of deep neural networks (DNNs) to represent the transform of
samples from a given initial (source) distribution to a target distribution at
finite time T prior to blowup without assuming invertibility of the transforms.
In the training stage, we update the network weights by minimizing a discrete
2-Wasserstein distance between the input and target empirical measures. To
reduce computational cost, we develop an iterative divide-and-conquer algorithm
to find the optimal transition matrix in the Wasserstein distance. We present
numerical results of the DP framework for successful learning and generation of KS
dynamics in the presence of laminar and chaotic flows. The physical parameter
in this work is either the small diffusivity of chemo-attractant or the
reciprocal of the flow amplitude in the advection-dominated regime.
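To make the training objective concrete, the sketch below is a minimal PyTorch reading of a DeepParticle-style training step: a network T_theta pushes source samples forward, and the weights are updated to reduce a discrete 2-Wasserstein distance to the target samples on each mini-batch. For simplicity the optimal pairing is computed exactly with SciPy's linear assignment solver rather than the paper's iterative divide-and-conquer search for the transition matrix; the network size, data, and schedule are placeholders.

# Minimal DeepParticle-style sketch (illustrative, not the authors' code).
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment

dim = 2                                  # spatial dimension of the particles
T_theta = nn.Sequential(                 # transform network: source sample -> pushed-forward sample
    nn.Linear(dim, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, dim),
)
opt = torch.optim.Adam(T_theta.parameters(), lr=1e-3)

def w2_loss(pushed, target):
    # Discrete 2-Wasserstein distance between two equal-size point clouds.
    # The pairing is found exactly with a linear assignment solver; the paper
    # instead uses an iterative divide-and-conquer search for the transition
    # matrix to reduce the cost.
    cost = torch.cdist(pushed, target) ** 2
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    rows, cols = torch.as_tensor(rows), torch.as_tensor(cols)
    return cost[rows, cols].mean()

def train_step(source_batch, target_batch):
    pushed = T_theta(source_batch)       # push source particles through the map
    loss = w2_loss(pushed, target_batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic stand-in data: in the paper the target would be the particle
# approximation of the KS solution at time T; here it is a shifted Gaussian.
source = torch.randn(256, dim)
target = torch.randn(256, dim) * 0.3 + 2.0
for _ in range(200):
    train_step(source, target)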
Related papers
- A hybrid FEM-PINN method for time-dependent partial differential equations [9.631238071993282]
We present a hybrid numerical method for solving evolution partial differential equations (PDEs) by merging the time finite element method with deep neural networks.
The advantages of such a hybrid formulation are twofold: statistical errors are avoided for the integral in the time direction, and the neural network's output can be regarded as a set of reduced spatial basis functions.
arXiv Detail & Related papers (2024-09-04T15:28:25Z) - Solving the Discretised Multiphase Flow Equations with Interface
Capturing on Structured Grids Using Machine Learning Libraries [0.6299766708197884]
This paper solves the discretised multiphase flow equations using tools and methods from machine-learning libraries.
For the first time, finite element discretisations of multiphase flows can be solved using an approach based on (untrained) convolutional neural networks.
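As a rough illustration of the underlying idea (and not of the paper's multiphase solver), the snippet below expresses the standard 5-point Laplacian stencil as a fixed, untrained 2-D convolution and uses it inside a Jacobi iteration for a Poisson problem; the grid size, right-hand side, and iteration count are arbitrary.

# Illustrative only: a finite-difference stencil expressed as an untrained
# convolution, in the spirit of solving discretised PDEs with ML-library tools.
import torch
import torch.nn.functional as F

n, h = 64, 1.0 / 64                      # grid size and spacing
f = torch.ones(1, 1, n, n)               # right-hand side of -Laplace(u) = f
u = torch.zeros(1, 1, n, n)              # initial guess; zero Dirichlet boundary via padding

# Sum of the four neighbours (no centre term), stored as a fixed conv kernel.
stencil = torch.tensor([[0., 1., 0.],
                        [1., 0., 1.],
                        [0., 1., 0.]]).view(1, 1, 3, 3)

for _ in range(2000):                    # Jacobi: u_ij = (sum of neighbours + h^2 f_ij) / 4
    neighbours = F.conv2d(F.pad(u, (1, 1, 1, 1)), stencil)
    u = (neighbours + h * h * f) / 4.0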
arXiv Detail & Related papers (2024-01-12T18:42:42Z) - Adaptive importance sampling for Deep Ritz [7.123920027048777]
We introduce an adaptive sampling method for the Deep Ritz method aimed at solving partial differential equations (PDEs).
One network is employed to approximate the solution of PDEs, while the other one is a deep generative model used to generate new collocation points to refine the training set.
Compared to the original Deep Ritz method, the proposed adaptive method improves accuracy, especially for problems characterized by low regularity and high dimensionality.
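For intuition only, here is a toy sketch of adaptive collocation for a Deep Ritz-type loss on a 1-D Poisson problem. The paper trains a deep generative model to propose new collocation points; the sketch substitutes a much simpler step that resamples points in proportion to the strong-form residual and corrects the energy estimate with importance weights. The problem, network, and schedule are all assumptions made for the example.

# Toy sketch of residual-driven adaptive collocation for a Deep Ritz-type loss
# on -u''(x) = f(x), u(0) = u(1) = 0, with f = pi^2 sin(pi x).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)

def u_and_du(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return u, du, x

x_col = torch.rand(512, 1)               # start from uniform collocation points
p_col = torch.ones(512, 1)               # their sampling density on (0, 1)

for step in range(5000):
    u, du, x = u_and_du(x_col.clone())
    # Deep Ritz energy with importance weights 1/p(x), plus a boundary penalty.
    energy = ((0.5 * du ** 2 - f(x) * u) / p_col).mean()
    bc = net(torch.tensor([[0.0], [1.0]])).pow(2).sum()
    loss = energy + 100.0 * bc
    opt.zero_grad()
    loss.backward()
    opt.step()

    if step % 500 == 499:
        # Refine the collocation set: resample grid points proportionally to the
        # strong-form residual |u'' + f| and record the new sampling density.
        grid = torch.linspace(0.0, 1.0, 257)[1:-1].unsqueeze(1)
        u_g, du_g, x_g = u_and_du(grid.clone())
        d2u = torch.autograd.grad(du_g.sum(), x_g)[0]
        res = (d2u + f(grid)).abs().squeeze(1).detach() + 1e-6
        probs = res / res.sum()
        idx = torch.multinomial(probs, 512, replacement=True)
        x_col = grid[idx].detach()
        p_col = (probs[idx] * grid.shape[0]).unsqueeze(1).detach()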
arXiv Detail & Related papers (2023-10-26T06:35:08Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training neural PDE solvers without supervision.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
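As a minimal illustration of the probabilistic representation itself (the paper goes further and trains a neural solver on such estimates), the snippet below evaluates the solution of a constant-coefficient convection-diffusion equation at a single point by averaging the initial data over an ensemble of random particle paths (Feynman-Kac); the equation and parameters are chosen arbitrarily.

# Illustrative particle estimate, not the paper's solver:
# u_t + v . grad(u) = nu * Laplace(u),  u(x, 0) = g(x),  constant v and nu.
# The solution at (x, t) is the expectation of g over random particle paths
# dX_s = -v ds + sqrt(2 nu) dW_s started at X_0 = x.
import numpy as np

rng = np.random.default_rng(0)
nu, v = 0.05, np.array([1.0, 0.5])               # diffusivity and advection velocity
g = lambda x: np.exp(-np.sum(x ** 2, axis=-1))   # initial condition

def u_estimate(x, t, n_particles=100_000, n_steps=50):
    dt = t / n_steps
    paths = np.tile(x, (n_particles, 1)).astype(float)
    for _ in range(n_steps):                     # Euler-Maruyama steps of the particle SDE
        noise = rng.standard_normal(paths.shape)
        paths += -v * dt + np.sqrt(2.0 * nu * dt) * noise
    return g(paths).mean()                       # Monte Carlo average over the ensemble

print(u_estimate(np.array([0.0, 0.0]), t=1.0))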
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss how the nonlinearity of the filtering model equations guides the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution of the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - DeepParticle: learning invariant measure by a deep neural network
minimizing Wasserstein distance on data generated from an interacting
particle method [3.6310242206800667]
We introduce the so-called DeepParticle method to learn and generate invariant measures of dynamical systems.
We use deep neural networks (DNNs) to represent the transform of samples from a given input (source) distribution to an arbitrary target distribution.
In training, we update the network weights to minimize a discrete Wasserstein distance between the input and target samples.
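Most of the cost of such a discrete Wasserstein loss lies in pairing the input samples with the target samples. Purely as an illustration of iteratively improving such a pairing, and not of the divide-and-conquer algorithm used in these papers, the sketch below starts from the identity permutation and accepts pairwise swaps whenever they lower the total squared-distance cost.

# Illustration only: local-search refinement of the permutation that pairs
# input samples with target samples in a discrete 2-Wasserstein objective.
import numpy as np

def refine_pairing(source, target, n_sweeps=20, seed=0):
    # Returns a permutation `perm` such that source[i] is matched to target[perm[i]].
    rng = np.random.default_rng(seed)
    n = len(source)
    perm = np.arange(n)                          # start from the identity pairing
    cost = lambda i, j: np.sum((source[i] - target[j]) ** 2)
    for _ in range(n_sweeps):
        for i in rng.permutation(n):             # visit particles in random order
            j = rng.integers(n)
            # Swap the partners of i and j if doing so lowers the total cost.
            old = cost(i, perm[i]) + cost(j, perm[j])
            new = cost(i, perm[j]) + cost(j, perm[i])
            if new < old:
                perm[i], perm[j] = perm[j], perm[i]
    return perm

source = np.random.default_rng(1).normal(size=(256, 2))
target = np.random.default_rng(2).normal(loc=2.0, size=(256, 2))
perm = refine_pairing(source, target)
w2_sq = np.mean(np.sum((source - target[perm]) ** 2, axis=1))  # approximate squared W2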
arXiv Detail & Related papers (2021-11-02T03:48:58Z) - ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
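The connection invoked here is the standard one: a residual block whose update is scaled by a step size is exactly one explicit Euler step of a flow ODE dx/dt = v(x, t). The sketch below shows that reading for a generic point-cloud flow; it is not the ResNet-LDDMM architecture itself.

# A residual network read as an explicit Euler discretisation of the flow ODE
# dx/dt = v(x, t): each block applies x <- x + h * v_k(x). Generic sketch.
import torch
import torch.nn as nn

class EulerResNet(nn.Module):
    def __init__(self, dim=3, width=64, n_steps=10):
        super().__init__()
        self.h = 1.0 / n_steps                   # time step of the discretised flow
        self.velocity_fields = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, width), nn.Tanh(), nn.Linear(width, dim))
            for _ in range(n_steps)
        )

    def forward(self, x):
        for v_k in self.velocity_fields:         # one Euler step per residual block
            x = x + self.h * v_k(x)
        return x

# Usage: transport a cloud of 3-D points through the (untrained) flow.
points = torch.randn(1000, 3)
warped = EulerResNet()(points)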
arXiv Detail & Related papers (2021-02-16T04:07:13Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods to simulate physical systems is formulating physics-based data in a form suited to neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Compressive MRI quantification using convex spatiotemporal priors and
deep auto-encoders [2.5060548079588516]
We propose a dictionary-free pipeline for multi-parametric image computing.
Our approach has two stages based on compressed sensing reconstruction and learned quantitative inference.
arXiv Detail & Related papers (2020-01-23T17:15:42Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme for approximating real data densities.
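For background intuition only: the Wasserstein gradient flow of the relative entropy KL(rho || exp(-E)) is a Fokker-Planck equation, and its standard first-order particle discretisation is overdamped Langevin dynamics. The toy sketch below shows that particle view for a double-well energy; it does not implement the paper's second-order scheme.

# Particle view of the Wasserstein gradient flow of KL(rho || exp(-E)):
# overdamped Langevin dynamics dX = -grad E(X) dt + sqrt(2) dW.
import numpy as np

rng = np.random.default_rng(0)
grad_E = lambda x: x ** 3 - x        # energy E(x) = x^4/4 - x^2/2 (double well)

x = rng.normal(size=10_000)          # particle ensemble approximating rho_0
dt = 1e-3
for _ in range(20_000):              # unadjusted Langevin (Euler-Maruyama) steps
    x += -grad_E(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
# x now approximately samples the density proportional to exp(-E(x))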
arXiv Detail & Related papers (2019-10-31T02:26:20Z)