Application of machine learning technique for a fast forecast of
aggregation kinetics in space-inhomogeneous systems
- URL: http://arxiv.org/abs/2312.04660v1
- Date: Thu, 7 Dec 2023 19:50:40 GMT
- Title: Application of machine learning technique for a fast forecast of
aggregation kinetics in space-inhomogeneous systems
- Authors: M.A. Larchenko, R.R. Zagidullin, V.V. Palyulin, N.V. Brilliantov
- Abstract summary: We show how to reduce the amount of direct computations with the use of modern machine learning (ML) techniques.
We demonstrate that the ML predictions for the space distribution of aggregates and their size distribution require drastically less computation time and agree fairly well with the results of direct numerical simulations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling of aggregation processes in space-inhomogeneous systems is
extremely challenging numerically, since the complicated aggregation equations
-- the Smoluchowski equations -- must be solved at each space point along with
the computation of particle propagation. A low-rank approximation of the
aggregation kernels can significantly speed up the solution of the Smoluchowski
equations, while particle propagation can be computed in parallel. Yet
simulations with many aggregate sizes remain quite resource-demanding. Here, we
explore a way to reduce the amount of direct computation with the use of modern
machine learning (ML) techniques. Namely, we propose to replace the actual
numerical solution of the Smoluchowski equations with the respective density
transformations learned with a conditional normalising flow. We demonstrate
that the ML predictions for the space distribution of aggregates and their size
distribution require drastically less computation time and agree fairly well
with the results of direct numerical simulations. Such a quick forecast of the
space-dependent particle size distribution could be important in practice,
especially for the online prediction and visualisation of pollution processes,
providing a tool with a reasonable tradeoff between prediction accuracy and
computational time.
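For reference, the direct computation that the ML model replaces can be illustrated with a minimal solver for the discrete Smoluchowski coagulation equations. The sketch below is a hedged illustration (not the authors' low-rank code) for the constant kernel K_ij = 1, where the total cluster number is known to decay as 1/(1 + t/2) from a monodisperse start:

```python
import numpy as np

def smoluchowski_constant_kernel(n0, t_end, dt):
    """Forward-Euler integration of the discrete Smoluchowski equations
    with constant kernel K_ij = 1:
        dn_k/dt = 0.5 * sum_{i+j=k} n_i n_j - n_k * sum_j n_j
    n0[k-1] holds the initial concentration of size-k clusters.
    """
    n = np.array(n0, dtype=float)
    N = len(n)
    for _ in range(int(round(t_end / dt))):
        total = n.sum()
        gain = np.zeros(N)
        for k in range(2, N + 1):
            i = np.arange(1, k)                  # partner sizes 1..k-1
            gain[k - 1] = 0.5 * np.dot(n[i - 1], n[k - i - 1])
        loss = n * total
        n = n + dt * (gain - loss)
    return n

# Monodisperse start: only monomers, concentration 1.
N = 200
n0 = np.zeros(N); n0[0] = 1.0
n = smoluchowski_constant_kernel(n0, t_end=1.0, dt=1e-3)
```

The total mass sum_k k*n_k stays conserved up to truncation at the largest tracked size, while the total number n.sum() follows the analytic 1/(1 + t/2) decay.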
Related papers
- MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet).
In particular, we design a convolutional filter based on the structure of finite difference with a small number of parameters to optimize.
A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction.
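As a hedged sketch of the kind of 4th-order Runge-Kutta step such a Physics Block embeds (generic classical RK4, not the MultiPDENet implementation):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Example: exponential decay dy/dt = -y, y(0) = 1, integrated to t = 1.
f = lambda t, y: -y
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(f, t, y, h)
    t += h
```

The global error of RK4 scales as O(h^4), which is why a fine-time-scale integrator can anchor coarser learned steps.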
arXiv Detail & Related papers (2025-01-27T12:15:51Z)
- Parallel simulation for sampling under isoperimetry and score-based diffusion models [56.39904484784127]
As data size grows, reducing the iteration cost becomes an important goal.
Inspired by the success of the parallel simulation of the initial value problem in scientific computation, we propose parallel Picard methods for sampling tasks.
Our work highlights the potential advantages of simulation methods in scientific computation for dynamics-based sampling and diffusion models.
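The Picard idea can be sketched for a scalar initial value problem: each sweep updates the whole trajectory at once, so the function evaluations within a sweep parallelize over the time grid. This is an illustrative sketch under that reading, not the paper's sampler:

```python
import numpy as np

def picard_solve(f, y0, t, sweeps):
    """Picard iteration for y' = f(y): each sweep applies
    y_{m+1}(t) = y0 + int_0^t f(y_m(s)) ds on the full grid,
    so integrand evaluations within a sweep are parallel over t."""
    y = np.full_like(t, y0)
    for _ in range(sweeps):
        g = f(y)                                    # parallel over the grid
        # cumulative trapezoidal integral from t[0] to each grid point
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        y = y0 + integral
    return y

# y' = y, y(0) = 1 on [0, 1]; Picard sweeps converge to exp(t).
t = np.linspace(0.0, 1.0, 1001)
y = picard_solve(lambda y: y, 1.0, t, sweeps=30)
```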
arXiv Detail & Related papers (2024-12-10T11:50:46Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
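A minimal illustration of the probabilistic representation itself (plain Feynman-Kac averaging over Brownian particles for the heat equation, not the paper's neural solver):

```python
import numpy as np

def heat_mc(u0, x, t, D=1.0, n_samples=1_000_000, seed=0):
    """Probabilistic solution of u_t = D u_xx via Feynman-Kac:
    u(x, t) = E[u0(x + sqrt(2 D t) Z)], Z ~ N(0, 1).
    The macroscopic field is an ensemble average over random particles."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_samples)
    return u0(x + np.sqrt(2.0 * D * t) * z).mean()

# Initial condition sin(x); the exact solution is exp(-D t) sin(x).
est = heat_mc(np.sin, x=1.0, t=0.5)
```

The Monte Carlo error decays as 1/sqrt(n_samples), independent of spatial dimension, which is what makes the particle view attractive for training data.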
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Accelerating Part-Scale Simulation in Liquid Metal Jet Additive Manufacturing via Operator Learning [0.0]
Part-scale predictions require many small-scale simulations.
A model describing droplet coalescence for LMJ may include coupled incompressible fluid flow, heat transfer, and phase change equations.
We apply an operator learning approach to learn a mapping between initial and final states of the droplet coalescence process.
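A stripped-down stand-in for such an initial-to-final-state operator: for a linear PDE like the heat equation, a least-squares linear map already plays the role of the learned operator. This is illustrative only; the paper trains neural operators on a far more complex coupled model:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64                                    # grid points on [0, 2*pi)
x = np.linspace(0, 2 * np.pi, M, endpoint=False)
k = np.fft.fftfreq(M, d=1.0 / M)          # integer wavenumbers

def evolve(u0, t=0.1):
    """Exact spectral solution of the periodic heat equation u_t = u_xx."""
    return np.real(np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(u0)))

# Training pairs: random initial states and their evolved states.
U0 = rng.standard_normal((200, M))
U1 = np.array([evolve(u) for u in U0])

# "Learn" the solution operator as a linear map via least squares.
G, *_ = np.linalg.lstsq(U0, U1, rcond=None)

u_test = np.sin(3 * x)                    # held-out initial state
pred = u_test @ G
exact = evolve(u_test)
```

Because the dynamics here are linear, the fitted map recovers the true operator essentially exactly; neural operators generalize this mapping to nonlinear physics.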
arXiv Detail & Related papers (2022-02-02T17:24:16Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient diffusion and compute its density.
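A minimal sketch of what makes an ICNN input-convex: the weights multiplying hidden activations are constrained non-negative, so each layer is a non-decreasing convex function of a convex function, and the network is provably convex in its input. This is an architectural illustration, not the paper's JKO discretization:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 3, 16

W0 = rng.standard_normal((h, d)); b0 = rng.standard_normal(h)
Wz = np.abs(rng.standard_normal((h, h)))   # non-negativity constraint
Wx = rng.standard_normal((h, d)); b1 = rng.standard_normal(h)
wout = np.abs(rng.standard_normal(h))      # non-negative output weights

def icnn(x):
    """Two-layer input convex neural network f: R^d -> R."""
    z1 = np.maximum(W0 @ x + b0, 0.0)
    z2 = np.maximum(Wz @ z1 + Wx @ x + b1, 0.0)
    return wout @ z2

# Numerical midpoint-convexity check along random pairs of inputs.
ok = all(
    icnn(0.5 * (a + b)) <= 0.5 * (icnn(a) + icnn(b)) + 1e-8
    for a, b in (rng.standard_normal((2, d)) for _ in range(100))
)
```

Convexity in the input is what lets the gradient of an ICNN parametrize an optimal transport map in JKO-type schemes.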
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- Deep learning approaches to surrogates for solving the diffusion equation for mechanistic real-world simulations [0.0]
In medical, biological, physical and engineered models the numerical solution of partial differential equations (PDEs) can make simulations impractically slow.
Machine learning surrogates, neural networks trained to provide approximate solutions to such complicated numerical problems, can often provide speed-ups of several orders of magnitude compared to direct calculation.
We use a Convolutional Neural Network to approximate the stationary solution to the diffusion equation in the case of two equal-diameter, circular, constant-value sources.
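The ground-truth setup described here can be sketched directly: a Jacobi iteration for the stationary diffusion (Laplace) equation with two equal-diameter, circular, constant-value sources produces the fields such a CNN surrogate would be trained on. The geometry below is an assumed minimal configuration, not the authors' exact setup:

```python
import numpy as np

def stationary_diffusion(n=64, iters=5000):
    """Jacobi iteration for the stationary diffusion equation on an
    n x n grid with zero boundaries and two circular u = 1 sources."""
    u = np.zeros((n, n))
    yy, xx = np.mgrid[0:n, 0:n]
    r = n // 10
    src = ((xx - n // 4) ** 2 + (yy - n // 2) ** 2 < r ** 2) | \
          ((xx - 3 * n // 4) ** 2 + (yy - n // 2) ** 2 < r ** 2)
    for _ in range(iters):
        u[src] = 1.0                       # hold sources at constant value
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2])
    u[src] = 1.0
    return u

u = stationary_diffusion()
```

A surrogate network replaces the thousands of Jacobi sweeps with a single forward pass, which is where the orders-of-magnitude speed-up comes from.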
arXiv Detail & Related papers (2021-02-10T16:15:17Z)
- Particles to Partial Differential Equations Parsimoniously [0.0]
Coarse-grained effective Partial Differential Equations can lead to considerable savings in computation-intensive tasks like prediction or control.
We propose a framework combining artificial neural networks with multiscale computation, in the form of equation-free numerics.
We illustrate our approach by extracting coarse-grained evolution equations from particle-based simulations with a priori unknown macro-scale variable.
arXiv Detail & Related papers (2020-11-09T15:51:24Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
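The physics-informed idea, minimizing the equation residual at mesh-free collocation points, can be sketched without a network by using a polynomial ansatz in its place. This is a deliberately simplified stand-in for a PINN, not an implementation of one, applied to u' + u = 0 with u(0) = 1 (exact solution exp(-t)):

```python
import numpy as np

deg = 8
t = np.linspace(0.0, 1.0, 50)              # mesh-free collocation points

# The residual of u' + u for u(t) = sum_j c_j t**j is linear in the
# coefficients c_j, so collocation reduces to least squares here.
cols = []
for j in range(deg + 1):
    deriv = j * t ** (j - 1) if j > 0 else np.zeros_like(t)
    cols.append(deriv + t ** j)
A = np.stack(cols, axis=1)

# Initial condition u(0) = 1 enters as a heavily weighted extra row,
# the analogue of a boundary-loss term in a PINN objective.
w = 100.0
ic_row = np.zeros(deg + 1); ic_row[0] = 1.0
A_full = np.vstack([A, w * ic_row])
rhs = np.concatenate([np.zeros(len(t)), [w]])

c, *_ = np.linalg.lstsq(A_full, rhs, rcond=None)
u1 = np.polyval(c[::-1], 1.0)              # fitted u(1)
```

A PINN replaces the polynomial with a neural network and the least-squares solve with gradient descent on the same residual-plus-boundary loss.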
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
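The connection invoked here has a simple particle form: the Fokker-Planck equation is the Wasserstein gradient flow of relative entropy, and its particle realization is overdamped Langevin dynamics dX = grad log pi(X) dt + sqrt(2) dW. A generic illustration with a standard Gaussian target (not the paper's second-order scheme):

```python
import numpy as np

# Target pi = N(0, 1), so grad log pi(x) = -x; the particle cloud's
# density follows the Fokker-Planck flow toward pi.
rng = np.random.default_rng(0)
n, dt, steps = 100_000, 0.01, 1000
x = rng.uniform(-3.0, 3.0, n)             # arbitrary initial density
for _ in range(steps):
    x += -x * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n)
```

After enough steps the empirical mean and variance of the cloud match the target, reflecting the decay of relative entropy along the flow.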
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.