Application of machine learning technique for a fast forecast of
aggregation kinetics in space-inhomogeneous systems
- URL: http://arxiv.org/abs/2312.04660v1
- Date: Thu, 7 Dec 2023 19:50:40 GMT
- Title: Application of machine learning technique for a fast forecast of
aggregation kinetics in space-inhomogeneous systems
- Authors: M.A. Larchenko, R.R. Zagidullin, V.V. Palyulin, N.V. Brilliantov
- Abstract summary: We show how to reduce the amount of direct computations with the use of modern machine learning (ML) techniques.
We demonstrate that the ML predictions for the space distribution of aggregates and their size distribution require drastically less computation time and agree fairly well with the results of direct numerical simulations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling of aggregation processes in space-inhomogeneous systems is
numerically very challenging, since complicated aggregation equations, the
Smoluchowski equations, must be solved at each spatial point along with the
computation of particle propagation. A low-rank approximation of the
aggregation kernels can significantly speed up the solution of the Smoluchowski
equations, while particle propagation can be performed in parallel. Yet
simulations with many aggregate sizes remain quite resource-demanding. Here, we
explore a way to reduce the amount of direct computation with the use of modern
machine learning (ML) techniques. Namely, we propose to replace the actual
numerical solution of the Smoluchowski equations with the respective density
transformations learned with a conditional normalising flow. We demonstrate
that the ML predictions for the space distribution of aggregates and their size
distribution require drastically less computation time and agree fairly well
with the results of direct numerical simulations. Such a quick forecast of the
space-dependent particle size distribution could be important in practice,
especially for the online prediction and visualisation of pollution processes,
providing a tool with a reasonable tradeoff between prediction accuracy and
computational time.
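The core idea of the abstract, replacing a numerical density solve with a learned invertible density transformation, can be illustrated with a minimal conditional normalizing flow. The sketch below is a toy single affine layer with hand-set parameters, not the paper's trained model: the conditioner, its parameters `w`, and the scalar context `c` (standing in for, e.g., time) are all illustrative assumptions. A real conditional normalising flow would stack many such invertible layers and learn the conditioner as a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioner(c, w):
    """Toy conditioner: maps the context c to a shift and log-scale.
    In a real flow this would be a trained neural network."""
    shift = w[0] * c + w[1]
    log_scale = np.tanh(w[2] * c + w[3])  # tanh keeps scales bounded
    return shift, log_scale

def forward(z, c, w):
    """Sampling direction: base noise z -> data x, given context c."""
    shift, log_scale = conditioner(c, w)
    return shift + np.exp(log_scale) * z

def log_prob(x, c, w):
    """Change of variables: log p(x|c) = log N(z; 0, 1) - log|df/dz|."""
    shift, log_scale = conditioner(c, w)
    z = (x - shift) * np.exp(-log_scale)       # invert the affine map
    log_base = -0.5 * (z**2 + np.log(2.0 * np.pi))
    return log_base - log_scale                # subtract the log-determinant

w = np.array([0.5, 0.0, 0.3, -0.2])  # untrained toy parameters
c = 1.0                              # illustrative context value

# Draw samples from the conditional model and evaluate their density.
z = rng.standard_normal(5)
x = forward(z, c, w)
print(log_prob(x, c, w))
```

Because the map is invertible with a tractable Jacobian, the same model provides both fast sampling (the `forward` direction) and exact density evaluation (`log_prob`), which is what allows it to stand in for a direct solve of the size distribution.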
Related papers
- Enhancing Computational Efficiency in Multiscale Systems Using Deep Learning of Coordinates and Flow Maps [0.0]
This paper showcases how deep learning techniques can be used to develop a precise time-stepping approach for multiscale systems.
The resulting framework achieves state-of-the-art predictive accuracy while incurring lower computational costs.
arXiv Detail & Related papers (2024-04-28T14:05:13Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
arXiv Detail & Related papers (2022-12-12T15:06:32Z) - Accelerating Part-Scale Simulation in Liquid Metal Jet Additive
Manufacturing via Operator Learning [0.0]
Part-scale predictions require many small-scale simulations.
A model describing droplet coalescence for LMJ may include coupled incompressible fluid flow, heat transfer, and phase change equations.
We apply an operator learning approach to learn a mapping between initial and final states of the droplet coalescence process.
arXiv Detail & Related papers (2022-02-02T17:24:16Z) - Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input-convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient diffusion and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z) - Deep learning approaches to surrogates for solving the diffusion
equation for mechanistic real-world simulations [0.0]
In medical, biological, physical and engineered models the numerical solution of partial differential equations (PDEs) can make simulations impractically slow.
Machine learning surrogates, neural networks trained to provide approximate solutions to such complicated numerical problems, can often provide speed-ups of several orders of magnitude compared to direct calculation.
We use a Convolutional Neural Network to approximate the stationary solution to the diffusion equation in the case of two equal-diameter, circular, constant-value sources.
arXiv Detail & Related papers (2021-02-10T16:15:17Z) - Particles to Partial Differential Equations Parsimoniously [0.0]
Coarse-grained effective Partial Differential Equations can lead to considerable savings in computation-intensive tasks like prediction or control.
We propose a framework combining artificial neural networks with multiscale computation, in the form of equation-free numerics.
We illustrate our approach by extracting coarse-grained evolution equations from particle-based simulations with a priori unknown macro-scale variable.
arXiv Detail & Related papers (2020-11-09T15:51:24Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - Real-Time Regression with Dividing Local Gaussian Processes [62.01822866877782]
Local Gaussian processes are a novel, computationally efficient modeling approach based on Gaussian process regression.
Due to an iterative, data-driven division of the input space, they achieve a sublinear computational complexity in the total number of training points in practice.
A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
arXiv Detail & Related papers (2020-06-16T18:43:31Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.