Machine Learning Product State Distributions from Initial Reactant
States for a Reactive Atom-Diatom Collision System
- URL: http://arxiv.org/abs/2111.03563v1
- Date: Fri, 5 Nov 2021 15:36:27 GMT
- Title: Machine Learning Product State Distributions from Initial Reactant
States for a Reactive Atom-Diatom Collision System
- Authors: Julian Arnold, Juan Carlos San Vicente Veliz, Debasish Koner, Narendra
Singh, Raymond J. Bemish, and Markus Meuwly
- Abstract summary: A machine learned (ML) model for predicting product state distributions from specific initial states is presented.
The prediction accuracy as quantified by the root-mean-squared difference is high for the test set and for off-grid state-specific initial conditions.
The STD model is well-suited for simulating nonequilibrium high-speed flows.
- Score: 2.678461526933908
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A machine learned (ML) model for predicting product state distributions from
specific initial states (state-to-distribution or STD) for reactive atom-diatom
collisions is presented and quantitatively tested for the N($^4$S)+O$_{2}$(X$^3
\Sigma_{\rm g}^{-}$) $\rightarrow$ NO(X$^2\Pi$) + O($^3$P) reaction. The
reference data set for training the neural network (NN) consists of final state
distributions determined from explicit quasi-classical trajectory (QCT)
simulations for $\sim 2000$ initial conditions. Overall, the prediction
accuracy, as quantified by the root-mean-squared difference $(\sim 0.003)$ and
the $R^2$ $(\sim 0.99)$ between the reference QCT results and the predictions
of the STD model, is high both for the test set and off-grid state-specific
initial conditions and for initial conditions drawn from reactant state
distributions characterized by translational, rotational, and vibrational
temperatures.
Compared with a coarser-grained distribution-to-distribution (DTD) model
evaluated on the same initial state distributions, the STD model shows
comparable performance, with the additional benefit of state resolution in
the reactant preparation. Starting from specific initial states also leads to
a more diverse range of final state distributions, which requires a more
expressive neural network than in the DTD case. Direct comparison
between explicit QCT simulations, the STD model, and the widely used
Larsen-Borgnakke (LB) model shows that the STD model is quantitative whereas
the LB model is qualitative at best for rotational distributions $P(j')$ and
fails for vibrational distributions $P(v')$. As such, the STD model is
well-suited for simulating nonequilibrium high-speed flows, e.g., using the
direct simulation Monte Carlo method.
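As a rough illustration of what such a state-to-distribution mapping looks like in code, below is a minimal numpy sketch of a feed-forward network that takes an initial reactant state $(v, j, E_{\rm trans})$ to a normalized product state distribution. The input features, layer sizes, and output binning are illustrative placeholders, not the architecture of the paper; in practice the weights would be trained against the QCT reference distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Illustrative STD-style mapping: initial reactant state -> product state
# distribution. Inputs: vibrational quantum number v, rotational quantum
# number j, relative translational energy E_trans. Output: a normalized
# distribution over (here) 30 product rotational bins. All sizes are
# placeholders, and the weights below are random rather than trained.
n_in, n_hidden, n_bins = 3, 64, 30
W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_bins, n_hidden))
b2 = np.zeros(n_bins)

def std_predict(v, j, e_trans):
    x = np.array([v, j, e_trans], dtype=float)
    h = np.tanh(W1 @ x + b1)        # hidden representation of the initial state
    return softmax(W2 @ h + b2)     # normalized P(j') over product bins

p_jprime = std_predict(v=0, j=10, e_trans=0.5)
assert abs(p_jprime.sum() - 1.0) < 1e-9  # a valid probability distribution
```

The softmax output guarantees a normalized distribution by construction, which is convenient when predictions are scored against QCT distributions with an RMSD-type metric.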
Related papers
- Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data [20.177824207096396]
Conditional density characterizes the distribution of a response variable $y$ given predictors $x$. In this work, we extend NF neural networks to the case where an external $x$ is present. We show that an unconditional NF neural network, based on an unsupervised model of $z$, fails to generate interpretable results.
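For context, the standard change-of-variables identity behind (conditional) normalizing flows, with the conditioning entering through a bijection that depends on $x$; this is the generic NF construction, not necessarily the specific model of the paper:

```latex
% Conditional normalizing flow: a bijection f_\theta(\cdot\,; x) maps y to a
% latent z with simple base density p_z; the conditional density then follows
% from the change-of-variables formula.
p_\theta(y \mid x)
  = p_z\bigl(f_\theta(y; x)\bigr)
    \left| \det \frac{\partial f_\theta(y; x)}{\partial y} \right|
```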
arXiv Detail & Related papers (2025-07-06T02:58:52Z) - Conditional Diffusion Models Based Conditional Independence Testing [8.34871567507739]
The conditional randomization test (CRT) was recently introduced to test whether two random variables, $X$ and $Y$, are conditionally independent given a third variable $Z$.
We propose using conditional diffusion models (CDMs) to learn the distribution of $X \mid Z$.
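A minimal sketch of the CRT loop under stated assumptions: `sample_x_given_z` stands in for the learned conditional model of $X \mid Z$ (a conditional diffusion model in the paper), and the correlation-based test statistic is an arbitrary placeholder.

```python
import numpy as np

def crt_pvalue(x, y, z, sample_x_given_z, n_resamples=500, rng=None):
    """Conditional randomization test of X independent of Y given Z.

    sample_x_given_z(z, rng) must draw a fresh X from a model of X|Z
    (in the paper, a conditional diffusion model; here, any sampler)."""
    rng = rng or np.random.default_rng()
    stat = lambda xs, ys: abs(np.corrcoef(xs, ys)[0, 1])  # placeholder statistic
    t_obs = stat(x, y)
    t_null = np.array([stat(sample_x_given_z(z, rng), y)
                       for _ in range(n_resamples)])
    # p-value: fraction of resampled statistics at least as extreme as observed
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resamples)
```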
arXiv Detail & Related papers (2024-12-16T13:03:18Z) - Non-asymptotic bounds for forward processes in denoising diffusions: Ornstein-Uhlenbeck is hard to beat [49.1574468325115]
This paper presents explicit non-asymptotic bounds on the forward diffusion error in total variation (TV).
We parametrise multi-modal data distributions in terms of the distance $R$ to their furthest modes and consider forward diffusions with additive and multiplicative noise.
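For reference, the Ornstein-Uhlenbeck process named in the title is the standard additive-noise forward diffusion; its transition law is the familiar one below (stated here for orientation, not as the paper's contribution):

```latex
% Ornstein-Uhlenbeck forward diffusion (additive noise) and its transition law:
dX_t = -X_t\,dt + \sqrt{2}\,dB_t,
\qquad
X_t \mid X_0 \sim \mathcal{N}\!\left(e^{-t} X_0,\; \bigl(1 - e^{-2t}\bigr) I\right)
```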
arXiv Detail & Related papers (2024-08-25T10:28:31Z) - A Sharp Convergence Theory for The Probability Flow ODEs of Diffusion Models [45.60426164657739]
We develop non-asymptotic convergence theory for a diffusion-based sampler.
We prove that $d/\varepsilon$ iterations are sufficient to approximate the target distribution to within $\varepsilon$ total-variation distance.
Our results also characterize how $\ell_2$ score estimation errors affect the quality of the data generation processes.
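Paraphrasing the headline claim in symbols, with constants and logarithmic factors suppressed (the exact statement is in the paper):

```latex
% Probability flow ODE sampler: number of discretization steps T needed for
% \varepsilon accuracy in total variation, hiding constants and log factors:
T \;\gtrsim\; \frac{d}{\varepsilon}
\quad \Longrightarrow \quad
\mathrm{TV}\bigl(p_T,\; p_{\mathrm{data}}\bigr) \;\le\; \varepsilon
```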
arXiv Detail & Related papers (2024-08-05T09:02:24Z) - Practical and Asymptotically Exact Conditional Sampling in Diffusion Models [35.686996120862055]
A conditional generation method should provide exact samples for a broad range of conditional distributions without requiring task-specific training.
We introduce the Twisted Diffusion Sampler, or TDS, a sequential Monte Carlo algorithm that targets the conditional distributions of diffusion models through simulating a set of weighted particles.
On benchmark test cases, TDS allows flexible conditioning criteria and often outperforms the state of the art.
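A generic propagate-reweight-resample step of the kind TDS builds on; the reverse-diffusion transition and the twisting-style potential are abstracted into placeholder callables, so this is the SMC skeleton rather than the TDS algorithm itself.

```python
import numpy as np

def smc_step(particles, log_w, transition, log_potential, rng):
    """One generic SMC step: propagate, reweight, resample.

    transition(p, rng): one reverse-diffusion step for a particle (placeholder).
    log_potential(p): log weight from the conditioning information; in TDS
    this role is played by the twisting functions (placeholder here)."""
    particles = np.array([transition(p, rng) for p in particles])
    log_w = log_w + np.array([log_potential(p) for p in particles])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Multinomial resampling concentrates particles on high-weight regions
    # of the targeted conditional distribution.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.zeros(len(particles))
```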
arXiv Detail & Related papers (2023-06-30T16:29:44Z) - Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative
Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z) - Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples [0.7087237546722617]
Uncertainty propagation is a technique to determine model output uncertainties based on the uncertainty in its input variables.
In this work, we investigate the hypothesis that while all samples are useful on average, some samples must be more useful than others.
We introduce a methodology to adaptively reorder MC samples and show how it results in reduction of computational expense of UP processes.
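A toy version of that idea: evaluate samples in an informativeness-guided order and stop once the running output statistics stabilize. The cheap ranking score and the stopping tolerance are illustrative choices, not the paper's method.

```python
import numpy as np

def propagate_uncertainty(samples, model, score, tol=1e-3):
    """Uncertainty propagation with adaptively reordered MC samples.

    score(sample) is a cheap proxy used to evaluate the most informative
    samples first (illustrative); stop when the running mean of the model
    output changes by less than tol between evaluations."""
    order = np.argsort([-score(s) for s in samples])  # most useful first
    outputs, prev_mean = [], None
    for i in order:
        outputs.append(model(samples[i]))
        mean = np.mean(outputs)
        if prev_mean is not None and abs(mean - prev_mean) < tol:
            break  # statistics converged; remaining evaluations skipped
        prev_mean = mean
    return np.mean(outputs), np.std(outputs), len(outputs)
```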
arXiv Detail & Related papers (2023-02-09T21:28:15Z) - CARD: Classification and Regression Diffusion Models [51.0421331214229]
We introduce classification and regression diffusion (CARD) models, which combine a conditional generative model and a pre-trained conditional mean estimator.
We demonstrate the outstanding ability of CARD in conditional distribution prediction with both toy examples and real-world datasets.
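One way to read the construction: the pretrained estimator supplies the conditional mean and the generative model supplies the distribution of the residual around it. A sketch under that reading, with a Gaussian sampler standing in for the diffusion component:

```python
import numpy as np

def card_style_sample(x, mean_estimator, residual_sampler, n=1000, rng=None):
    """Sample an approximate conditional p(y|x) by adding generated residuals
    to a pretrained conditional mean.

    residual_sampler(x, n, rng) stands in for CARD's conditional diffusion
    model of y - E[y|x]; any residual sampler fits this skeleton."""
    rng = rng or np.random.default_rng()
    return mean_estimator(x) + residual_sampler(x, n, rng)

# Toy usage with placeholder components (not the paper's trained models):
mean_est = lambda x: 2.0 * x + 1.0
resid = lambda x, n, rng: rng.normal(0.0, 0.5, size=n)
draws = card_style_sample(3.0, mean_est, resid)
```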
arXiv Detail & Related papers (2022-06-15T03:30:38Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
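The generic likelihood-free recipe, shown here as simple rejection ABC for concreteness; the paper uses modern neural LFI rather than this basic scheme, and the prior, forward model, and distance are placeholder callables.

```python
import numpy as np

def rejection_abc(observed, prior, forward, distance, eps, n_draws, rng=None):
    """Basic likelihood-free inference: keep parameters whose simulated data
    land within eps of the observation.

    prior(rng) -> theta; forward(theta) -> simulated data. Both stand in for
    the paper's grey-matter forward model and its parameter priors."""
    rng = rng or np.random.default_rng()
    accepted = []
    for _ in range(n_draws):
        theta = prior(rng)
        if distance(forward(theta), observed) < eps:
            accepted.append(theta)  # theta is consistent with the observation
    return np.array(accepted)  # draws from the approximate posterior
```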
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Generative Network-Based Reduced-Order Model for Prediction, Data
Assimilation and Uncertainty Quantification [0.0]
We propose a new method in which a generative network (GN) is integrated into a reduced-order model (ROM) framework.
The aim is to match available measurements and estimate the corresponding uncertainties associated with the states and parameters of a physical simulation.
arXiv Detail & Related papers (2021-05-28T14:12:45Z) - Comparing Probability Distributions with Conditional Transport [63.11403041984197]
We propose conditional transport (CT) as a new divergence and approximate it with the amortized CT (ACT) cost.
ACT amortizes the computation of its conditional transport plans and comes with unbiased sample gradients that are straightforward to compute.
On a wide variety of benchmark datasets for generative modeling, substituting the default statistical distance of an existing generative adversarial network with ACT is shown to consistently improve performance.
arXiv Detail & Related papers (2020-12-28T05:14:22Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^{5}\varepsilon^{2}} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
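For orientation, the per-sample update whose complexity the bound characterizes: asynchronous Q-learning updates one (state, action) entry at a time along a single Markovian trajectory. A minimal sketch with illustrative step size and discount:

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One asynchronous Q-learning update: only the visited (s, a) entry
    of the tabular Q-function changes; all other entries are untouched."""
    td_target = r + gamma * Q[s_next].max()   # bootstrapped one-step target
    Q[s, a] += alpha * (td_target - Q[s, a])  # move toward the TD target
    return Q
```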
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.