Molecular relaxation by reverse diffusion with time step prediction
- URL: http://arxiv.org/abs/2404.10935v2
- Date: Sat, 3 Aug 2024 13:26:31 GMT
- Title: Molecular relaxation by reverse diffusion with time step prediction
- Authors: Khaled Kahouli, Stefaan Simon Pierre Hessmann, Klaus-Robert Müller, Shinichi Nakajima, Stefan Gugler, Niklas Wolf Andreas Gebauer
- Abstract summary: We propose MoreRed, molecular relaxation by reverse diffusion.
MoreRed learns a simpler pseudo potential energy surface (PES) instead of the complex physical PES.
We evaluate the root-mean-square deviation between the found equilibrium structures and the reference equilibrium structures as well as their energies.
- Score: 13.834005606387706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Molecular relaxation, finding the equilibrium state of a non-equilibrium structure, is an essential component of computational chemistry to understand reactivity. Classical force field (FF) methods often rely on insufficient local energy minimization, while neural network FF models require large labeled datasets encompassing both equilibrium and non-equilibrium structures. As a remedy, we propose MoreRed, molecular relaxation by reverse diffusion, a conceptually novel and purely statistical approach where non-equilibrium structures are treated as noisy instances of their corresponding equilibrium states. To enable the denoising of arbitrarily noisy inputs via a generative diffusion model, we further introduce a novel diffusion time step predictor. Notably, MoreRed learns a simpler pseudo potential energy surface (PES) instead of the complex physical PES. It is trained on a significantly smaller, and thus computationally cheaper, dataset consisting of solely unlabeled equilibrium structures, avoiding the computation of non-equilibrium structures altogether. We compare MoreRed to classical FFs, equivariant neural network FFs trained on a large dataset of equilibrium and non-equilibrium data, as well as a semi-empirical tight-binding model. To assess this quantitatively, we evaluate the root-mean-square deviation between the found equilibrium structures and the reference equilibrium structures as well as their energies.
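The abstract suggests a simple relaxation loop: predict how noisy the input structure is, then denoise by reverse diffusion until the predicted time step reaches zero. Below is a minimal sketch of that idea, assuming a DDPM-style noise schedule and hypothetical trained networks `eps_model` (denoiser) and `time_model` (time step predictor); it illustrates the concept and is not the paper's implementation.

```python
import numpy as np

def relax(x, eps_model, time_model, betas):
    """Reverse-diffusion relaxation with time step prediction (sketch).

    x          -- atomic positions of a non-equilibrium structure
    eps_model  -- assumed denoiser: eps_model(x, t) -> predicted noise
    time_model -- assumed predictor: time_model(x) -> integer step in [0, len(betas)-1]
    betas      -- DDPM-style variance schedule
    """
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    t = time_model(x)  # estimate how "noisy" the input structure is
    while t > 0:
        eps = eps_model(x, t)
        # standard DDPM posterior mean for the reverse step x_t -> x_{t-1}
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 1:
            x = x + np.sqrt(betas[t]) * np.random.randn(*x.shape)
        t = time_model(x)  # re-estimate the step instead of decrementing it
    return x
```

Re-estimating t after every step, rather than tracking it externally as in standard diffusion sampling, is what lets the loop accept arbitrarily noisy inputs.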
Related papers
- Neural equilibria for long-term prediction of nonlinear conservation laws [38.88412478541979]
We introduce Neural Discrete Equilibrium (NeurDE), a machine learning (ML) approach for long-term forecasting of flow phenomena.
We show that NeurDE enables accurate prediction of compressible flows, including supersonic flows, while tracking shocks over hundreds of time steps.
arXiv Detail & Related papers (2025-01-12T21:02:20Z)
- Model-free learning of probability flows: Elucidating the nonequilibrium dynamics of flocking [15.238808518078567]
The high dimensionality of the phase space renders traditional computational techniques infeasible for estimating the entropy production rate (EPR).
We derive a new physical connection between the probability current and two local definitions of the EPR for inertial systems.
Our results highlight that entropy is consumed on the spatial interface of a flock as the interplay between alignment and fluctuation dynamically creates and annihilates order.
arXiv Detail & Related papers (2024-11-21T17:08:06Z)
- Tight Stability, Convergence, and Robustness Bounds for Predictive Coding Networks [60.3634789164648]
Energy-based learning algorithms, such as predictive coding (PC), have garnered significant attention in the machine learning community.
We rigorously analyze the stability, robustness, and convergence of PC through the lens of dynamical systems theory.
arXiv Detail & Related papers (2024-10-07T02:57:26Z)
- Neural force functional for non-equilibrium many-body colloidal systems [0.20971479389679337]
We combine power functional theory and machine learning to study non-equilibrium overdamped many-body systems of colloidal particles.
We first sample the steady-state one-body fields relevant for the dynamics from computer simulations of Brownian particles.
A neural network is then trained with this data to represent locally in space the formally exact functional mapping from the one-body density and velocity profiles to the one-body internal force field.
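A minimal sketch of such a locally resolved functional mapping, assuming profiles discretized on a 1D grid; the architecture, window size, and names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

# Local functional mapping: one-body density and velocity profiles on a grid
# -> one-body internal force field. Purely illustrative architecture.
model = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=9, padding=4),  # inputs: (rho(x), v(x))
    nn.SiLU(),
    nn.Conv1d(32, 32, kernel_size=9, padding=4),
    nn.SiLU(),
    nn.Conv1d(32, 1, kernel_size=9, padding=4),  # output: f_int(x)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(rho, vel, f_int):
    # rho, vel, f_int: (batch, grid) tensors sampled from Brownian simulations
    inp = torch.stack([rho, vel], dim=1)         # (batch, 2, grid)
    pred = model(inp).squeeze(1)
    loss = nn.functional.mse_loss(pred, f_int)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```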
arXiv Detail & Related papers (2024-06-05T19:57:23Z)
- Improving equilibrium propagation without weight symmetry through Jacobian homeostasis [7.573586022424398]
Equilibrium propagation (EP) is a compelling alternative to the backpropagation of error algorithm (BP).
EP requires weight symmetry and infinitesimal equilibrium perturbations, i.e., nudges, to estimate unbiased gradients efficiently.
We show that the finite nudge does not pose a problem, as exact derivatives can still be estimated via a Cauchy integral.
We present a new homeostatic objective that directly mitigates functional asymmetries of the Jacobian at the network's fixed point.
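The Cauchy-integral idea can be illustrated in a few lines: for an analytic function, the derivative at zero nudge is recovered from finitely many finite nudges placed on a circle in the complex plane. This is a sketch of the general technique, not the paper's gradient estimator.

```python
import numpy as np

def cauchy_derivative(f, r=0.1, n=8):
    """Estimate f'(0) from finite 'nudges' via the Cauchy integral formula:
    f'(0) = (1 / 2*pi*i) * contour_integral of f(z)/z^2 dz
    evaluated on a circle of radius r with n quadrature points."""
    thetas = 2 * np.pi * np.arange(n) / n
    zs = r * np.exp(1j * thetas)               # finite complex nudges
    vals = np.array([f(z) for z in zs])
    return np.mean(vals * np.exp(-1j * thetas)) / r

# Example: d/dx exp(x) at x = 0 is 1; the estimate is exact up to O(r^n).
print(cauchy_derivative(np.exp).real)
```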
arXiv Detail & Related papers (2023-09-05T13:20:43Z)
- Towards Predicting Equilibrium Distributions for Molecular Systems with Deep Learning [60.02391969049972]
We introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems.
DiG employs deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system.
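One way to realize such a transformation is annealed Langevin sampling driven by a learned conditional score. The sketch below illustrates that idea with a hypothetical `score_model`; it is not DiG's actual architecture or training procedure.

```python
import torch

@torch.no_grad()
def sample_equilibrium(score_model, d, shape, steps=1000, eps=1e-3):
    """Transform a simple base distribution toward an equilibrium
    distribution conditioned on a molecular descriptor d (sketch)."""
    x = torch.randn(shape)                      # simple base distribution
    for t in reversed(range(1, steps + 1)):
        tau = torch.full((shape[0],), t / steps)
        # annealed Langevin step driven by the conditional score s(x, tau, d)
        x = x + eps * score_model(x, tau, d) + (2 * eps) ** 0.5 * torch.randn_like(x)
    return x
```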
arXiv Detail & Related papers (2023-06-08T17:12:08Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore the parallels between training dynamics and physical systems.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
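A sketch of one SGLD epoch with without-replacement minibatching, so each data point is visited exactly once per epoch; names and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

def sgld_epoch(theta, grad_loss, data, lr=1e-3, temp=1.0, batch=32, rng=None):
    """One epoch of stochastic gradient Langevin dynamics with
    without-replacement minibatching (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(data))            # shuffle once per epoch:
    for start in range(0, len(idx), batch):     # each point used exactly once
        mb = data[idx[start:start + batch]]
        g = grad_loss(theta, mb)                # assumed minibatch gradient
        noise = np.sqrt(2 * lr * temp) * rng.standard_normal(theta.shape)
        theta = theta - lr * g + noise          # Langevin update
    return theta
```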
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Global Convergence of Over-parameterized Deep Equilibrium Models [52.65330015267245]
A deep equilibrium model (DEQ) is implicitly defined through an equilibrium point of an infinite-depth weight-tied model with an input-injection.
Instead of unrolling infinite computations, it solves for the equilibrium point directly with root-finding and computes gradients with implicit differentiation.
We propose a novel probabilistic framework to overcome the technical difficulty in the non-asymptotic analysis of infinite-depth weight-tied models.
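The forward and backward passes this describes can be sketched directly: solve for the fixed point, then differentiate the fixed-point condition rather than the solver. This is a standard DEQ sketch, not this paper's probabilistic construction.

```python
import torch

def deq_forward(f, x, z0, iters=50):
    """Solve z* = f(z*, x) by naive fixed-point iteration (root-finding sketch)."""
    z = z0
    for _ in range(iters):
        z = f(z, x)
    return z

def deq_backward(f, x, z_star, grad_z, iters=50):
    """Implicit differentiation at the fixed point. With J = df/dz at z*,
    grad_x = (df/dx)^T (I - J^T)^{-1} grad_z; the inverse is applied by
    iterating u <- grad_z + J^T u until convergence."""
    u = grad_z.clone()
    for _ in range(iters):
        _, jtu = torch.autograd.functional.vjp(lambda z: f(z, x), z_star, u)
        u = grad_z + jtu
    return torch.autograd.functional.vjp(lambda xx: f(z_star, xx), x, u)[1]
```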
arXiv Detail & Related papers (2022-05-27T08:00:13Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of stochasticity in its success remains unclear.
We show that multiplicative noise, arising from variance in local rates of convergence, commonly produces heavy-tailed stationary behaviour in the parameters.
A detailed analysis shows that key factors, including step size, batch size, and data, yield similar behaviour on state-of-the-art neural network models.
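The mechanism is easy to demonstrate with a Kesten-type linear recursion: light-tailed multiplicative noise yields a heavy-tailed stationary distribution. This is an illustrative toy, not the paper's experiments.

```python
import numpy as np

# x <- a*x + b with light-tailed multiplicative noise a and additive noise b.
# The stationary distribution of x is heavy-tailed despite Gaussian inputs.
rng = np.random.default_rng(0)
n, steps = 100_000, 200
x = np.zeros(n)
for _ in range(steps):
    a = 1.0 + 0.5 * rng.standard_normal(n)   # multiplicative (light-tailed) noise
    b = rng.standard_normal(n)               # additive noise
    x = a * x + b
print(np.mean(np.abs(x) > 10), np.max(np.abs(x)))  # heavy tail: rare huge values
```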
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Parsimonious neural networks learn interpretable physical laws [77.34726150561087]
We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony.
The power and versatility of the approach is demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties.
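A toy sketch of the accuracy-parsimony trade-off: evolve polynomial coefficients with a penalty on the number of nonzero terms. This is purely illustrative; PNNs evolve neural network models, not this toy polynomial family.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2 * x ** 2 - x                                      # hidden "law" to recover

def fitness(c, lam=0.05):
    err = np.mean((np.polyval(c, x) - y) ** 2)          # accuracy term
    return err + lam * np.count_nonzero(np.round(c, 2)) # parsimony penalty

pop = rng.normal(size=(64, 5))                          # degree-4 candidates
for _ in range(300):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[:16]]              # select the fittest
    pop = np.concatenate([parents] * 4)
    pop[16:] += 0.1 * rng.normal(size=pop[16:].shape)   # mutate the offspring
best = pop[np.argmin([fitness(c) for c in pop])]
print(np.round(best, 2))                                # should approach [0, 0, 2, -1, 0]
```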
arXiv Detail & Related papers (2020-05-08T16:15:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.