The Role of Reference Points in Machine-Learned Atomistic Simulation Models
- URL: http://arxiv.org/abs/2310.18552v1
- Date: Sat, 28 Oct 2023 01:02:14 GMT
- Title: The Role of Reference Points in Machine-Learned Atomistic Simulation Models
- Authors: Xiangyun Lei, Weike Ye, Joseph Montoya, Tim Mueller, Linda Hung, Jens Hummelshoej
- Abstract summary: Chemical Environment Modeling Theory (CEMT) is designed to overcome the limitations inherent in traditional atom-centered Machine Learning Force Field (MLFF) models.
It allows spatially resolved energy densities and charge densities from FE-DFT calculations to be leveraged.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces the Chemical Environment Modeling Theory (CEMT), a novel, generalized framework designed to overcome the limitations inherent in traditional atom-centered Machine Learning Force Field (MLFF) models, which are widely used in atomistic simulations of chemical systems. CEMT demonstrates enhanced flexibility and adaptability by allowing reference points to exist anywhere within the modeled domain, thereby enabling the study of various model architectures. Using Gaussian Multipole (GMP) featurization functions, several models with different reference point sets, including finite-difference grid-centered and bond-centered models, are tested to analyze the differences in capability intrinsic to models built on distinct reference points. The results underscore the potential of non-atom-centered reference points in force training, revealing variations in prediction accuracy, inference speed, and learning efficiency. Finally, a unique connection between CEMT and real-space, orbital-free finite-element Density Functional Theory (FE-DFT) is established, with implications that include improved data efficiency and robustness: the connection allows spatially resolved energy densities and charge densities from FE-DFT calculations to be leveraged, and it serves as a pivotal step towards integrating known quantum-mechanical laws into the architecture of ML models.
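To make the core idea concrete, here is a minimal sketch in the spirit of GMP featurization, but not the paper's implementation, of descriptors evaluated at arbitrary reference points (grid nodes, bond midpoints) that are decoupled from atomic positions. The function name, Gaussian widths, and choice of moments are illustrative assumptions only.

```python
import numpy as np

def reference_point_features(ref_points, atom_positions, atom_numbers,
                             sigmas=(0.5, 1.0, 2.0)):
    """Toy, GMP-inspired descriptors evaluated at arbitrary reference points.

    For each reference point and Gaussian width, accumulate a monopole-like
    scalar moment and the norm of a dipole-like vector moment over all atoms.
    This is a simplified illustration; the paper's GMP featurization involves
    higher-order multipoles and element-specific densities.
    """
    ref_points = np.asarray(ref_points, dtype=float)          # (M, 3)
    atom_positions = np.asarray(atom_positions, dtype=float)  # (N, 3)
    weights = np.asarray(atom_numbers, dtype=float)           # crude per-element weight

    feats = []
    for p in ref_points:
        row = []
        disp = atom_positions - p                 # (N, 3) displacement vectors
        r2 = np.sum(disp**2, axis=1)              # squared distances
        for s in sigmas:
            g = weights * np.exp(-r2 / (2.0 * s**2))    # Gaussian-weighted contributions
            monopole = g.sum()                          # order-0 moment (invariant)
            dipole = (g[:, None] * disp).sum(axis=0)    # order-1 moment (vector)
            row += [monopole, np.linalg.norm(dipole)]   # keep rotation-invariant norm
        feats.append(row)
    return np.array(feats)                        # (M, 2 * len(sigmas))

# Reference points are decoupled from atoms: e.g. a coarse grid plus a bond midpoint.
atoms = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1]])   # a toy diatomic
numbers = [6, 8]                                        # C, O
grid = np.stack(np.meshgrid(*[np.linspace(-1, 2, 4)] * 3), axis=-1).reshape(-1, 3)
bond_midpoint = atoms.mean(axis=0, keepdims=True)
X = reference_point_features(np.vstack([grid, bond_midpoint]), atoms, numbers)
print(X.shape)   # (65, 6): one feature row per reference point
```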
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
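As a rough illustration only (not the authors' model or training code), the sketch below samples a latent initial state from an energy-based prior via short-run Langevin dynamics and then rolls out a simple latent ODE with an explicit Euler integrator; the maximum-likelihood/MCMC training loop is omitted, and all architectures and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

latent_dim, obs_dim = 8, 3

# Energy-based prior over the latent initial state z0 (illustrative architecture).
energy = nn.Sequential(nn.Linear(latent_dim, 64), nn.SiLU(), nn.Linear(64, 1))
# Latent dynamics dz/dt = f(z) and a decoder back to observation space.
dynamics = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
decoder = nn.Linear(latent_dim, obs_dim)

def sample_prior(n, steps=50, step_size=0.05):
    """Short-run Langevin dynamics targeting exp(-E(z)) (sketch only)."""
    z = torch.randn(n, latent_dim)
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(z).sum(), z)[0]
        z = z - 0.5 * step_size**2 * grad + step_size * torch.randn_like(z)
    return z.detach()

def generate(n, t_steps=20, dt=0.1):
    """Sample z0 from the EBM prior, integrate the latent ODE (Euler), decode."""
    z = sample_prior(n)
    traj = []
    for _ in range(t_steps):
        z = z + dt * dynamics(z)           # explicit Euler step of dz/dt = f(z)
        traj.append(decoder(z))
    return torch.stack(traj, dim=1)        # (n, t_steps, obs_dim)

x = generate(4)
print(x.shape)  # torch.Size([4, 20, 3])
```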
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Differentiable Neural-Integrated Meshfree Method for Forward and Inverse Modeling of Finite Strain Hyperelasticity [1.290382979353427]
The present study aims to extend the novel physics-informed machine learning approach, specifically the neural-integrated meshfree (NIM) method, to model finite-strain problems.
Thanks to its inherent differentiable-programming capabilities, NIM can circumvent the need to derive the Newton-Raphson linearization of the variational form.
NIM is applied to identify heterogeneous mechanical properties of hyperelastic materials from strain data, validating its effectiveness in the inverse modeling of nonlinear materials.
arXiv Detail & Related papers (2024-07-15T19:15:18Z)
- Interpolation and differentiation of alchemical degrees of freedom in machine learning interatomic potentials [1.1016723046079784]
We report the use of continuous and differentiable alchemical degrees of freedom in atomistic materials simulations.
The proposed method introduces alchemical atoms with corresponding weights into the input graph, alongside modifications to the message-passing and readout mechanisms of MLIPs.
The end-to-end differentiability of MLIPs enables efficient calculation of the gradient of energy with respect to the compositional weights.
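As a hedged illustration of that last point (not the MLIP architecture from the paper), the snippet below attaches continuous compositional weights to per-atom element embeddings and uses automatic differentiation to obtain the gradient of a toy energy with respect to those weights; the softmax parameterization and the readout network are assumptions standing in for a message-passing MLIP.

```python
import torch
import torch.nn as nn

n_atoms, n_elements, emb_dim = 4, 2, 16

# Continuous "alchemical" weights: each atom is a mixture of candidate elements.
# The softmax keeps each atom's weights positive and summing to one (a modeling
# choice for this sketch, not necessarily the paper's parameterization).
logits = torch.zeros(n_atoms, n_elements, requires_grad=True)
element_embeddings = nn.Embedding(n_elements, emb_dim)

# Toy stand-in for an MLIP readout: real models use message passing over the
# interatomic geometry; here we only illustrate differentiability w.r.t. weights.
readout = nn.Sequential(nn.Linear(emb_dim, 32), nn.SiLU(), nn.Linear(32, 1))

weights = torch.softmax(logits, dim=-1)                    # (n_atoms, n_elements)
mixed_embedding = weights @ element_embeddings.weight      # (n_atoms, emb_dim)
energy = readout(mixed_embedding).sum()                    # scalar total energy

# dE/d(alchemical weights) via backpropagation: this is the quantity that makes
# compositional interpolation and optimization efficient.
grad_wrt_logits, = torch.autograd.grad(energy, logits)
print(grad_wrt_logits.shape)   # torch.Size([4, 2])
```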
arXiv Detail & Related papers (2024-04-16T17:24:22Z)
- CoCoGen: Physically-Consistent and Conditioned Score-based Generative Models for Forward and Inverse Problems [1.0923877073891446]
This work extends the reach of generative models into physical problem domains.
We present an efficient approach to promote consistency with the underlying PDE.
We showcase the potential and versatility of score-based generative models in various physics tasks.
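As a rough, hypothetical illustration of what promoting PDE consistency can look like (the paper's actual mechanism may differ, e.g. by injecting PDE information into the score or the sampling steps), the sketch below takes a field standing in for a generative-model sample and relaxes it toward a discretized Laplace equation with fixed boundary values.

```python
import numpy as np

def laplace_residual(u, h=1.0):
    """Interior residual of the 2-D Laplace equation using a 5-point stencil."""
    return (u[1:-1, 2:] + u[1:-1, :-2] + u[2:, 1:-1] + u[:-2, 1:-1]
            - 4.0 * u[1:-1, 1:-1]) / h**2

def pde_consistency_steps(u, n_steps=100, lr=0.1):
    """Relax a sampled field toward satisfying the PDE, keeping the boundary fixed.

    In a score-based sampler this kind of correction would typically be interleaved
    with the reverse-diffusion updates; here it is applied once, after sampling,
    purely for illustration.
    """
    u = u.copy()
    for _ in range(n_steps):
        r = laplace_residual(u)
        # Damped Jacobi-style update: adding a fraction of the residual moves each
        # interior value toward the average of its neighbours, shrinking the
        # residual while the boundary values stay fixed.
        u[1:-1, 1:-1] += lr * r
    return u

rng = np.random.default_rng(0)
sample = rng.normal(size=(32, 32))        # stand-in for a generative-model sample
refined = pde_consistency_steps(sample)
print(np.abs(laplace_residual(sample)).mean(),
      np.abs(laplace_residual(refined)).mean())
```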
arXiv Detail & Related papers (2023-12-16T19:56:10Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Descriptors for Machine Learning Model of Generalized Force Field in Condensed Matter Systems [3.9811842769009034]
We outline the general framework of machine learning (ML) methods for multi-scale dynamical modeling of condensed matter systems.
We focus on the group-theoretical method that offers a systematic and rigorous approach to compute invariants based on the bispectrum coefficients.
arXiv Detail & Related papers (2022-01-03T18:38:26Z)
- Pseudo-Spherical Contrastive Divergence [119.28384561517292]
We propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of energy-based models.
PS-CD avoids the intractable partition function and provides a generalized family of learning objectives.
arXiv Detail & Related papers (2021-11-01T09:17:15Z)
- Quaternion Factorization Machines: A Lightweight Solution to Intricate Feature Interaction Modelling [76.89779231460193]
The factorization machine (FM) is capable of automatically learning high-order interactions among features to make predictions without the need for manual feature engineering.
We propose the quaternion factorization machine (QFM) and quaternion neural factorization machine (QNFM) for sparse predictive analytics.
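For context on the model family, here is a sketch of the standard real-valued second-order factorization machine that QFM extends with quaternion-valued embeddings and Hamilton products; it is not an implementation of QFM or QNFM.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine:
       y = w0 + sum_i w_i x_i + sum_{i<j} <V_i, V_j> x_i x_j.
    The pairwise term uses the O(n*k) identity
       0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ].
    QFM/QNFM replace the real embedding vectors V_i with quaternion-valued ones
    and the inner product with the Hamilton product (not shown here).
    """
    linear = w0 + x @ w
    s = x @ V                      # (k,) per-factor weighted sums
    s_sq = (x**2) @ (V**2)         # (k,) per-factor sums of squares
    pairwise = 0.5 * np.sum(s**2 - s_sq)
    return linear + pairwise

rng = np.random.default_rng(0)
n_features, k = 6, 3
x = rng.normal(size=n_features)            # one (dense) feature vector
w0, w = 0.1, rng.normal(size=n_features)
V = rng.normal(size=(n_features, k))       # factorized interaction embeddings
print(fm_predict(x, w0, w, V))
```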
arXiv Detail & Related papers (2021-04-05T00:02:36Z)
- Multi-task learning for electronic structure to predict and explore molecular potential energy surfaces [39.228041052681526]
We refine the OrbNet model to accurately predict energy, forces, and other response properties for molecules.
The model is end-to-end differentiable due to the derivation of analytic gradients for all electronic structure terms.
It is shown to be transferable across chemical space due to the use of domain-specific features.
arXiv Detail & Related papers (2020-11-05T06:48:46Z)
- Tensor network models of AdS/qCFT [69.6561021616688]
We introduce the notion of a quasiperiodic conformal field theory (qCFT).
We show that qCFT can be best understood as belonging to a paradigm of discrete holography.
arXiv Detail & Related papers (2020-04-08T18:00:05Z)
- Training Deep Energy-Based Models with f-Divergence Minimization [113.97274898282343]
Deep energy-based models (EBMs) are very flexible in distribution parametrization but computationally challenging.
We propose a general variational framework termed f-EBM to train EBMs using any desired f-divergence.
Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.
arXiv Detail & Related papers (2020-03-06T23:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including the listed content) and is not responsible for any consequences of its use.