Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing
- URL: http://arxiv.org/abs/2405.14253v1
- Date: Thu, 23 May 2024 07:31:20 GMT
- Title: Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing
- Authors: Viktor Zaverkin, Francesco Alesiani, Takashi Maruyama, Federico Errica, Henrik Christiansen, Makoto Takamoto, Nicolas Weber, Mathias Niepert
- Abstract summary: Machine-learned interatomic potentials achieve accuracy on par with ab initio and first-principles methods.
We introduce higher-rank irreducible Cartesian tensors as an alternative to spherical tensors.
We consistently observe on-par or better performance than that of state-of-the-art spherical models.
- Score: 23.754664894759234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to perform fast and accurate atomistic simulations is crucial for advancing the chemical sciences. By learning from high-quality data, machine-learned interatomic potentials achieve accuracy on par with ab initio and first-principles methods at a fraction of their computational cost. The success of machine-learned interatomic potentials arises from integrating inductive biases such as equivariance to group actions on an atomic system, e.g., equivariance to rotations and reflections. In particular, the field has notably advanced with the emergence of equivariant message-passing architectures. Most of these models represent an atomic system using spherical tensors, tensor products of which require complicated numerical coefficients and can be computationally demanding. This work introduces higher-rank irreducible Cartesian tensors as an alternative to spherical tensors, addressing the above limitations. We integrate irreducible Cartesian tensor products into message-passing neural networks and prove the equivariance of the resulting layers. Through empirical evaluations on various benchmark data sets, we consistently observe on-par or better performance than that of state-of-the-art spherical models.
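To make the central object concrete, below is a minimal NumPy sketch (not code from the paper) of the rank-2 case: a 3x3 Cartesian tensor splits into l=0 (trace), l=1 (antisymmetric), and l=2 (symmetric-traceless) irreducible parts, and each part transforms into itself under rotation. The paper's contribution concerns ranks beyond two; this only illustrates the principle.
```python
# Rank-2 irreducible Cartesian decomposition:
# T = (tr T / 3) I + (T - T^T)/2 + [(T + T^T)/2 - (tr T / 3) I],
# i.e. l = 0, 1, 2 parts that never mix under rotation.
import numpy as np

def irreducible_parts(T):
    """Split a 3x3 Cartesian tensor into its l=0, l=1, l=2 components."""
    trace_part = np.trace(T) / 3.0 * np.eye(3)      # l = 0 (scalar channel)
    antisym_part = 0.5 * (T - T.T)                  # l = 1 (vector channel)
    sym_traceless = 0.5 * (T + T.T) - trace_part    # l = 2
    return trace_part, antisym_part, sym_traceless

def random_rotation(rng):
    """Random proper rotation matrix via QR decomposition."""
    Q, Rdiag = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.diag(Rdiag))
    if np.linalg.det(Q) < 0:                        # force det = +1
        Q[:, 0] *= -1.0
    return Q

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))
R = random_rotation(rng)

# Decomposing the rotated tensor gives the rotated parts: no mixing of
# l-channels, which is what makes the decomposition "irreducible".
for part, part_of_rotated in zip(irreducible_parts(T),
                                 irreducible_parts(R @ T @ R.T)):
    assert np.allclose(R @ part @ R.T, part_of_rotated)
print("rank-2 irreducible decomposition commutes with rotation")
```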
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
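The results in that paper are analytical; as a small numerical companion (all sizes and constants below are hypothetical choices), this sketch fits high-dimensional ridge regression in closed form and compares training error with the exact generalization error for isotropic Gaussian data.
```python
# Closed-form ridge regression in the overparameterized regime (d > n),
# with the exact generalization error available for isotropic Gaussian
# inputs. All dimensions and constants are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, noise = 200, 400, 1e-1, 0.1              # overparameterized setup

w_star = rng.normal(size=d) / np.sqrt(d)            # teacher weights
X = rng.normal(size=(n, d))                         # isotropic Gaussian inputs
y = X @ w_star + noise * rng.normal(size=n)

# Ridge estimator: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

train_err = np.mean((X @ w - y) ** 2)
# For a fresh test point x ~ N(0, I) with fresh label noise:
# E[(x.(w - w*) - eps)^2] = ||w - w*||^2 + noise^2
test_err = np.sum((w - w_star) ** 2) + noise ** 2
print(f"train MSE {train_err:.4f}  test MSE {test_err:.4f}")
```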
- TensorNet: Cartesian Tensor Representations for Efficient Learning of Molecular Potentials [4.169915659794567]
We introduce TensorNet, an innovative O(3)-equivariant message-passing neural network architecture.
By using tensor atomic embeddings, feature mixing is simplified through matrix product operations.
Accurate prediction of vector and tensor molecular quantities, in addition to potential energies and forces, is also possible.
arXiv Detail & Related papers (2023-06-10T16:41:18Z)
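The matrix-product simplification mentioned above rests on a short piece of algebra: if rank-2 features transform as X -> R X R^T, then so does their matrix product, since R^T R = I. A minimal check of this identity (not TensorNet's implementation):
```python
# If rank-2 features transform as X -> Q X Q^T under an orthogonal Q,
# then (Q X Q^T)(Q Y Q^T) = Q (X Y) Q^T, so matrix-product feature
# mixing preserves O(3) equivariance.
import numpy as np

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

Q, Rd = np.linalg.qr(rng.normal(size=(3, 3)))       # random orthogonal matrix
Q *= np.sign(np.diag(Rd))

lhs = (Q @ X @ Q.T) @ (Q @ Y @ Q.T)                 # mix rotated features
rhs = Q @ (X @ Y) @ Q.T                             # rotate mixed features
assert np.allclose(lhs, rhs)
print("matrix products of rank-2 features are O(3)-equivariant")
```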
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
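A toy version of the pipeline this entry describes, with every function and constant a hypothetical stand-in: a small network is trained once to mimic a cheap "simulator", and an unknown parameter is then recovered from noisy "experimental" data by gradient descent through the frozen surrogate.
```python
# Stage 1: train a surrogate once to mimic a (stand-in) simulator.
# Stage 2: recover an unknown parameter from noisy data by differentiating
# through the frozen surrogate. All names and constants are hypothetical.
import torch

def simulator(theta, x):
    """Cheap stand-in for an expensive model-Hamiltonian calculation."""
    return torch.sin(theta * x)

surrogate = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(2000):                              # train the surrogate once
    theta = torch.rand(256, 1) * 2.0 + 0.5         # theta in [0.5, 2.5]
    xs = torch.rand(256, 1) * 3.0
    pred = surrogate(torch.cat([theta, xs], dim=1))
    loss = ((pred - simulator(theta, xs)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

x = torch.linspace(0.0, 3.0, 64)
theta_true = 1.7                                   # "unknown" parameter
data = simulator(torch.tensor(theta_true), x) + 0.01 * torch.randn(64)

surrogate.requires_grad_(False)                    # freeze the surrogate
theta_hat = torch.tensor([1.0], requires_grad=True)
opt2 = torch.optim.Adam([theta_hat], lr=5e-2)
for _ in range(300):                               # fit theta through the frozen net
    inp = torch.cat([theta_hat.expand(64, 1), x.unsqueeze(1)], dim=1)
    loss = ((surrogate(inp).squeeze(-1) - data) ** 2).mean()
    opt2.zero_grad(); loss.backward(); opt2.step()
print(f"true theta = {theta_true}, recovered ~ {theta_hat.item():.2f}")
```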
- Wigner kernels: body-ordered equivariant machine learning without a basis [0.0]
We propose a novel density-based method which involves computing "Wigner kernels".
Wigner kernels are fully equivariant and body-ordered kernels that can be computed iteratively with a cost that is independent of the radial-chemical basis.
We present several examples of the accuracy of models based on Wigner kernels in chemical applications.
arXiv Detail & Related papers (2023-03-07T18:34:55Z)
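Wigner kernels themselves are assembled by iterated Clebsch-Gordan products, which is beyond a short sketch. As a far simpler invariant stand-in for the notion of a body-ordered kernel, the example below builds a rotation-invariant 2-body overlap between two atomic environments and takes integer powers of it, a standard way to raise the effective body order of a kernel.
```python
# A simplified stand-in for body-ordered kernels (NOT the Wigner iteration):
# a rotation-invariant 2-body overlap between environments, raised to
# integer powers to increase the effective body order.
import numpy as np

def base_kernel(env_a, env_b, sigma=0.5):
    """Rotation-invariant 2-body overlap from neighbor distances only."""
    da = np.linalg.norm(env_a, axis=1)          # distances to central atom A
    db = np.linalg.norm(env_b, axis=1)          # distances to central atom B
    diff = da[:, None] - db[None, :]
    return float(np.exp(-0.5 * (diff / sigma) ** 2).mean())

rng = np.random.default_rng(2)
env_a = rng.normal(size=(5, 3))                 # neighbor positions around atom A
env_b = rng.normal(size=(5, 3))                 # neighbor positions around atom B

k1 = base_kernel(env_a, env_b)                  # invariant, in (0, 1]
for nu in (1, 2, 3):
    print(f"nu = {nu}: K^nu = {k1 ** nu:.4f}")
```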
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely, the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Learning Local Equivariant Representations for Large-Scale Atomistic Dynamics [0.6861083714313458]
Allegro is a strictly local equivariant deep learning interatomic potential.
It simultaneously exhibits excellent accuracy and scalability in parallel computation.
A single tensor product layer is shown to outperform existing deep message passing neural networks and transformers.
arXiv Detail & Related papers (2022-04-11T16:48:41Z)
- GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation [102.85440102147267]
We propose a novel generative model named GeoDiff for molecular conformation prediction.
We show that GeoDiff is superior or comparable to existing state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-06T09:47:01Z)
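To make "geometric diffusion model" concrete, here is the generic DDPM-style forward noising and noise-prediction loss applied to a toy set of 3D coordinates; the epsilon-network below is a hypothetical placeholder, and GeoDiff's actual roto-translation-equivariant parameterization is more involved.
```python
# Generic denoising-diffusion step on 3D coordinates: the standard DDPM
# forward process and noise-prediction loss only; a real conformation model
# would use an equivariant GNN over the molecular graph.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * eps, eps

x0 = torch.randn(8, 3)                          # toy "conformation": 8 atoms
eps_net = torch.nn.Linear(3, 3)                 # hypothetical placeholder network

t = torch.randint(0, T, ())
x_t, eps = forward_noise(x0, t)
loss = ((eps_net(x_t) - eps) ** 2).mean()       # predict the injected noise
loss.backward()
print(f"t = {int(t)}, denoising loss = {loss.item():.3f}")
```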
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
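The scalarization/vectorization pattern named in this entry is common across equivariant architectures: inner products of vector features give rotation-invariant scalars, an ordinary MLP processes them, and the outputs re-weight the original vectors. A generic sketch (not EVFN's exact layers):
```python
# Generic scalarization/vectorization layer: only rotation invariants enter
# the nonlinearity, so the layer stays equivariant by construction.
import torch

class ScalarizeVectorize(torch.nn.Module):
    def __init__(self, n_vec):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(n_vec * n_vec, 32), torch.nn.SiLU(),
            torch.nn.Linear(32, n_vec))

    def forward(self, V):                   # V: (n_vec, 3) vector features
        gram = V @ V.T                      # invariant scalars <v_i, v_j>
        weights = self.mlp(gram.flatten())  # MLP sees invariants only
        return weights[:, None] * V         # re-weighted, still equivariant

torch.manual_seed(0)
layer = ScalarizeVectorize(n_vec=4)
V = torch.randn(4, 3)

Q, _ = torch.linalg.qr(torch.randn(3, 3))  # random orthogonal matrix
out_of_rotated = layer(V @ Q.T)            # rotate inputs, then apply layer
rotated_output = layer(V) @ Q.T            # apply layer, then rotate outputs
print("equivariant:", torch.allclose(out_of_rotated, rotated_output, atol=1e-5))
```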
- Gaussian Moments as Physically Inspired Molecular Descriptors for Accurate and Scalable Machine Learning Potentials [0.0]
We propose a machine learning method for constructing high-dimensional potential energy surfaces based on feed-forward neural networks.
The accuracy of the developed approach in representing both chemical and configurational spaces is comparable to that of several established machine learning models.
arXiv Detail & Related papers (2021-09-15T16:46:46Z)
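The architecture this entry describes follows the familiar high-dimensional neural-network-potential pattern: an invariant per-atom descriptor feeds a shared feed-forward network whose per-atom energies are summed. The sketch below uses a trivial stand-in descriptor, not the paper's Gaussian-moment descriptors.
```python
# High-dimensional NN potential pattern: invariant per-atom descriptors,
# a shared atomic network, total energy as the sum of atomic energies,
# and forces from autograd. The descriptor is a toy stand-in.
import torch

def descriptor(pos, cutoff=5.0):
    """Toy rotation/translation-invariant per-atom radial moments."""
    diff = pos[:, None, :] - pos[None, :, :]
    d = (diff.pow(2).sum(-1) + 1e-12).sqrt()        # pairwise distances
    w = torch.exp(-d) * ((d < cutoff) & (d > 1e-5)).float()
    return torch.stack([w.sum(1), (w * d).sum(1), (w * d * d).sum(1)], dim=1)

atomic_net = torch.nn.Sequential(                   # one network, shared by all atoms
    torch.nn.Linear(3, 32), torch.nn.SiLU(), torch.nn.Linear(32, 1))

pos = torch.randn(10, 3, requires_grad=True)        # toy configuration
energy = atomic_net(descriptor(pos)).sum()          # E = sum of atomic energies
forces = -torch.autograd.grad(energy, pos)[0]       # F = -dE/dpos
print(f"E = {energy.item():.3f}, max |F| = {forces.norm(dim=1).max().item():.3f}")
```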
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of its stochasticity in that success is still unclear.
We show that heavy tails commonly arise in the optimized parameters as a consequence of multiplicative noise in the discrete-time dynamics.
A detailed analysis describes how key factors, including step size and data, influence this behavior, with consistent results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
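The mechanism in that entry can be seen in a one-line recurrence: x_{t+1} = a_t x_t + b_t with a random multiplicative factor a_t develops heavy-tailed stationary behavior even when a_t and b_t are light-tailed (a classical Kesten-type effect). A minimal simulation contrasting it with a purely additive recurrence:
```python
# Heavy tails from multiplicative noise: the chain x <- a*x + b with random
# a is stationary when E[log a] < 0, yet its stationary law has a polynomial
# tail; the additive chain x <- 0.9*x + b stays light-tailed.
import numpy as np

rng = np.random.default_rng(0)
steps, chains = 5000, 2000
x_mult = np.zeros(chains)
x_add = np.zeros(chains)
for _ in range(steps):
    a = np.exp(0.3 * rng.normal(size=chains) - 0.1)     # E[log a] = -0.1 < 0
    b = rng.normal(size=chains)
    x_mult = a * x_mult + b        # multiplicative + additive noise
    x_add = 0.9 * x_add + b        # additive noise only

for q in (0.5, 0.99, 0.9999):
    print(f"q={q}: |x| mult={np.quantile(np.abs(x_mult), q):9.2f}"
          f"  add={np.quantile(np.abs(x_add), q):6.2f}")
```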
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.