The Bigger the Better? Accurate Molecular Potential Energy Surfaces from Minimalist Neural Networks
- URL: http://arxiv.org/abs/2411.18121v1
- Date: Wed, 27 Nov 2024 08:01:21 GMT
- Title: The Bigger the Better? Accurate Molecular Potential Energy Surfaces from Minimalist Neural Networks
- Authors: Silvan Käser, Debasish Koner, Markus Meuwly
- Abstract summary: KerNN is a combined kernel/neural network-based approach to represent molecular PESs.
Compared to state-of-the-art neural network PESs, the number of learnable parameters of KerNN is significantly reduced.
KerNN shows excellent performance on test set statistics and observables including vibrational bands computed from classical and quantum simulations.
- Abstract: Atomistic simulations are a powerful tool for studying the dynamics of molecules, proteins, and materials on wide time and length scales. Their reliability and predictiveness, however, depend directly on the accuracy of the underlying potential energy surface (PES). Guided by the principle of parsimony, this work introduces KerNN, a combined kernel/neural network-based approach to represent molecular PESs. Compared to state-of-the-art neural network PESs, the number of learnable parameters of KerNN is significantly reduced. This speeds up training and evaluation times by several orders of magnitude while retaining high prediction accuracy. Importantly, using kernels as the features also improves the extrapolation capabilities of KerNN far beyond the coverage provided by the training data, which solves a general problem of NN-based PESs. KerNN applied to spectroscopy and reaction dynamics shows excellent performance on test set statistics and observables including vibrational bands computed from classical and quantum simulations.
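To make the architecture concrete, here is a minimal NumPy sketch of the KerNN idea: kernel similarities to a handful of reference geometries, rather than raw coordinates, serve as the inputs to a very small network. The Gaussian kernel, the inverse-distance descriptor, and all dimensions below are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def inverse_distances(coords):
    """Flattened inverse inter-atomic distances for one geometry (n_atoms, 3)."""
    n = len(coords)
    i, j = np.triu_indices(n, k=1)
    return 1.0 / np.linalg.norm(coords[i] - coords[j], axis=1)

def kernel_features(x, refs, sigma=1.0):
    """Similarity of descriptor x to each reference descriptor: the KerNN idea,
    kernels rather than raw coordinates are the NN inputs (Gaussian kernel assumed)."""
    return np.exp(-np.sum((refs - x) ** 2, axis=1) / (2.0 * sigma ** 2))

def tiny_mlp(feat, W1, b1, W2, b2):
    """One small hidden layer; the kernel features already encode similarity."""
    return np.tanh(feat @ W1 + b1) @ W2 + b2

# Toy setup: 3 atoms (3 pair distances), 4 reference structures, hidden width 8.
rng = np.random.default_rng(0)
refs = rng.normal(size=(4, 3))            # descriptors of reference geometries
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

geom = rng.normal(size=(3, 3))            # one query geometry
energy = tiny_mlp(kernel_features(inverse_distances(geom), refs), W1, b1, W2, b2)
print(energy)
```

Because the feature dimension equals the number of reference structures rather than growing with network depth or width, the parameter count stays tiny, which is what makes training and evaluation fast.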
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
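The core building block of an FNO is a spectral convolution; below is a minimal NumPy sketch of one such layer (forward pass only), acting on channels of observable values on a grid rather than a $2^n$-dimensional state. The channel count, mode cutoff, and ReLU choice are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def fourier_layer(u, R, W, n_modes):
    """One FNO layer: mix channels in Fourier space on the lowest n_modes,
    add a pointwise linear path, then apply a nonlinearity.
    u: (n_points, channels); R: complex (n_modes, ch, ch); W: (ch, ch)."""
    u_hat = np.fft.rfft(u, axis=0)                      # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = np.einsum("kio,ki->ko", R, u_hat[:n_modes])
    spectral = np.fft.irfft(out_hat, n=u.shape[0], axis=0)
    return np.maximum(spectral + u @ W, 0.0)            # ReLU

rng = np.random.default_rng(1)
n, c, k = 64, 4, 8
u = rng.normal(size=(n, c))        # e.g. observable trajectories on a time grid
R = rng.normal(size=(k, c, c)) + 1j * rng.normal(size=(k, c, c))
W = rng.normal(size=(c, c)) / np.sqrt(c)
v = fourier_layer(u, R, W, k)      # same shape as u: (64, 4)
```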
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Denoise Pretraining on Nonequilibrium Molecules for Accurate and Transferable Neural Potentials [8.048439531116367]
We propose denoise pretraining on nonequilibrium molecular conformations to achieve more accurate and transferable GNN potential predictions.
Our models pretrained on small molecules demonstrate remarkable transferability, improving performance when fine-tuned on diverse molecular systems.
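The pretraining objective can be sketched in a few lines: perturb conformations with Gaussian noise and train the model to recover the noise it was given. The noise scale and the linear stand-in for the GNN below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def denoise_pretrain_loss(model, coords, sigma=0.05, rng=None):
    """Self-supervised objective: perturb atomic coordinates and score the
    model on recovering the added noise (coordinate denoising).
    coords: (n_atoms, 3); model: callable mapping noisy coords -> (n_atoms, 3)."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(scale=sigma, size=coords.shape)
    pred = model(coords + noise)
    return np.mean((pred - noise) ** 2)

# Toy stand-in for a GNN: a fixed linear map on coordinates (illustrative only).
rng = np.random.default_rng(2)
A = 0.1 * rng.normal(size=(3, 3))
loss = denoise_pretrain_loss(lambda x: x @ A, rng.normal(size=(5, 3)), rng=rng)
print(loss)
```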
arXiv Detail & Related papers (2023-03-03T21:15:22Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
Study of the NTK has focused on typical neural network architectures but remains incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
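This equivalence means the infinite-width network's predictions can be computed in closed form by kernel regression with the NTK. Below is a minimal NumPy sketch using the standard closed-form NTK of a two-layer ReLU network, a generic example rather than the NNs-Hp kernel derived in the paper.

```python
import numpy as np

def relu_ntk(X1, X2):
    """Infinite-width NTK of a two-layer ReLU network (standard closed form,
    up to constant factors). X1: (n1, d); X2: (n2, d)."""
    n1 = np.linalg.norm(X1, axis=1)[:, None]
    n2 = np.linalg.norm(X2, axis=1)[None, :]
    u = np.clip(X1 @ X2.T / (n1 * n2), -1.0, 1.0)
    k0 = (np.pi - np.arccos(u)) / np.pi
    k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u ** 2)) / np.pi
    return n1 * n2 * k1 + (X1 @ X2.T) * k0

def ntk_predict(X_train, y_train, X_test, ridge=1e-8):
    """Kernel regression with the NTK: the infinite-width training limit."""
    K = relu_ntk(X_train, X_train) + ridge * np.eye(len(X_train))
    return relu_ntk(X_test, X_train) @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(3)
X, y = rng.normal(size=(20, 2)), rng.normal(size=20)
print(ntk_predict(X, y, rng.normal(size=(5, 2))))
```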
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Exploring accurate potential energy surfaces via integrating variational quantum eigensolver with machine learning [8.19234058079321]
We show in this work that variational quantum algorithms can be integrated with machine learning (ML) techniques.
We encode the molecular geometry information into a deep neural network (DNN) for representing parameters of the variational quantum eigensolver (VQE).
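The pipeline can be sketched as a small network that maps a geometry (here a single bond length) to ansatz parameters, whose energy is then evaluated as a Hamiltonian expectation value. The two-qubit ansatz, toy Hamiltonian, and untrained weights below are illustrative assumptions, not the paper's encoding.

```python
import numpy as np

def ry(t):
    """Single-qubit Y rotation."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)

def vqe_energy(theta, H):
    """Energy of a tiny hardware-efficient ansatz: CNOT (Ry x Ry) |00>."""
    psi = CNOT @ np.kron(ry(theta[0]), ry(theta[1])) @ np.array([1.0, 0, 0, 0])
    return psi @ H @ psi

def angles_from_geometry(r, W1, b1, W2, b2):
    """The ML step: a small DNN maps geometry (a bond length) to circuit parameters."""
    return np.tanh(np.array([r]) @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(4)
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)   # untrained, for illustration
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
Z = np.diag([1.0, -1.0])
H = np.kron(Z, Z) + 0.5 * np.kron(Z, np.eye(2))  # toy 2-qubit Hamiltonian

for r in (0.8, 1.0, 1.2):                        # scan the "PES" over bond lengths
    print(r, vqe_energy(angles_from_geometry(r, W1, b1, W2, b2), H))
```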
arXiv Detail & Related papers (2022-06-08T01:43:56Z)
- Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Physics-enhanced Neural Networks in the Small Data Regime [0.0]
We show that by considering the actual energy level as a regularization term during training, the results can be further improved.
Especially in the case where only small amounts of data are available, these improvements can significantly enhance the predictive capability.
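The regularization idea can be sketched directly: penalize deviations of the predicted trajectory from the known energy level alongside the usual data loss. The harmonic-oscillator energy function and the weight `lam` below are illustrative assumptions.

```python
import numpy as np

def energy(state):
    """Total energy of a unit harmonic oscillator; state = (q, p)."""
    q, p = state[..., 0], state[..., 1]
    return 0.5 * (q ** 2 + p ** 2)

def physics_loss(pred_states, true_states, e0, lam=0.1):
    """Data loss plus an energy-conservation penalty: deviations of the
    predicted trajectory from the known energy level e0 are punished."""
    mse = np.mean((pred_states - true_states) ** 2)
    return mse + lam * np.mean((energy(pred_states) - e0) ** 2)

rng = np.random.default_rng(5)
t = np.linspace(0, 2 * np.pi, 50)
true = np.stack([np.cos(t), -np.sin(t)], axis=-1)       # exact orbit, E = 0.5
pred = true + rng.normal(scale=0.05, size=true.shape)   # stand-in for NN output
print(physics_loss(pred, true, e0=0.5))
```

With little data, the energy term supplies information the samples cannot, which is why the gains are largest in the small-data regime.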
arXiv Detail & Related papers (2021-11-19T17:21:14Z)
- Fast and Sample-Efficient Interatomic Neural Network Potentials for Molecules and Materials Based on Gaussian Moments [3.1829446824051195]
We present an improved NN architecture based on the previous GM-NN model.
The improved methodology is a prerequisite for training-heavy workflows such as active learning or learning-on-the-fly.
arXiv Detail & Related papers (2021-09-20T14:23:34Z)
- Parsimonious neural networks learn interpretable physical laws [77.34726150561087]
We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony.
The power and versatility of the approach are demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties.
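A minimal sketch of the idea: mutate an architecture parameter (here just the hidden width) and accept the change when a fitness combining accuracy with a parsimony penalty improves. The random-feature training and the single mutated parameter are simplifications of the genetic-algorithm search over richer model spaces that PNNs use.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=200)    # toy law to recover

def fitness(width, lam=1e-3):
    """Accuracy term plus a parsimony penalty on parameter count.
    Stochastic here (fresh random features each call); illustrative only."""
    W = rng.normal(size=(1, width))                       # random hidden layer
    Hmat = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(Hmat, y, rcond=None)       # fit output weights
    err = np.mean((Hmat @ beta - y) ** 2)
    return err + lam * (W.size + beta.size)

# Evolutionary loop: mutate the architecture, keep it if fitness improves.
width = 32
for _ in range(50):
    cand = max(1, width + rng.integers(-4, 5))
    if fitness(cand) < fitness(width):
        width = cand
print("selected hidden width:", width)
```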
arXiv Detail & Related papers (2020-05-08T16:15:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.