Spline-based neural network interatomic potentials: blending classical
and machine learning models
- URL: http://arxiv.org/abs/2310.02904v1
- Date: Wed, 4 Oct 2023 15:42:26 GMT
- Title: Spline-based neural network interatomic potentials: blending classical
and machine learning models
- Authors: Joshua A. Vita, Dallas R. Trinkle
- Abstract summary: We introduce a new MLIP framework which blends the simplicity of spline-based MEAM potentials with the flexibility of a neural network architecture.
We demonstrate how this framework can be used to probe the boundary between classical and ML IPs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While machine learning (ML) interatomic potentials (IPs) are able to achieve
accuracies nearing the level of noise inherent in the first-principles data to
which they are trained, it remains to be shown if their increased complexities
are strictly necessary for constructing high-quality IPs. In this work, we
introduce a new MLIP framework which blends the simplicity of spline-based MEAM
(s-MEAM) potentials with the flexibility of a neural network (NN) architecture.
The proposed framework, which we call the spline-based neural network potential
(s-NNP), is a simplified version of the traditional NNP that can be used to
describe complex datasets in a computationally efficient manner. We demonstrate
how this framework can be used to probe the boundary between classical and ML
IPs, highlighting the benefits of key architectural changes. Furthermore, we
show that using spline filters for encoding atomic environments results in a
readily interpreted embedding layer which can be coupled with modifications to
the NN to incorporate expected physical behaviors and improve overall
interpretability. Finally, we test the flexibility of the spline filters,
observing that they can be shared across multiple chemical systems in order to
provide a convenient reference point from which to begin performing
cross-system analyses.
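The abstract gives no implementation details; purely as an illustration of the general idea (spline "filters" that encode each atom's radial environment, feeding a small neural network that outputs a per-atom energy), here is a minimal sketch. The cutoff, knot count, filter count, layer sizes, and helper names are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch of the s-NNP idea: spline "filters" encode each atom's radial
# environment, and a small neural network maps that encoding to a per-atom energy.
# Cutoff, knot count, filter count, and layer sizes are illustrative assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

r_cut = 5.0                                  # radial cutoff (assumed)
n_filters, n_knots = 8, 10
knots = np.linspace(0.5, r_cut, n_knots)

# Each filter is a cubic spline over distance; its knot values play the role of
# trainable parameters (here just randomly initialized).
filters = [CubicSpline(knots, rng.normal(size=n_knots)) for _ in range(n_filters)]

# Tiny MLP from the n_filters-dimensional descriptor to a scalar per-atom energy.
W1, b1 = 0.1 * rng.normal(size=(n_filters, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=16), 0.0

def atom_energy(neighbor_distances):
    """Per-atom energy from distances to neighbors within the cutoff."""
    d = neighbor_distances[neighbor_distances < r_cut]
    descriptor = np.array([f(d).sum() for f in filters])   # sum filter responses over neighbors
    hidden = np.tanh(descriptor @ W1 + b1)
    return float(hidden @ W2 + b2)

def total_energy(positions):
    """Total energy as a sum of per-atom energies for positions of shape (N, 3)."""
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return sum(atom_energy(np.delete(dists[i], i)) for i in range(len(positions)))

print(total_energy(rng.uniform(0.0, 6.0, size=(5, 3))))    # toy 5-atom configuration
```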
Related papers
- Parallel Proportional Fusion of Spiking Quantum Neural Network for Optimizing Image Classification [10.069224006497162]
We introduce a novel architecture termed Parallel Proportional Fusion of Quantum and Spiking Neural Networks (PPF-QSNN).
The proposed PPF-QSNN outperforms both the existing spiking neural network and the serial quantum neural network across metrics such as accuracy, loss, and robustness.
This study lays the groundwork for the advancement and application of quantum advantage in artificial intelligent computations.
arXiv Detail & Related papers (2024-04-01T10:35:35Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
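As generic background on the spiking dynamics such toolkits simulate, the sketch below steps a single leaky integrate-and-fire (LIF) neuron over discrete time. It is not SpikingJelly's API, and the constants are assumed values.

```python
# Generic leaky integrate-and-fire (LIF) dynamics of the kind SNN toolkits such as
# SpikingJelly simulate. This is NOT SpikingJelly's API; constants are assumed.
import numpy as np

def lif_simulate(input_current, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Step a single LIF neuron over time; return its binary spike train."""
    v = v_reset
    spikes = []
    for x in input_current:
        v = v + (x - (v - v_reset)) / tau     # leaky integration toward the input
        if v >= v_threshold:                  # fire and hard-reset on threshold crossing
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

print(lif_simulate(np.full(20, 1.5)))          # constant suprathreshold drive -> periodic spikes
```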
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Equivariant Matrix Function Neural Networks [1.8717045355288808]
We introduce Matrix Function Neural Networks (MFNs), a novel architecture that parameterizes non-local interactions through analytic matrix equivariant functions.
MFNs are able to capture intricate non-local interactions in quantum systems, paving the way to new state-of-the-art force fields.
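For readers unfamiliar with the term, a "matrix function" applies an analytic scalar function to a matrix, for example through its eigendecomposition. The sketch below shows only this generic ingredient, not the MFN architecture itself; the matrix and the function chosen are assumptions for illustration.

```python
# Generic background: an analytic function f applied to a symmetric matrix H via its
# eigendecomposition, f(H) = U f(Lambda) U^T. This shows only the "matrix function"
# ingredient, not the MFN architecture from the paper.
import numpy as np

def matrix_function(H, f):
    """Apply the scalar function f to the eigenvalues of symmetric H."""
    eigvals, eigvecs = np.linalg.eigh(H)
    return eigvecs @ np.diag(f(eigvals)) @ eigvecs.T

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = 0.5 * (A + A.T)                            # a symmetric, Hamiltonian-like toy matrix

# A resolvent-style function couples all entries of H, i.e. it is non-local with
# respect to the sparsity/graph structure of H.
print(matrix_function(H, lambda x: 1.0 / (x ** 2 + 1.0)))
```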
arXiv Detail & Related papers (2023-10-16T14:17:00Z)
- Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNN) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Generalized Approach to Matched Filtering using Neural Networks [4.535489275919893]
We make a key observation on the relationship between emerging deep learning methods and traditional techniques:
matched filtering is formally equivalent to a particular neural network.
We show that the proposed neural network architecture can outperform matched filtering.
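The stated equivalence can be made concrete: matched filtering is a cross-correlation of the data with a template, which is exactly the computation a 1-D convolutional layer performs when its kernel is fixed to the time-reversed template. A minimal sketch, with an assumed toy template, offset, and noise level:

```python
# Matched filtering as cross-correlation with a template -- the same computation a 1-D
# convolutional layer performs when its kernel is fixed to the time-reversed template.
# Template shape, offset, and noise level are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)

template = np.sin(2 * np.pi * np.linspace(0, 2, 64)) * np.hanning(64)
data = rng.normal(scale=0.5, size=512)
data[200:264] += template                       # bury the template in noise at offset 200

matched = np.correlate(data, template, mode="valid")          # matched-filter output
conv = np.convolve(data, template[::-1], mode="valid")        # "conv layer" with fixed kernel

print(np.allclose(matched, conv))               # True: the two computations agree
print(int(np.argmax(matched)))                  # peak near the true offset of 200
```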
arXiv Detail & Related papers (2021-04-08T17:59:07Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
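As generic background on the continuous-limit view (not the paper's specific framework), a width-m two-layer network with mean-field scaling is an empirical average over neuron parameters and approaches an integral against their distribution as the width grows. A small sketch with assumed parameter distributions:

```python
# Generic illustration of the continuous-limit view: a width-m two-layer network with
# mean-field scaling, f_m(x) = (1/m) * sum_j a_j * relu(<w_j, x>), is an empirical
# average over neuron parameters (a_j, w_j) and approaches an integral against their
# distribution as m grows. Distributions below are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)

def two_layer(width):
    a = rng.normal(loc=1.0, size=width)             # output weights drawn i.i.d.
    W = rng.normal(size=(width, x.size))            # hidden weights drawn i.i.d.
    return np.mean(a * np.maximum(W @ x, 0.0))      # mean-field (1/m) scaling

# The output concentrates as the width grows, approaching the integral
# E[a * relu(<w, x>)] = ||x|| / sqrt(2*pi) for this choice of distributions.
for m in (10, 1_000, 100_000):
    print(m, round(float(two_layer(m)), 4))
print(round(float(np.linalg.norm(x) / np.sqrt(2 * np.pi)), 4))
```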
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
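The min-max formulation can be illustrated generically: two small networks, one descending and one ascending a shared objective, updated by alternating gradient steps. The sketch below uses an assumed toy moment condition and is not the paper's SEM estimator.

```python
# Generic gradient descent-ascent where both players are small neural networks; this
# illustrates the adversarial min-max formulation only and is not the paper's SEM
# estimator. The toy moment condition, data, and sizes are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

f = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))   # minimizing player
g = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))   # maximizing player

opt_f = torch.optim.Adam(f.parameters(), lr=1e-2)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-2)

x = torch.linspace(-1.0, 1.0, 128).unsqueeze(1)
y = 2.0 * x + 0.1 * torch.randn_like(x)         # toy "structural" relation y = 2x + noise

for step in range(1000):
    # Lagrangian-style objective: g probes the residual of the moment condition
    # E[(y - f(x)) * g(x)] = 0, with a quadratic regularizer on g.
    loss = ((y - f(x)) * g(x)).mean() - 0.5 * (g(x) ** 2).mean()

    opt_f.zero_grad()
    opt_g.zero_grad()
    loss.backward()
    opt_f.step()                                # f takes a descent step
    for p in g.parameters():                    # g takes an ascent step (flip its gradients)
        p.grad = -p.grad
    opt_g.step()

print(f(torch.tensor([[0.5]])).item())          # should approach 2 * 0.5 = 1.0
```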
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Eigen component analysis: A quantum theory incorporated machine learning technique to find linearly maximum separable components [0.0]
In quantum mechanics, a state is the superposition of multiple eigenstates.
We propose eigen component analysis (ECA), an interpretable linear learning model.
ECA incorporates principles of quantum mechanics into algorithm design for feature extraction, classification, dictionary learning, and deep learning.
arXiv Detail & Related papers (2020-03-23T12:02:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.