DeePKS+ABACUS as a Bridge between Expensive Quantum Mechanical Models
and Machine Learning Potentials
- URL: http://arxiv.org/abs/2206.10093v1
- Date: Tue, 21 Jun 2022 03:24:18 GMT
- Title: DeePKS+ABACUS as a Bridge between Expensive Quantum Mechanical Models
and Machine Learning Potentials
- Authors: Wenfei Li, Qi Ou, Yixiao Chen, Yu Cao, Renxi Liu, Chunyi Zhang, Daye
Zheng, Chun Cai, Xifan Wu, Han Wang, Mohan Chen, Linfeng Zhang
- Abstract summary: Deep Kohn-Sham (DeePKS) is a machine learning (ML) based density functional theory (DFT) model.
DeePKS offers closely matched energies and forces compared with high-level quantum mechanical (QM) methods.
One can generate a decent amount of high-accuracy QM data to train a DeePKS model, and then use the DeePKS model to label a much larger number of configurations to train an ML potential.
- Score: 9.982820888454958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the development of machine learning (ML) potentials has made it
possible to perform large-scale and long-time molecular simulations with the
accuracy of quantum mechanical (QM) models. However, for high-level QM methods,
such as density functional theory (DFT) at the meta-GGA level and/or with exact
exchange, quantum Monte Carlo, etc., generating a sufficient amount of data for
training an ML potential has remained computationally challenging due to their
high cost. In this work, we demonstrate that this issue can be largely
alleviated with Deep Kohn-Sham (DeePKS), an ML-based DFT model. DeePKS employs a
computationally efficient neural network-based functional model to construct a
correction term added upon a cheap DFT model. Upon training, DeePKS offers
closely matched energies and forces compared with the high-level QM method, but
the amount of training data it requires is orders of magnitude smaller than
that needed to train a reliable ML potential. As such, DeePKS can serve as a
bridge between expensive QM models and ML potentials: one can generate a decent
amount of high-accuracy QM data to train a DeePKS model, and then use the
DeePKS model to label a much larger number of configurations on which to train
an ML potential. This scheme for periodic systems is implemented in the
open-source DFT package ABACUS and is ready for use in various applications.
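To make the bridging scheme concrete, here is a minimal Python sketch of the data flow. Every function in it is a hypothetical stand-in, not the actual ABACUS or DeePKS-kit API, and the numbers are illustrative only.

    # Sketch of the QM -> DeePKS -> ML-potential bridge. All functions are
    # placeholders that mimic the data flow, not real physics and not the
    # actual ABACUS / DeePKS-kit interfaces.

    def run_high_level_qm(config):
        # Stand-in for an expensive high-level calculation (meta-GGA or
        # hybrid DFT, quantum Monte Carlo, ...).
        return {"energy": 0.0, "forces": [(0.0, 0.0, 0.0)] * len(config)}

    def train_deepks(configs, labels):
        # Stand-in: fit a neural-network correction on top of a cheap
        # baseline DFT model, yielding an inexpensive high-accuracy labeler.
        return lambda c: {"energy": 0.0, "forces": [(0.0, 0.0, 0.0)] * len(c)}

    def train_ml_potential(configs, labels):
        # Stand-in: fit an ML potential (e.g. Deep Potential) to the data.
        return lambda c: 0.0

    # Toy configurations: a two-atom system repeated many times.
    configurations = [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.96)]] * 10_000

    # 1) Label a small subset with the expensive high-level QM method.
    small = configurations[:100]          # orders of magnitude fewer
    qm_labels = [run_high_level_qm(c) for c in small]

    # 2) Train DeePKS on that small high-accuracy dataset.
    deepks = train_deepks(small, qm_labels)

    # 3) Use the cheap DeePKS model to label the full configuration set.
    big_labels = [deepks(c) for c in configurations]

    # 4) Train the ML potential on the large DeePKS-labeled dataset.
    potential = train_ml_potential(configurations, big_labels)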
Related papers
- Multi-task learning for molecular electronic structure approaching coupled-cluster accuracy [9.81014501502049]
We develop a unified machine learning method for electronic structures of organic molecules using the gold-standard CCSD(T) calculations as training data.
Tested on hydrocarbon molecules, our model outperforms DFT with widely used hybrid and double-hybrid functionals in both computational cost and prediction accuracy for various quantum chemical properties.
arXiv Detail & Related papers (2024-05-09T19:51:27Z)
- On Optimizing Hyperparameters for Quantum Neural Networks [0.5999777817331317]
Current state-of-the-art Machine Learning models require weeks for training, which is associated with an enormous $CO_2$ footprint.
Quantum Computing, and specifically Quantum Machine Learning (QML), can offer significant theoretical speed-ups and enhanced power.
arXiv Detail & Related papers (2024-03-27T13:59:09Z)
- QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
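As background for the kernel side of this construction, below is a minimal PennyLane sketch of a quantum kernel evaluation of the kind QKM-based models build on. The angle embedding is an assumed choice for illustration, not the circuit from the paper.

    # Quantum kernel as a state-overlap measurement: embed x1, un-embed x2,
    # and read off the probability of the all-zeros state.
    # Assumes PennyLane is installed (pip install pennylane).
    import pennylane as qml
    import numpy as np

    n_wires = 4
    dev = qml.device("default.qubit", wires=n_wires)

    @qml.qnode(dev)
    def overlap_circuit(x1, x2):
        qml.AngleEmbedding(x1, wires=range(n_wires))
        qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_wires))
        return qml.probs(wires=range(n_wires))

    def quantum_kernel(x1, x2):
        # P(|0...0>) equals |<phi(x2)|phi(x1)>|^2, the kernel value.
        return overlap_circuit(x1, x2)[0]

    x_a = np.array([0.1, 0.4, 0.7, 1.0])
    x_b = np.array([0.1, 0.5, 0.6, 1.1])
    print(quantum_kernel(x_a, x_a))   # ~1.0 for identical inputs
    print(quantum_kernel(x_a, x_b))   # < 1.0 as inputs differ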
arXiv Detail & Related papers (2023-08-25T15:08:19Z)
- Electronic Structure Prediction of Multi-million Atom Systems Through Uncertainty Quantification Enabled Transfer Learning [5.4875371069660925]
Ground state electron density -- obtainable using Kohn-Sham Density Functional Theory (KS-DFT) simulations -- contains a wealth of material information.
However, the computational expense of KS-DFT scales cubically with system size which tends to stymie training data generation.
Here, we address this fundamental challenge by employing transfer learning to leverage the multi-scale nature of the training data.
arXiv Detail & Related papers (2023-08-24T21:41:29Z)
- Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage.
arXiv Detail & Related papers (2023-05-18T03:08:18Z)
- NeuralNEB -- Neural Networks can find Reaction Paths Fast [7.7365628406567675]
Quantum mechanical methods like Density Functional Theory (DFT) are used with great success alongside efficient search algorithms for studying kinetics of reactive systems.
Machine Learning (ML) models have turned out to be excellent emulators of small molecule DFT calculations and could possibly replace DFT in such tasks.
In this paper we train state-of-the-art equivariant Graph Neural Network (GNN)-based models on around 10,000 elementary reactions from the Transition1x dataset.
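For context, the search being accelerated is the nudged elastic band (NEB) method. Below is a minimal NumPy sketch of one NEB force evaluation on a made-up 2D potential, with the gradient call standing in for DFT (or GNN-emulated) forces.

    # One NEB force evaluation: true force perpendicular to the path plus a
    # spring force along it. The toy potential is illustrative only; in
    # NeuralNEB the gradient would come from a trained GNN instead of DFT.
    import numpy as np

    def potential_grad(x):
        # Toy double-well along x[0]; stand-in for DFT/GNN forces.
        return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

    def neb_forces(images, k=1.0):
        # images: (n_images, dim) points along the path; endpoints held fixed.
        forces = np.zeros_like(images)
        for i in range(1, len(images) - 1):
            tau = images[i + 1] - images[i - 1]
            tau /= np.linalg.norm(tau)             # local path tangent
            g = potential_grad(images[i])
            f_perp = -(g - np.dot(g, tau) * tau)   # true force, perpendicular
            # Spring force keeps images evenly spaced along the tangent.
            f_par = k * np.dot((images[i + 1] - images[i])
                               - (images[i] - images[i - 1]), tau) * tau
            forces[i] = f_perp + f_par
        return forces

    # Straight guess between the two minima, slightly bent, then one step.
    path = np.linspace([-1.0, 0.0], [1.0, 0.0], 7)
    path[:, 1] += 0.1 * np.sin(np.linspace(0.0, np.pi, 7))
    path += 0.05 * neb_forces(path)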
arXiv Detail & Related papers (2022-07-20T15:29:45Z)
- QSAN: A Near-term Achievable Quantum Self-Attention Network [73.15524926159702]
The Self-Attention Mechanism (SAM) is good at capturing the internal connections of features.
A novel Quantum Self-Attention Network (QSAN) is proposed for image classification tasks on near-term quantum devices.
arXiv Detail & Related papers (2022-07-14T12:22:51Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
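To illustrate the idea (not the paper's specific algorithm), here is a small NumPy sketch of per-layer mixed-precision uniform quantization, where hypothetical bit-widths are assigned per layer according to sensitivity.

    # Per-layer mixed-precision quantization: more sensitive layers keep more
    # bits. The bit assignments below are illustrative assumptions.
    import numpy as np

    def quantize_uniform(w, n_bits):
        # Symmetric uniform quantizer mapping weights onto 2^n_bits - 1 levels.
        scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
        return np.round(w / scale) * scale

    rng = np.random.default_rng(0)
    layers = {
        "embedding": rng.normal(size=(100, 32)),
        "lstm": rng.normal(size=(128, 64)),
        "output": rng.normal(size=(32, 100)),
    }
    bit_widths = {"embedding": 8, "lstm": 4, "output": 8}  # hypothetical

    for name, w in layers.items():
        w_q = quantize_uniform(w, bit_widths[name])
        mse = np.mean((w - w_q) ** 2)
        print(f"{name}: {bit_widths[name]}-bit, quantization MSE {mse:.2e}")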
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Subtleties in the trainability of quantum machine learning models [0.0]
We show that gradient scaling results for Variational Quantum Algorithms can be applied to study the gradient scaling of Quantum Machine Learning models.
Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML.
arXiv Detail & Related papers (2021-10-27T20:28:53Z)
- Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
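The zeroth-order ingredient can be sketched in a few lines: a two-point gradient estimate built purely from input-output queries. The loss below is a toy stand-in for the black-box model's responses, not the paper's setup.

    # Two-point zeroth-order gradient estimation: no access to the model's
    # internals, only to its scalar loss on perturbed inputs.
    import numpy as np

    rng = np.random.default_rng(0)

    def black_box_loss(theta):
        # Toy stand-in for query -> loss via the black-box model's outputs.
        return np.sum((theta - 1.0) ** 2)

    def zo_gradient(theta, q=20, mu=1e-2):
        # Average the two-point estimator over q random directions.
        grad = np.zeros_like(theta)
        for _ in range(q):
            u = rng.normal(size=theta.shape)
            diff = black_box_loss(theta + mu * u) - black_box_loss(theta - mu * u)
            grad += diff / (2.0 * mu) * u
        return grad / q

    theta = rng.normal(size=8)       # reprogramming ("program") parameters
    for _ in range(200):
        theta -= 0.05 * zo_gradient(theta)
    print(black_box_loss(theta))     # approaches 0 as theta -> 1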
arXiv Detail & Related papers (2020-07-17T01:52:34Z)