Efficient and Equivariant Graph Networks for Predicting Quantum
Hamiltonian
- URL: http://arxiv.org/abs/2306.04922v2
- Date: Wed, 8 Nov 2023 17:43:17 GMT
- Title: Efficient and Equivariant Graph Networks for Predicting Quantum
Hamiltonian
- Authors: Haiyang Yu, Zhao Xu, Xiaofeng Qian, Xiaoning Qian, Shuiwang Ji
- Abstract summary: We propose an SE(3)-equivariant network, named QHNet, that achieves both efficiency and equivariance.
Our key advance lies in the innovative design of the QHNet architecture, which not only obeys the underlying symmetries but also reduces the number of tensor products by 92%.
Experimental results show that QHNet achieves performance comparable to state-of-the-art methods at a significantly faster speed.
- Score: 72.57870177599492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the prediction of the Hamiltonian matrix, which finds use in
quantum chemistry and condensed matter physics. Efficiency and equivariance are
two important, but conflicting, factors. In this work, we propose an
SE(3)-equivariant network, named QHNet, that achieves both efficiency and
equivariance. Our key advance lies in the innovative design of the QHNet
architecture, which not only obeys the underlying symmetries but also reduces
the number of tensor products by 92\%. In addition, QHNet prevents
the exponential growth of channel dimension when more atom types are involved.
We perform experiments on MD17 datasets, including four molecular systems.
Experimental results show that QHNet achieves performance comparable to
state-of-the-art methods at a significantly faster speed. Moreover, QHNet
consumes 50\% less memory due to its streamlined architecture. Our code
is publicly available as part of the AIRS library
(\url{https://github.com/divelab/AIRS}).
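The equivariance property the abstract refers to can be illustrated with a toy example (this is our own minimal sketch, not QHNet's actual architecture): a layer that mixes 3D vector features with scalar weights commutes with any rotation R, so f(Rx) = R f(x) holds by construction.

```python
# Toy illustration of rotation equivariance, the core constraint behind
# SE(3)-equivariant networks such as QHNet. Mixing vector channels with
# scalar weights never touches the spatial axis, so rotating the input
# and applying the layer commute. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def equivariant_mix(x, W):
    """x: (channels, 3) vector features; W: (out_ch, channels) scalar weights.
    Channel mixing with scalars is rotation-equivariant by construction."""
    return W @ x  # (out_ch, 3)

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:  # force a proper rotation (det = +1)
        Q[:, 0] *= -1
    return Q

x = rng.normal(size=(4, 3))  # 4 vector channels
W = rng.normal(size=(2, 4))  # mix 4 channels down to 2
R = random_rotation(rng)

rotate_then_mix = equivariant_mix(x @ R.T, W)
mix_then_rotate = equivariant_mix(x, W) @ R.T
assert np.allclose(rotate_then_mix, mix_then_rotate)  # f(Rx) = R f(x)
```

Real equivariant architectures build richer operations (e.g. tensor products between irreducible representations); QHNet's claimed efficiency gain comes from reducing how many such tensor products are needed.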
Related papers
- GHN-Q: Parameter Prediction for Unseen Quantized Convolutional
Architectures via Graph Hypernetworks [80.29667394618625]
We conduct the first-ever study exploring the use of graph hypernetworks for predicting parameters of unseen quantized CNN architectures.
We focus on a reduced CNN search space and find that GHN-Q can in fact predict quantization-robust parameters for various 8-bit quantized CNNs.
arXiv Detail & Related papers (2022-08-26T08:00:02Z)
- ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations [86.41674945012369]
We develop a scalable and expressive Graph Neural Network model, ForceNet, to approximate atomic forces.
Our proposed ForceNet is able to predict atomic forces more accurately than state-of-the-art physics-based GNNs.
arXiv Detail & Related papers (2021-03-02T03:09:06Z)
- Absence of Barren Plateaus in Quantum Convolutional Neural Networks [0.0]
Quantum Convolutional Neural Networks (QCNNs) have been proposed.
We rigorously analyze the gradient scaling for the parameters in the QCNN architecture.
arXiv Detail & Related papers (2020-11-05T16:46:13Z)
- Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search [112.05977301976613]
We propose to combine Network Architecture Search methods with quantization to enjoy the merits of both.
We first propose the joint training of architecture and quantization with a shared step size to acquire a large number of quantized models.
Then a bit-inheritance scheme is introduced to transfer the quantized models to the lower bit, which further reduces the time cost and improves the quantization accuracy.
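The shared step size mentioned above is the key parameter in quantization-aware training schemes. As a hedged illustration (the function and values are ours, not taken from the paper), "fake" uniform quantization snaps weights to a signed integer grid scaled by a single step size s, while keeping the tensor in floating point for training:

```python
# Sketch of fake quantization with a shared step size, the standard
# building block of quantization-aware training. Details are illustrative,
# not the paper's exact scheme.
import numpy as np

def fake_quantize(w, s, n_bits=8):
    """Quantize w to a signed n_bits grid with step size s, then
    de-quantize, so downstream layers see quantized values while
    gradients can still flow through a float tensor."""
    qmin = -(2 ** (n_bits - 1))      # e.g. -8 for 4 bits
    qmax = 2 ** (n_bits - 1) - 1     # e.g. +7 for 4 bits
    q = np.clip(np.round(w / s), qmin, qmax)
    return q * s

w = np.array([0.07, -0.26, 1.9, -3.0])
print(fake_quantize(w, s=0.1, n_bits=4))  # -> [ 0.1 -0.3  0.7 -0.8]
```

Sharing one step size across many candidate architectures is what lets a single joint training pass produce a large number of quantized models; the bit-inheritance step then reuses these values when moving to lower bitwidths.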
arXiv Detail & Related papers (2020-10-09T03:52:16Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of quantum neural networks (QNNs) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by the QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- A Co-Design Framework of Neural Networks and Quantum Circuits Towards Quantum Advantage [37.837850621536475]
In this article, we present the co-design framework, namely QuantumFlow, to provide such a missing link.
QuantumFlow consists of novel quantum-friendly neural networks (QF-Nets), a mapping tool (QF-Map) to generate the quantum circuit (QF-Circ) for QF-Nets, and an execution engine (QF-FB).
Evaluation results show that QF-pNet and QF-hNet can achieve 97.10% and 98.27% accuracy, respectively.
arXiv Detail & Related papers (2020-06-26T06:25:03Z)
- Propagating Asymptotic-Estimated Gradients for Low Bitwidth Quantized Neural Networks [31.168156284218746]
We propose a novel Asymptotic-Quantized Estimator (AQE) to estimate the gradient.
At the end of training, the weights and activations have been quantized to low precision.
In the inference phase, we can use XNOR or SHIFT operations instead of convolution operations to accelerate the MINW-Net.
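The reason SHIFT operations can replace multiplications is worth spelling out (this sketch is our illustration, not the paper's MINW-Net code): if weights are restricted to powers of two, each integer multiply reduces to a bit shift.

```python
# Why low-bitwidth inference can use SHIFT instead of multiply:
# a weight of the form 2**k turns a * w into a << k on integers.
# Names and values are illustrative.
def shift_multiply(activation: int, weight_exponent: int) -> int:
    """Multiply an integer activation by the power-of-two weight
    2**weight_exponent using a left shift instead of a multiply."""
    return activation << weight_exponent

acts = [3, 5, 7]
exps = [1, 2, 0]  # power-of-two weights 2, 4, 1
dot = sum(shift_multiply(a, e) for a, e in zip(acts, exps))
assert dot == 3 * 2 + 5 * 4 + 7 * 1  # 33, same as the multiply version
```

XNOR-based inference is the analogous trick for binary (+1/-1) weights and activations, where the dot product reduces to XNOR plus a popcount.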
arXiv Detail & Related papers (2020-03-04T03:17:47Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.