Forward Laplacian: A New Computational Framework for Neural
Network-based Variational Monte Carlo
- URL: http://arxiv.org/abs/2307.08214v1
- Date: Mon, 17 Jul 2023 03:14:32 GMT
- Title: Forward Laplacian: A New Computational Framework for Neural
Network-based Variational Monte Carlo
- Authors: Ruichen Li, Haotian Ye, Du Jiang, Xuelan Wen, Chuwei Wang, Zhe Li,
Xiang Li, Di He, Ji Chen, Weiluo Ren, Liwei Wang
- Abstract summary: Neural network-based variational Monte Carlo (NN-VMC) has emerged as a promising technique for ab initio quantum chemistry.
Here, we report the development of a new NN-VMC method that achieves a remarkable speed-up by more than one order of magnitude.
- Score: 31.821891877123527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network-based variational Monte Carlo (NN-VMC) has emerged as a
promising technique for ab initio quantum chemistry. However, the
high computational cost of existing approaches hinders their applications in
realistic chemistry problems. Here, we report the development of a new NN-VMC
method that achieves a remarkable speed-up by more than one order of magnitude,
thereby greatly extending the applicability of NN-VMC to larger systems. Our
key design is a novel computational framework named Forward Laplacian, which
computes the Laplacian associated with neural networks, the bottleneck of
NN-VMC, through an efficient forward propagation process. We then demonstrate
that Forward Laplacian is not only versatile but also facilitates the
development of further acceleration methods across various aspects, including
optimization for sparse derivative matrices and efficient neural network design.
Empirically, our approach enables NN-VMC to investigate a broader range of
atoms, molecules and chemical reactions for the first time, providing valuable
references for other ab initio methods. The results demonstrate the great
potential of applying deep learning methods to solve general quantum mechanical
problems.
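The Laplacian enters through the local energy E_L = ψ⁻¹Ĥψ, whose kinetic part requires ∇²ψ with respect to all electron coordinates; computing it naively means building a full Hessian and taking its trace. Below is a minimal JAX sketch of the forward-Laplacian idea on a toy MLP, with layer rules and names of our own choosing; it illustrates the technique, not the authors' implementation: each layer jointly propagates the triple (value, Jacobian, Laplacian) in a single forward pass.

```python
import jax
import jax.numpy as jnp

def affine_fwd(W, b, v, J, lap):
    # Affine maps are linear, so Jacobian and Laplacian pass straight through W.
    return W @ v + b, W @ J, W @ lap

def tanh_fwd(v, J, lap):
    # Elementwise sigma: Lap(y_i) = sigma'(v_i)*Lap(v_i) + sigma''(v_i)*||grad v_i||^2.
    t = jnp.tanh(v)
    s1 = 1.0 - t ** 2          # tanh'
    s2 = -2.0 * t * s1         # tanh''
    return t, s1[:, None] * J, s1 * lap + s2 * jnp.sum(J ** 2, axis=1)

def forward_laplacian(params, x):
    # Jointly propagate (value, Jacobian w.r.t. x, Laplacian w.r.t. x).
    v, J, lap = x, jnp.eye(x.size), jnp.zeros(x.size)
    for W, b in params[:-1]:
        v, J, lap = affine_fwd(W, b, v, J, lap)
        v, J, lap = tanh_fwd(v, J, lap)
    W, b = params[-1]
    return affine_fwd(W, b, v, J, lap)

# Check against the naive Hessian-trace Laplacian on a toy 3-input MLP.
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = [(jax.random.normal(k1, (16, 3)), jnp.zeros(16)),
          (jax.random.normal(k2, (1, 16)), jnp.zeros(1))]
x = jax.random.normal(k3, (3,))

def f(y):
    h = jnp.tanh(params[0][0] @ y + params[0][1])
    return (params[1][0] @ h + params[1][1])[0]

_, _, lap = forward_laplacian(params, x)
print(lap[0], jnp.trace(jax.hessian(f)(x)))  # the two values should agree
```

The affine rule is exact because linear maps commute with differentiation, while an elementwise activation contributes σ''(v)·‖∇v‖² + σ'(v)·Δv; the final line checks the propagated Laplacian against `jnp.trace(jax.hessian(f)(x))`.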
Related papers
- Enhancing Variational Quantum Circuit Training: An Improved Neural Network Approach for Barren Plateau Mitigation [0.0]
Variational quantum algorithms (VQAs) are among the most promising algorithms in near-term quantum computing.
They iteratively update circuit parameters to optimize a cost function.
The training of variational quantum circuits (VQCs) is susceptible to a phenomenon known as barren plateaus (BPs).
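To make the phenomenon concrete, here is a self-contained JAX statevector sketch of our own (the toy RY/CZ ansatz and all names are assumptions, not taken from the paper): it estimates the variance of one gradient component of a hardware-efficient circuit and shows it shrinking as the qubit count grows, which is the barren-plateau signature.

```python
import jax
import jax.numpy as jnp

def apply_ry(state, theta, wire, n):
    # Rotate one qubit: contract a 2x2 RY gate into axis `wire` of the state.
    c, s = jnp.cos(theta / 2), jnp.sin(theta / 2)
    gate = jnp.array([[c, -s], [s, c]])
    psi = state.reshape((2,) * n)
    psi = jnp.moveaxis(jnp.tensordot(gate, psi, axes=([1], [wire])), 0, wire)
    return psi.reshape(-1)

def apply_cz(state, a, b, n):
    # Controlled-Z: negate amplitudes where qubits a and b are both |1>.
    psi = state.reshape((2,) * n)
    idx = [slice(None)] * n
    idx[a], idx[b] = 1, 1
    return psi.at[tuple(idx)].multiply(-1.0).reshape(-1)

def cost(thetas, n):
    # Alternating RY and CZ layers; cost = <Z> on qubit 0.
    state = jnp.zeros(2 ** n).at[0].set(1.0)
    for l in range(thetas.shape[0]):
        for w in range(n):
            state = apply_ry(state, thetas[l, w], w, n)
        for w in range(n - 1):
            state = apply_cz(state, w, w + 1, n)
    p = jnp.abs(state.reshape(2, -1)) ** 2
    return jnp.sum(p[0]) - jnp.sum(p[1])

for n in (2, 4, 6):
    keys = jax.random.split(jax.random.PRNGKey(1), 100)
    g = jnp.stack([jax.grad(cost)(
        jax.random.uniform(k, (n, n), maxval=2 * jnp.pi), n)[0, 0]
        for k in keys])
    print(n, jnp.var(g))  # gradient variance decays with qubit count
```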
arXiv Detail & Related papers (2024-11-14T06:43:37Z)
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework, the Scalable Mechanistic Neural Network (S-MNN), designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Paths towards time evolution with larger neural-network quantum states [17.826631514127012]
We consider a quantum quench from the paramagnetic to the anti-ferromagnetic phase in the tilted Ising model.
We show that for both types of networks, the projected time-dependent variational Monte Carlo (p-tVMC) method performs better than the non-projected approach.
arXiv Detail & Related papers (2024-06-05T15:32:38Z)
- NN-Steiner: A Mixed Neural-algorithmic Approach for the Rectilinear Steiner Minimum Tree Problem [5.107107601277712]
We focus on the rectilinear Steiner minimum tree (RSMT) problem, which is of critical importance in IC layout design.
We propose NN-Steiner, which is a novel mixed neural-algorithmic framework for computing RSMTs.
In particular, NN-Steiner only needs four neural network (NN) components that are called repeatedly within an algorithmic framework.
arXiv Detail & Related papers (2023-12-17T02:42:11Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states and the Variational Monte Carlo (VMC) algorithm has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
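For context, vector quantization itself is just a nearest-codeword lookup; the sketch below is a generic JAX illustration under our own naming, not the VQ-NQS architecture: each input vector is replaced by its nearest codebook entry, so repeated local configurations can reuse cached computation.

```python
import jax.numpy as jnp

def vq_assign(x, codebook):
    # x: (batch, d); codebook: (K, d). Return indices of nearest codewords.
    d2 = jnp.sum((x[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    return jnp.argmin(d2, axis=-1)

codebook = jnp.array([[0., 0.], [1., 1.], [1., 0.]])
x = jnp.array([[0.1, -0.1], [0.9, 1.2]])
idx = vq_assign(x, codebook)
print(codebook[idx])  # quantized inputs: [[0, 0], [1, 1]]
```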
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
Tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs).
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal one particular protocol, involving sequential growth and optimization of the quantum circuit, to outperform all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z)
- Low-bit Quantization of Recurrent Neural Network Language Models Using Alternating Direction Methods of Multipliers [67.688697838109]
This paper presents a novel method to train quantized RNNLMs from scratch using the alternating direction method of multipliers (ADMM).
Experiments on two tasks suggest that the proposed ADMM quantization achieves a model size compression factor of up to 31 times over the full-precision baseline RNNLMs.
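Here is a generic sketch of the ADMM idea for quantized training, on a toy least-squares problem of our own construction (not the paper's RNNLM setup): alternate a gradient step on an augmented Lagrangian, a projection onto a low-bit grid, and a scaled dual update.

```python
import jax
import jax.numpy as jnp

def project_lowbit(w, bits=2):
    # Project onto a symmetric uniform grid with 2**bits quantization levels.
    scale = jnp.max(jnp.abs(w)) + 1e-8
    levels = 2 ** (bits - 1)
    return jnp.round(w / scale * levels) / levels * scale

def admm_step(w, q, u, grad_loss, lr=1e-2, rho=1e-3):
    # w-update: descend loss(w) + rho/2 * ||w - q + u||^2.
    w = w - lr * (grad_loss(w) + rho * (w - q + u))
    q = project_lowbit(w + u)   # q-update: projection (here: rounding)
    u = u + w - q               # dual ascent on the constraint w = q
    return w, q, u

# Toy usage: learn low-bit weights that fit a least-squares target.
target = jnp.array([0.31, -0.74, 0.52])
grad_loss = jax.grad(lambda w: jnp.sum((w - target) ** 2))
w = q = u = jnp.zeros(3)
for _ in range(500):
    w, q, u = admm_step(w, q, u, grad_loss)
print(q)  # quantized weights tracking the target
```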
arXiv Detail & Related papers (2021-11-29T09:30:06Z)
- QDCNN: Quantum Dilated Convolutional Neural Network [1.52292571922932]
We propose a novel hybrid quantum-classical algorithm called quantum dilated convolutional neural networks (QDCNNs).
Our method extends the concept of dilated convolution, which has been widely applied in modern deep learning algorithms, to the context of hybrid neural networks.
The proposed QDCNNs are able to capture larger context during the quantum convolution process while reducing the computational cost.
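For reference, classical dilated (atrous) convolution inserts gaps into the kernel so the receptive field grows without extra multiplications; the JAX snippet below is a purely classical illustration of that concept, not the paper's quantum construction.

```python
import jax
import jax.numpy as jnp

x = jnp.arange(16.0).reshape(1, 1, 16)           # (batch, channel, length)
k = jnp.array([1.0, 1.0, 1.0]).reshape(1, 1, 3)  # (out_ch, in_ch, width)

# rhs_dilation=(2,) inserts gaps in the kernel, widening its span
# from 3 to 5 inputs while still using only 3 multiplications per tap.
y = jax.lax.conv_general_dilated(
    x, k, window_strides=(1,), padding="VALID", rhs_dilation=(2,))
print(y.shape)  # (1, 1, 12): 16 - (2*(3-1) + 1) + 1
```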
arXiv Detail & Related papers (2021-10-29T10:24:34Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a deep-unfolding framework in which a general form of iterative-algorithm-induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
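Deep unfolding in general turns a fixed number of solver iterations into trainable network layers. The sketch below is our own toy JAX illustration on a least-squares problem, not the IAIDNN/WMMSE design: per-layer step sizes are learned end-to-end.

```python
import jax
import jax.numpy as jnp

def unfolded_solver(steps, A, b, T=5):
    # steps: (T,) learnable step sizes, one per unrolled "layer".
    x = jnp.zeros(A.shape[1])
    for t in range(T):
        x = x - steps[t] * (A.T @ (A @ x - b))  # one layer = one iteration
    return x

A = jnp.array([[2., 0.], [0., 1.]])
b = jnp.array([1., 1.])
loss = lambda s: jnp.sum((unfolded_solver(s, A, b) - jnp.linalg.solve(A, b)) ** 2)
steps = jnp.full(5, 0.1)
for _ in range(200):
    steps -= 0.05 * jax.grad(loss)(steps)   # train the step sizes end-to-end
print(unfolded_solver(steps, A, b))         # approaches [0.5, 1.0]
```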
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.