Learning Neural Network Quantum States with the Linear Method
- URL: http://arxiv.org/abs/2104.11011v1
- Date: Thu, 22 Apr 2021 12:18:33 GMT
- Title: Learning Neural Network Quantum States with the Linear Method
- Authors: J. Thorben Frank, Michael J. Kastoryano
- Abstract summary: We show that the linear method can be used successfully for the optimization of complex-valued neural network quantum states.
We compare the LM to the state-of-the-art SR algorithm and find that the LM requires up to an order of magnitude fewer iterations for convergence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the strong correlations present in quantum systems, classical machine
learning algorithms like stochastic gradient descent are often insufficient for
the training of neural network quantum states (NQSs). These difficulties can be
overcome by using physically inspired learning algorithms, the most prominent of
which is stochastic reconfiguration (SR), which mimics imaginary time
evolution. Here we explore an alternative algorithm for the optimization of
complex-valued NQSs based on the linear method (LM), and present its explicit
formulation in terms of complex-valued parameters. Beyond the theoretical
formulation, we present numerical evidence that the LM can be used successfully
for the optimization of complex-valued NQSs, to our knowledge for the first
time. We compare the LM to the state-of-the-art SR algorithm and find that the
LM requires up to an order of magnitude fewer iterations for convergence,
albeit at a higher cost per epoch. We further demonstrate that the LM becomes
the more efficient training algorithm whenever the cost of sampling is high.
This advantage, however, comes at the price of a larger variance.
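To make the comparison concrete, here is a minimal NumPy sketch of one SR step and one LM step, assuming Monte Carlo samples of the log-derivatives O_k = d log(psi)/d theta_k and of the local energies are already available. The estimator names, the diagonal regularization, and the omitted derivative-of-E_loc term in the LM Hamiltonian are simplifications for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eig

def sr_step(O, E_loc, eta=1e-2, eps=1e-3):
    """One stochastic reconfiguration (SR) step.

    O:     (n_samples, n_params) complex log-derivatives O_k = d log(psi)/d theta_k
    E_loc: (n_samples,) complex local energies
    SR preconditions the energy gradient with the overlap matrix S,
    mimicking a discrete step of imaginary time evolution.
    """
    n = O.shape[0]
    Oc = O - O.mean(axis=0)                          # centered log-derivatives
    S = Oc.conj().T @ Oc / n                         # overlap (quantum geometric tensor)
    g = Oc.conj().T @ (E_loc - E_loc.mean()) / n     # complex energy gradient
    S += eps * np.eye(S.shape[0])                    # diagonal shift for stability
    return -eta * np.linalg.solve(S, g)

def lm_step(O, E_loc, eps=1e-3):
    """One linear method (LM) step: diagonalize the Hamiltonian in the small
    subspace spanned by |psi> and the tangent vectors O_k|psi>, then read the
    parameter update off the lowest-energy eigenvector."""
    n, p = O.shape
    basis = np.concatenate([np.ones((n, 1)), O - O.mean(axis=0)], axis=1)
    S = basis.conj().T @ basis / n                   # (p+1, p+1) overlap matrix
    # Simplified Hamiltonian estimator; the full LM also involves derivatives
    # of the local energy, which are omitted here for brevity.
    H = basis.conj().T @ (E_loc[:, None] * basis) / n
    S += eps * np.eye(p + 1)
    w, V = eig(H, S)                                 # generalized problem H v = E S v
    v = V[:, np.argmin(w.real)]                      # lowest-energy eigenvector
    return v[1:] / v[0]                              # normalize the |psi> component
```

Consistent with the abstract, each LM step builds and diagonalizes a (1 + n_params)-dimensional matrix pencil rather than solving a single linear system: this is the higher per-iteration cost the paper trades against needing up to an order of magnitude fewer iterations.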
Related papers
- Enhancing Open Quantum Dynamics Simulations Using Neural Network-Based Non-Markovian Stochastic Schrödinger Equation Method [2.9413085575648235]
We propose a scheme that combines neural network techniques with simulations of the non-Markovian stochastic Schrödinger equation.
This approach significantly reduces the number of trajectories required for long-time simulations, particularly at low temperatures.
arXiv Detail & Related papers (2024-11-24T16:57:07Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Loop Unrolled Shallow Equilibrium Regularizer (LUSER) -- A Memory-Efficient Inverse Problem Solver [26.87738024952936]
In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements.
We propose a loop-unrolled (LU) algorithm with shallow equilibrium regularizers (LUSER).
These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training.
arXiv Detail & Related papers (2022-10-10T19:50:37Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors (a toy illustration of sensitivity-based bit allocation follows the list below).
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Exponential Error Convergence in Data Classification with Optimized Random Features: Acceleration by Quantum Machine Learning [8.98526174345299]
Quantum machine learning (QML) can exponentially speed up the sampling of optimized random features.
We here construct a QML algorithm for a classification task accelerated by the optimized random features.
We prove that the QML algorithm for optimized random features, combined with stochastic gradient descent (SGD), can achieve state-of-the-art exponential convergence speed.
arXiv Detail & Related papers (2021-06-16T18:00:00Z)
- An optimal quantum sampling regression algorithm for variational eigensolving in the low qubit number regime [0.0]
We introduce Quantum Sampling Regression (QSR), an alternative hybrid quantum-classical algorithm.
We analyze some of its use cases based on time complexity in the low qubit number regime.
We demonstrate the efficacy of our algorithm for a benchmark problem.
arXiv Detail & Related papers (2020-12-04T00:01:15Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Deep unfolding of the weighted MMSE beamforming algorithm [9.518010235273783]
We propose the novel application of deep unfolding to the WMMSE algorithm for a MISO downlink channel.
Deep unfolding naturally incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability.
By means of simulations, we show that, in most settings, the unfolded WMMSE matches or outperforms the WMMSE for a fixed number of iterations (a generic sketch of the unfolding idea follows the list below).
arXiv Detail & Related papers (2020-06-15T14:51:20Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
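As a toy, hypothetical illustration of the sensitivity point in the mixed-precision quantization entry above (generic, not that paper's method): quantize one layer at a time at low precision, score each layer by how much the model output degrades, and spend the extra bits on the most sensitive layers.

```python
import numpy as np

def uniform_quantize(w, bits):
    """Symmetric uniform quantizer with 2^bits - 1 levels."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def mixed_precision_bits(layers, x, low=2, high=8, n_high=1):
    """Give `high` bits to the n_high layers whose low-bit quantization hurts
    the output of the toy linear model y = W_n ... W_1 x the most, and `low`
    bits to the rest; uniform precision ignores exactly this difference."""
    def forward(ws):
        h = x
        for w in ws:
            h = w @ h
        return h

    ref = forward(layers)
    # Sensitivity of layer i: output error when only layer i is quantized.
    sens = []
    for i, w in enumerate(layers):
        trial = list(layers)
        trial[i] = uniform_quantize(w, low)
        sens.append(np.linalg.norm(forward(trial) - ref))
    order = np.argsort(sens)[::-1]                 # most sensitive first
    return [high if i in order[:n_high] else low for i in range(len(layers))]

rng = np.random.default_rng(1)
layers = [rng.normal(size=(16, 16)) for _ in range(4)]
x = rng.normal(size=16)
print(mixed_precision_bits(layers, x))             # e.g. [2, 8, 2, 2]
```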
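Since both beamforming entries above rest on the same idea, here is a generic, hypothetical sketch of deep unfolding, not the unfolded WMMSE or the IAIDNN themselves: K iterations of gradient descent for min_x ||Ax - y||^2 are unrolled into a fixed-depth "network" whose per-layer step sizes t_k are the trainable parameters.

```python
import numpy as np

def unfolded_gd(A, y, step_sizes, x0=None):
    """Deep unfolding of plain gradient descent for min_x ||Ax - y||^2.

    Each of the K unrolled iterations keeps the algorithm's structure
    x <- x - t_k * A^T (A x - y) but promotes the step size t_k of layer k
    to a parameter. In a real unfolded network the t_k would be learned
    end-to-end by backpropagating a reconstruction loss through all K layers.
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for t_k in step_sizes:             # one loop iteration == one network layer
        x = x - t_k * A.T @ (A @ x - y)
    return x

# Toy usage: K = 20 "layers" with hand-set step sizes standing in for learned
# ones; the gradient-descent template carries the expert knowledge, so only
# K scalars would need training.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8))
x_true = rng.normal(size=8)
y = A @ x_true
x_hat = unfolded_gd(A, y, step_sizes=[0.02] * 20)
print(np.linalg.norm(x_hat - x_true))              # error after K unrolled layers
```

Both listed papers follow this pattern with the WMMSE iteration as the template: because the iteration structure encodes the expert knowledge, only a handful of parameters per layer remain to be trained.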