Continuous-variable optimization with neural network quantum states
- URL: http://arxiv.org/abs/2108.03325v3
- Date: Thu, 6 Jan 2022 18:53:12 GMT
- Title: Continuous-variable optimization with neural network quantum states
- Authors: Yabin Zhang, David Gorsich, Paramsothy Jayakumar, Shravan Veerapaneni
- Abstract summary: We investigate the utility of continuous-variable neural network quantum states (CV-NQS) for performing continuous optimization.
Numerical experiments conducted using variational Monte Carlo with CV-NQS indicate that although the non-local algorithm succeeds in finding ground states competitive with local gradient search methods, the proposal suffers from unfavorable scaling.
- Score: 6.791920570692005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by proposals for continuous-variable quantum approximate
optimization (CV-QAOA), we investigate the utility of continuous-variable
neural network quantum states (CV-NQS) for performing continuous optimization,
focusing on the ground state optimization of the classical antiferromagnetic
rotor model. Numerical experiments conducted using variational Monte Carlo with
CV-NQS indicate that although the non-local algorithm succeeds in finding
ground states competitive with local gradient search methods, the proposal
suffers from unfavorable scaling. A number of extensions are put forward
which may help alleviate the scaling difficulty.
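To make the setup concrete, here is a minimal sketch of the variational Monte Carlo loop on a toy antiferromagnetic rotor chain with energy E(θ) = J Σ_i cos(θ_i − θ_{i+1}). A factorized Gaussian over the angles stands in for a genuine CV-NQS, and a score-function estimator supplies the gradient; the chain length, batch size, and learning rate are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy antiferromagnetic rotor chain: E(theta) = J * sum_i cos(theta_i - theta_{i+1}), J > 0.
J, N = 1.0, 16

def energy(theta):
    return J * np.sum(np.cos(theta - np.roll(theta, -1, axis=-1)), axis=-1)

# Mean-field stand-in for a CV-NQS: |psi(theta)|^2 = prod_i Normal(theta_i; mu_i, sigma_i^2).
mu = rng.uniform(0, 2 * np.pi, size=N)
log_sigma = np.zeros(N)

def sample(batch):
    return mu + np.exp(log_sigma) * rng.standard_normal((batch, N))

lr, batch = 0.05, 512
for step in range(2000):
    theta = sample(batch)
    e = energy(theta)                        # local energies, shape (batch,)
    centered = e - e.mean()                  # variance-reducing baseline
    sigma2 = np.exp(2 * log_sigma)
    # Score-function gradient of <E>: <(E - <E>) * d log|psi|^2 / d param>.
    d_mu = (theta - mu) / sigma2
    d_ls = (theta - mu) ** 2 / sigma2 - 1
    mu -= lr * (centered[:, None] * d_mu).mean(axis=0)
    log_sigma -= lr * (centered[:, None] * d_ls).mean(axis=0)

print("energy per site:", energy(sample(4096)).mean() / N)  # Neel-like order approaches -J
```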
Related papers
- Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm [47.47843839099175]
A Quantum Natural Gradient (QNG) algorithm for optimization of variational quantum circuits has been proposed recently.
In this study, we employ the Langevin equation with a QNG force to demonstrate that its discrete-time solution gives a generalized form, which we call Momentum-QNG.
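Read schematically, discretizing underdamped Langevin dynamics driven by the QNG force (the metric-preconditioned gradient) and dropping the noise term yields a heavy-ball update whose friction coefficient becomes a momentum factor. The helper below illustrates that reading and is not the authors' code; in practice the metric would be the quantum Fisher / Fubini-Study matrix estimated from the circuit.

```python
import numpy as np

def momentum_qng_step(theta, v, grad, metric, lr=0.05, beta=0.9, eps=1e-6):
    """One heavy-ball step with a metric-preconditioned (QNG-like) force.

    beta plays the role of (1 - friction * dt) in the Langevin picture; eps
    regularizes the inversion, as is common when the estimated metric is
    near-singular.
    """
    natural_grad = np.linalg.solve(metric + eps * np.eye(len(theta)), grad)
    v = beta * v - lr * natural_grad
    return theta + v, v
```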
arXiv Detail & Related papers (2024-09-03T15:21:16Z)
- Improved Optimization for the Neural-network Quantum States and Tests on the Chromium Dimer [11.985673663540688]
Neural-network Quantum States (NQS) have significantly advanced wave-function ansatz research.
This work introduces three algorithmic enhancements to reduce the computational demands of VMC optimization using NQS.
arXiv Detail & Related papers (2024-04-14T15:07:57Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Scalable Imaginary Time Evolution with Neural Network Quantum States [0.0]
The representation of a quantum wave function as a neural network quantum state (NQS) provides a powerful variational ansatz for finding the ground states of many-body quantum systems.
We introduce an approach that bypasses the computation of the metric tensor and instead relies exclusively on first-order descent with Euclidean metric.
We make this method adaptive and stable by determining the optimal time step and keeping the target fixed until the energy of the NQS decreases.
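A dense-matrix toy makes the loop structure visible: each outer step fixes a target produced by one imaginary-time step, then plain Euclidean gradient descent on the infidelity runs until the energy drops, with no metric-tensor (stochastic reconfiguration) solve. The explicit vector parameterization and the fixed dt are stand-ins; the paper's adaptive time-step rule is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Hamiltonian and an explicitly parameterized, normalized state; a real NQS
# would replace `lam` with network weights and exact overlaps with MC estimates.
n = 8
H = rng.standard_normal((n, n))
H = (H + H.T) / 2

state = lambda lam: lam / np.linalg.norm(lam)
energy = lambda lam: state(lam) @ H @ state(lam)

lam, dt, lr = rng.standard_normal(n), 0.1, 0.2
for outer in range(300):
    e0 = energy(lam)
    target = state((np.eye(n) - dt * H) @ state(lam))  # one imaginary-time step
    for inner in range(200):       # first-order descent on the infidelity;
        if energy(lam) < e0:       # the target stays fixed until energy drops
            break
        s = state(lam)
        ov = s @ target
        lam -= lr * (-2 * ov * (target - ov * s) / np.linalg.norm(lam))

print("reached:", energy(lam), " exact ground energy:", np.linalg.eigvalsh(H)[0])
```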
arXiv Detail & Related papers (2023-07-28T12:26:43Z)
- Neural network quantum state with proximal optimization: a ground-state searching scheme based on variational Monte Carlo [4.772126473623257]
We introduce a novel objective function with proximal optimization (PO) that enables multiple parameter updates by reusing mismatched samples.
We investigate the performance of our VMC-PO algorithm for ground-state searching with a 1-dimensional transverse-field Ising model and 2-dimensional Heisenberg antiferromagnet on a square lattice.
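The sample-reuse idea can be sketched in the style of proximal policy optimization: importance ratios reweight a stale batch as the parameters move, and clipping keeps each update close to the sampling distribution. This is a generic construction suggested by the summary, not the paper's exact objective; `logp` and `grad_logp` are hypothetical user-supplied callbacks.

```python
import numpy as np

def po_updates(theta, samples, local_e, logp, grad_logp,
               n_reuse=5, lr=0.05, clip=0.2):
    """Reuse one batch of VMC samples for several parameter updates.

    samples were drawn from |psi|^2 at the *old* theta; local_e are the
    corresponding local energies, treated as fixed for the stale batch.
    """
    logp_old = logp(theta, samples)
    for _ in range(n_reuse):
        w = np.exp(logp(theta, samples) - logp_old)   # |psi/psi_old|^2 ratios
        w = np.clip(w, 1.0 - clip, 1.0 + clip)        # proximal clipping
        adv = local_e - np.average(local_e, weights=w)
        g = (w * adv)[:, None] * grad_logp(theta, samples)
        theta = theta - lr * g.mean(axis=0)
    return theta
```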
arXiv Detail & Related papers (2022-10-29T04:55:39Z)
- Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
Tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs).
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal that one particular protocol, involving sequential growth and optimization of the quantum circuit, outperforms all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z)
- Avoiding barren plateaus via transferability of smooth solutions in Hamiltonian Variational Ansatz [0.0]
Variational Quantum Algorithms (VQAs) represent leading candidates to achieve computational speed-ups on current quantum devices.
Two major hurdles are the proliferation of low-quality variational local minima, and the exponential vanishing of gradients in the cost function landscape.
Here we show that by employing iterative search schemes one can effectively prepare the ground state of paradigmatic quantum many-body models.
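The iterative scheme is straightforward to picture: optimize a small instance, then reuse its smooth optimum to initialize the next system size rather than drawing random angles. The cost function below is a toy stand-in for an HVA energy, chosen only so the snippet is self-contained.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth cost family indexed by system size n; in the paper this
# would be the energy of a p-layer Hamiltonian Variational Ansatz on n sites.
def cost(angles, n):
    k = np.arange(1, n + 1)
    return np.sum(np.cos(np.outer(k, angles)).sum(axis=1) ** 2) / n

p = 4                      # number of variational angles (ansatz layers)
angles = np.zeros(p)       # smooth, structured initial guess
for n in range(4, 13, 2):
    # Warm start: the optimum at size n-2 initializes the search at size n,
    # sidestepping the random initializations where gradients vanish.
    res = minimize(cost, angles, args=(n,), method="BFGS")
    angles = res.x
    print(f"n={n:2d}  cost={res.fun:.4f}")
```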
arXiv Detail & Related papers (2022-06-04T12:52:29Z)
- Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization [89.7882166459412]
Stochastic gradient noise (SGN) acts as implicit regularization for deep learning.
Some works have attempted to simulate SGN artificially by injecting random noise to improve deep learning.
To simulate SGN at low computational cost without changing the learning rate or batch size, we propose the Positive-Negative Momentum (PNM) approach.
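Schematically, PNM keeps two momentum buffers fed by alternating mini-batches and combines them with one positive and one negative weight, which amplifies the anisotropic component of the gradient noise. The function below reflects one reading of that update; the exact coefficients and normalization in the paper may differ.

```python
import numpy as np

def pnm_step(theta, grad, m_odd, m_even, t, lr=0.01, beta=0.9, beta0=1.0):
    """One Positive-Negative Momentum step (schematic).

    The buffer fed by the current mini-batch gets the positive weight
    (1 + beta0); the other buffer gets the negative weight -beta0.
    """
    if t % 2 == 0:
        m_even = beta * m_even + grad
        m_new, m_old = m_even, m_odd
    else:
        m_odd = beta * m_odd + grad
        m_new, m_old = m_odd, m_even
    theta = theta - lr * ((1 + beta0) * m_new - beta0 * m_old)
    return theta, m_odd, m_even
```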
arXiv Detail & Related papers (2021-03-31T16:08:06Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training [99.42912552638168]
Communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.
In this work, we deduce the optimal condition for both binary and multi-level gradient quantization for any gradient distribution.
Based on the optimal condition, we develop two novel quantization schemes: biased BinGrad and unbiased ORQ for binary and multi-level gradient quantization, respectively.
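As a baseline for the multi-level case, the sketch below performs unbiased stochastic quantization onto uniformly spaced levels; the paper's contribution is the optimal (generally non-uniform) placement of those levels, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_unbiased(g, levels=4):
    """Unbiased stochastic quantization of a gradient tensor onto uniform levels."""
    scale = np.max(np.abs(g))
    if scale == 0.0:
        return g
    x = np.abs(g) / scale * (levels - 1)   # map magnitudes into [0, levels-1]
    lower = np.floor(x)
    prob = x - lower                       # P(round up) makes E[q] = x exactly
    q = lower + (rng.random(g.shape) < prob)
    return np.sign(g) * q / (levels - 1) * scale

g = rng.standard_normal(5)
print(g)
print(quantize_unbiased(g))
```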
arXiv Detail & Related papers (2020-02-25T18:28:39Z)