Natural evolutionary strategies applied to quantum-classical hybrid
neural networks
- URL: http://arxiv.org/abs/2205.08059v1
- Date: Tue, 17 May 2022 02:14:44 GMT
- Title: Natural evolutionary strategies applied to quantum-classical hybrid
neural networks
- Authors: Lucas Friedrich and Jonas Maziero
- Abstract summary: We study an alternative method, called Natural Evolutionary Strategies (NES), which are a family of black box optimization algorithms.
We apply the NES method to the binary classification task, showing that this method is a viable alternative for training quantum neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of quantum computers, several applications are
being proposed for them. Quantum simulations, simulation of chemical reactions,
solution of optimization problems and quantum neural networks are some
examples. However, problems such as noise, limited number of qubits and circuit
depth, and gradient vanishing must be resolved before we can use them to their
full potential. In the field of quantum machine learning, several models have
been proposed. In general, in order to train these different models, we use the
gradient of a cost function with respect to the model parameters. In order to
obtain this gradient, we must compute the derivative of this function with
respect to the model parameters. For this we can use the method called
parameter-shift rule. This method consists of evaluating the cost function
twice for each parameter of the quantum network. A problem with this method is
that the number of evaluations grows linearly with the number of parameters. In
this work we study an alternative method, called Natural Evolutionary
Strategies (NES), which are a family of black box optimization algorithms. An
advantage of the NES method is that in using it one can control the number of
times the cost function will be evaluated. We apply the NES method to the
binary classification task, showing that this method is a viable alternative
for training quantum neural networks.
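The abstract's core trade-off can be sketched in a few lines: NES estimates the gradient of a smoothed cost by sampling random perturbations, so the number of cost evaluations is a free hyperparameter rather than growing with the parameter count as in the parameter-shift rule. The sketch below is illustrative only, assuming a cheap stand-in cost function in place of an actual quantum-circuit evaluation; `nes_gradient` and its hyperparameters are hypothetical names, not the paper's implementation.

```python
import numpy as np

def cost(theta):
    # Stand-in for an expensive quantum-circuit cost evaluation
    # (hypothetical; the paper's cost comes from a hybrid network).
    return np.sum(np.sin(theta) ** 2)

def nes_gradient(cost_fn, theta, sigma=0.1, n_samples=20, rng=None):
    """Estimate the gradient of E[cost_fn(theta + sigma*eps)] by sampling.

    The number of cost evaluations equals n_samples, chosen freely and
    independent of len(theta); the parameter-shift rule would instead
    need 2 * len(theta) evaluations per gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.standard_normal(theta.shape)
        grad += cost_fn(theta + sigma * eps) * eps
    return grad / (n_samples * sigma)

# Plain gradient descent driven by the NES estimate.
rng = np.random.default_rng(0)
theta = np.array([1.0, -0.5, 2.0])
initial = cost(theta)
for _ in range(200):
    theta = theta - 0.05 * nes_gradient(cost, theta, rng=rng)
```

With a fixed evaluation budget per step, the estimate is noisier than parameter-shift gradients but its cost does not scale with the network size, which is the advantage the abstract highlights.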
Related papers
- Non-binary artificial neuron with phase variation implemented on a quantum computer [0.0]
We introduce an algorithm that generalizes the binary model by manipulating the phase of complex numbers.
We propose, test, and implement a neuron model that works with continuous values in a quantum computer.
arXiv Detail & Related papers (2024-10-30T18:18:53Z) - Training Classical Neural Networks by Quantum Machine Learning [9.002305736350833]
This work proposes a training scheme for classical neural networks (NNs) that utilizes the exponentially large Hilbert space of a quantum system.
Unlike existing quantum machine learning (QML) methods, the results obtained from quantum computers using our approach can be directly used on classical computers.
arXiv Detail & Related papers (2024-02-26T10:16:21Z) - Learning To Optimize Quantum Neural Network Without Gradients [3.9848482919377006]
We introduce a novel meta-optimization algorithm that trains a meta-optimizer network to output parameters for the quantum circuit.
We show that we achieve better-quality minima in fewer circuit evaluations than existing gradient-based algorithms on different datasets.
arXiv Detail & Related papers (2023-04-15T01:09:12Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System
Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Accelerating the training of single-layer binary neural networks using
the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z) - Avoiding Barren Plateaus with Classical Deep Neural Networks [0.0]
Variational quantum algorithms (VQAs) are among the most promising algorithms in the era of Noisy Intermediate-Scale Quantum devices.
VQAs are applied to a variety of tasks, such as in chemistry simulations, optimization problems, and quantum neural networks.
We report on how the use of a classical neural network for the VQA's input parameters can alleviate the Barren Plateaus phenomenon.
arXiv Detail & Related papers (2022-05-26T15:14:01Z) - Quantum Kernel Methods for Solving Differential Equations [21.24186888129542]
We propose several approaches for solving differential equations (DEs) with quantum kernel methods.
We compose quantum models as weighted sums of kernel functions, where variables are encoded using feature maps and model derivatives are represented.
arXiv Detail & Related papers (2022-03-16T18:56:35Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
Proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Preparation of excited states for nuclear dynamics on a quantum computer [117.44028458220427]
We study two different methods to prepare excited states on a quantum computer.
We benchmark these techniques on emulated and real quantum devices.
These findings show that quantum techniques designed to achieve good scaling on fault tolerant devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
arXiv Detail & Related papers (2020-09-28T17:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.