Computing the gradients with respect to all parameters of a quantum
neural network using a single circuit
- URL: http://arxiv.org/abs/2307.08167v2
- Date: Thu, 20 Jul 2023 23:08:11 GMT
- Title: Computing the gradients with respect to all parameters of a quantum
neural network using a single circuit
- Authors: Guang Ping He
- Abstract summary: We propose an approach to compute all the gradients using only a single circuit, with a much reduced circuit depth and fewer classical registers.
We also demonstrate experimentally, on both real quantum hardware and a simulator, that the circuit in our approach takes a significantly shorter time to compile than in the conventional approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When computing the gradients of a quantum neural network using the
parameter-shift rule, the cost function must be evaluated twice to obtain the
gradient with respect to each adjustable parameter of the network. When the
total number of parameters is large, the quantum circuit for the computation has
to be adjusted and run many times. Here we propose an approach to compute
all the gradients using only a single circuit, with a much reduced circuit
depth and fewer classical registers. We also demonstrate experimentally, on both
real quantum hardware and a simulator, that the circuit in our approach takes a
significantly shorter time to compile than in the conventional approach,
resulting in a speedup of the total runtime.
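For concreteness, the following is a minimal sketch of the conventional parameter-shift workflow that the abstract contrasts against, in which every parameter requires two separate cost evaluations (circuit runs). The toy cost function standing in for the circuit, the parameter values, and the pi/2 shift are illustrative assumptions, not the authors' network or their proposed single-circuit method.

```python
# Minimal sketch of the conventional parameter-shift rule (illustrative only).
import numpy as np

def cost(theta):
    # Stand-in for running the quantum circuit and measuring the cost; here each
    # qubit is rotated by RY(theta_k) from |0>, so its <Z> value is cos(theta_k).
    return float(np.sum(np.cos(theta)))

def parameter_shift_gradient(theta, shift=np.pi / 2):
    # Conventional approach: two circuit evaluations per parameter,
    # i.e. 2 * len(theta) circuit runs per gradient evaluation.
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = (cost(plus) - cost(minus)) / 2.0
    return grad

theta = np.array([0.1, 0.7, 1.3])
print(parameter_shift_gradient(theta))   # matches the analytic gradient -sin(theta)
```

With n parameters this loop issues 2n circuit executions per gradient evaluation; removing that overhead is what the single-circuit approach above targets.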
Related papers
- Backpropagation scaling in parameterised quantum circuits [0.0]
We introduce circuits that are not known to be classically simulable and admit gradient estimation with significantly fewer circuits.
Specifically, these circuits allow for fast estimation of the gradient, higher order partial derivatives and the Fisher information matrix.
In a toy classification problem on 16 qubits, such circuits show competitive performance with other methods, while reducing the training cost by about two orders of magnitude.
arXiv Detail & Related papers (2023-06-26T18:00:09Z)
- Parsimonious Optimisation of Parameters in Variational Quantum Circuits [1.303764728768944]
We propose a novel Quantum-Gradient Sampling that requires the execution of at most two circuits per iteration to update the optimisable parameters.
Our proposed method achieves convergence rates similar to classical gradient descent and empirically outperforms gradient coordinate descent and SPSA; a sketch of standard SPSA is given after this list.
arXiv Detail & Related papers (2023-06-20T18:50:18Z)
- Approximate Quantum Compiling for Quantum Simulation: A Tensor Network based approach [1.237454174824584]
We introduce AQCtensor, a novel algorithm to produce short-depth quantum circuits from Matrix Product States (MPS).
Our approach is specifically tailored to the preparation of quantum states generated from the time evolution of quantum many-body Hamiltonians.
For simulation problems on 100 qubits, we show that AQCtensor achieves at least an order of magnitude reduction in the depth of the resulting optimized circuit.
arXiv Detail & Related papers (2023-01-20T14:40:29Z) - Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs)
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal that one particular protocol, involving sequential growth and optimization of the quantum circuit, outperforms all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z)
- Gaussian initializations help deep variational quantum circuits escape from the barren plateau [87.04438831673063]
Variational quantum circuits have been widely employed in quantum simulation and quantum machine learning in recent years.
However, quantum circuits with random structures have poor trainability because their gradients vanish exponentially with the circuit depth and the qubit number.
This result leads to a general belief that deep quantum circuits will not be feasible for practical tasks.
arXiv Detail & Related papers (2022-03-17T15:06:40Z)
- Reducing the Depth of Linear Reversible Quantum Circuits [0.0]
In quantum computing, the decoherence time of the qubits determines the computation time available.
We propose a practical formulation of a divide-and-conquer algorithm that produces quantum circuits half as deep as those produced by existing algorithms.
Overall, we manage to consistently reduce the total depth of a class of reversible functions, with up to 92% savings in an ancilla-free case and up to 99% when ancillary qubits are available.
arXiv Detail & Related papers (2022-01-17T12:36:32Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Fast Swapping in a Quantum Multiplier Modelled as a Queuing Network [64.1951227380212]
We propose that quantum circuits can be modeled as queuing networks.
Our method is scalable and has the potential speed and precision necessary for large scale quantum circuit compilation.
arXiv Detail & Related papers (2021-06-26T10:55:52Z)
- Optimal qubit assignment and routing via integer programming [0.22940141855172028]
We consider the problem of mapping a logical quantum circuit onto a given hardware with limited two-qubit connectivity.
We model this problem as an integer linear program, using a network flow formulation with binary variables.
We consider several cost functions: an approximation of the fidelity of the circuit, its total depth, and a measure of cross-talk.
arXiv Detail & Related papers (2021-06-11T15:02:26Z)
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
- Boundaries of quantum supremacy via random circuit sampling [69.16452769334367]
Google's recent quantum supremacy experiment heralded a transition point where quantum computing performed a computational task, random circuit sampling.
We examine the constraints of the observed quantum runtime advantage in a larger number of qubits and gates.
arXiv Detail & Related papers (2020-05-05T20:11:53Z)
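As referenced in the Parsimonious Optimisation entry above, the following is a minimal sketch of standard SPSA gradient estimation, which likewise needs only two cost evaluations (two circuit runs) per iteration regardless of the number of parameters. The toy cost function and the hyperparameters are illustrative assumptions; this is the named SPSA baseline, not that paper's Quantum-Gradient Sampling method.

```python
# Standard SPSA gradient estimate: perturb all parameters at once with a random
# +/-1 vector, so only two cost evaluations are needed per iteration.
import numpy as np

def cost(theta):
    # Stand-in for one execution of the parameterised quantum circuit.
    return float(np.sum(np.cos(theta)))

def spsa_gradient(theta, c=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # simultaneous perturbation
    diff = cost(theta + c * delta) - cost(theta - c * delta)
    return diff / (2.0 * c) * delta   # with +/-1 entries, 1/delta_i equals delta_i

theta = np.array([0.1, 0.7, 1.3])
print(spsa_gradient(theta))   # noisy estimate of the true gradient -sin(theta)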
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.