Non-parametric Greedy Optimization of Parametric Quantum Circuits
- URL: http://arxiv.org/abs/2401.15442v1
- Date: Sat, 27 Jan 2024 15:29:38 GMT
- Title: Non-parametric Greedy Optimization of Parametric Quantum Circuits
- Authors: Koustubh Phalak, Swaroop Ghosh
- Abstract summary: This work aims to reduce the depth and gate count of PQCs by replacing parametric gates with approximate fixed non-parametric representations.
We observe roughly a 14% reduction in depth and a 48% reduction in gate count at the cost of a 3.33% reduction in inference accuracy.
- Score: 2.77390041716769
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The use of Quantum Neural Networks (QNNs), which are analogous to classical neural networks, has greatly increased in the past decade owing to the growing interest in the field of Quantum Machine Learning (QML). A QNN consists of three major components: (i) a data loading/encoding circuit, (ii) a Parametric Quantum Circuit (PQC), and (iii) measurement operations. Under ideal circumstances the PQC of a QNN trains well; however, that may not be the case when training on quantum hardware due to the presence of different kinds of noise. Deeper QNNs tend to degrade more in performance than shallower networks. This work aims to reduce the depth and gate count of PQCs by replacing parametric gates with approximate fixed non-parametric representations. We propose a greedy algorithm that minimizes a distance metric between the unitary transformation matrix of the original parametric gate and that of the new set of non-parametric gates. From this greedy optimization followed by a few epochs of re-training, we observe roughly a 14% reduction in depth and a 48% reduction in gate count at the cost of a 3.33% reduction in inference accuracy. Similar results are observed for a different dataset with a different PQC structure.
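To make the replacement idea concrete, below is a minimal sketch (in Python with NumPy, not the authors' code) of how a parametric rotation gate might be greedily approximated by a short sequence of fixed gates. The candidate gate set, the global-phase-invariant distance, and the stopping tolerance are all illustrative assumptions.

```python
import numpy as np
from itertools import product

# Candidate fixed (non-parametric) single-qubit gates -- an assumed set.
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CANDIDATES = {"H": H, "S": S, "T": T, "X": X, "I": np.eye(2, dtype=complex)}

def rz(theta):
    """Unitary of a parametric RZ(theta) gate."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]], dtype=complex)

def distance(u, v):
    """Global-phase-invariant distance between two 2x2 unitaries."""
    return np.sqrt(abs(1.0 - abs(np.trace(u.conj().T @ v)) / 2.0))

def greedy_replace(target, max_len=3, tol=1e-3):
    """Search fixed-gate sequences of increasing length and stop as soon
    as one is within `tol` of the target unitary."""
    best_seq, best_dist = None, np.inf
    for length in range(1, max_len + 1):
        for names in product(CANDIDATES, repeat=length):
            u = np.eye(2, dtype=complex)
            for name in names:
                u = CANDIDATES[name] @ u
            d = distance(target, u)
            if d < best_dist:
                best_seq, best_dist = names, d
        if best_dist < tol:
            break
    return best_seq, best_dist

# Example: RZ(pi/2) equals the S gate up to a global phase.
print(greedy_replace(rz(np.pi / 2)))   # (('S',), ~0.0)
```

In a full pipeline this replacement would be applied gate by gate across the PQC, followed by a few epochs of re-training as described in the abstract.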
Related papers
- Adaptive variational quantum dynamics simulations with compressed circuits and fewer measurements [4.2643127089535104]
We show an improved version of the adaptive variational quantum dynamics simulation (AVQDS) method, which we call AVQDS(T).
The algorithm adaptively adds layers of disjoint unitary gates to the ansatz circuit so as to keep the McLachlan distance, a measure of the accuracy of the variational dynamics, below a fixed threshold.
We also show a method based on eigenvalue truncation to solve the linear equations of motion for the variational parameters with enhanced noise resilience.
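The eigenvalue-truncation step mentioned above can be illustrated with a generic regularized solve of the variational equations of motion M @ theta_dot = V; the sketch below (an assumption of how such a step could look, not the paper's implementation) drops eigen-directions of the metric tensor M whose eigenvalues fall below a cutoff.

```python
import numpy as np

def truncated_solve(M, V, cutoff=1e-6):
    """Solve M @ theta_dot = V, discarding eigen-directions of M whose
    eigenvalues are below `cutoff` (M assumed real-symmetric)."""
    evals, evecs = np.linalg.eigh(M)
    keep = evals > cutoff                  # drop near-singular directions
    inv = np.zeros_like(evals)
    inv[keep] = 1.0 / evals[keep]
    M_pinv = (evecs * inv) @ evecs.T       # truncated pseudo-inverse
    return M_pinv @ V

# Toy usage: the nearly singular direction is suppressed instead of
# amplifying noise in the parameter update.
M = np.array([[1.0, 0.0], [0.0, 1e-9]])
V = np.array([0.5, 0.3])
print(truncated_solve(M, V))               # [0.5, 0.0]
```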
arXiv Detail & Related papers (2024-08-13T02:56:43Z)
- Error-tolerant quantum convolutional neural networks for symmetry-protected topological phases [0.0]
Quantum neural networks based on parametrized quantum circuits, measurements and feed-forward can process large amounts of quantum data.
We construct quantum convolutional neural networks (QCNNs) that can recognize different symmetry-protected topological phases.
We show that the QCNN output is robust against symmetry-breaking errors below a threshold error probability.
arXiv Detail & Related papers (2023-07-07T16:47:02Z)
- TopGen: Topology-Aware Bottom-Up Generator for Variational Quantum Circuits [26.735857677349628]
Variational Quantum Algorithms (VQAs) are promising candidates for demonstrating quantum advantage on near-term devices.
Designing the ansatz, a variational circuit with parameterized gates, is of paramount importance for VQAs.
We propose a bottom-up approach to generate topology-specific ansatz.
arXiv Detail & Related papers (2022-10-15T04:18:41Z)
- Symmetric Pruning in Quantum Neural Networks [111.438286016951]
Quantum neural networks (QNNs) harness the power of modern quantum machines.
QNNs with handcrafted symmetric ansatzes generally exhibit better trainability than those with asymmetric ansatzes.
We propose the effective quantum neural tangent kernel (EQNTK) to quantify the convergence of QNNs towards the global optima.
arXiv Detail & Related papers (2022-08-30T08:17:55Z)
- Wide Quantum Circuit Optimization with Topology Aware Synthesis [0.8469686352132708]
Unitary synthesis is an optimization technique that can achieve optimal multi-qubit gate counts while mapping quantum circuits to restrictive qubit topologies.
We present TopAS, a topology aware synthesis tool built with the BQSKit framework that preconditions quantum circuits before mapping.
arXiv Detail & Related papers (2022-06-27T21:59:30Z)
- Tensor Ring Parametrized Variational Quantum Circuits for Large Scale Quantum Machine Learning [28.026962110693695]
We propose an algorithm that compresses the quantum state within the circuit using a tensor ring representation.
The storage and computational time increase linearly with the number of qubits and the number of layers, as compared to the exponential increase of exact simulation algorithms.
We achieve a test accuracy of 83.33% on Iris dataset and a maximum of 99.30% and 76.31% on binary and ternary classification of MNIST dataset.
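As a rough illustration of the compression idea (a sketch under stated assumptions, not the paper's implementation), an n-qubit state can be stored as a ring of small three-index tensors, so that memory grows linearly in the number of qubits for a fixed bond dimension rather than exponentially:

```python
import numpy as np

def random_tensor_ring(n_qubits, bond_dim=4, seed=0):
    """One core per qubit with shape (bond, 2, bond); storage is O(n * bond^2)."""
    rng = np.random.default_rng(seed)
    shape = (bond_dim, 2, bond_dim)
    return [rng.normal(size=shape) + 1j * rng.normal(size=shape)
            for _ in range(n_qubits)]

def amplitude(cores, bits):
    """(Unnormalized) amplitude of a computational basis state: the trace of
    the product of the core slices selected by the bit string."""
    mat = np.eye(cores[0].shape[0], dtype=complex)
    for core, b in zip(cores, bits):
        mat = mat @ core[:, b, :]
    return np.trace(mat)

cores = random_tensor_ring(n_qubits=8, bond_dim=4)   # 8 * (4*2*4) complex entries
print(amplitude(cores, bits=[0, 1, 0, 0, 1, 1, 0, 1]))
```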
arXiv Detail & Related papers (2022-01-21T19:54:57Z)
- Toward Trainability of Deep Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase.
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees.
arXiv Detail & Related papers (2021-12-30T10:27:08Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
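A minimal sketch of the mixed-precision idea (illustrative only; the sensitivity measure and bit allocation below are assumptions, not the paper's method) is to quantize each layer uniformly but assign more bits to layers whose quantization hurts performance the most:

```python
import numpy as np

def quantize(weights, n_bits):
    """Uniform symmetric quantization of a weight tensor to n_bits."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    return np.round(weights / scale) * scale

# Hypothetical per-layer sensitivities (e.g. loss increase when quantized);
# more sensitive layers are kept at higher precision.
sensitivity = {"embedding": 0.02, "lstm_1": 0.30, "lstm_2": 0.25, "output": 0.05}
bit_plan = {name: (8 if s > 0.1 else 4) for name, s in sensitivity.items()}
print(bit_plan)   # {'embedding': 4, 'lstm_1': 8, 'lstm_2': 8, 'output': 4}

w = np.random.default_rng(0).normal(size=(4, 4))
print(quantize(w, bit_plan["output"]))     # 4-bit version of a toy weight matrix
```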
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by the QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- APQ: Joint Search for Network Architecture, Pruning and Quantization Policy [49.3037538647714]
We present APQ for efficient deep learning inference on resource-constrained hardware.
Unlike previous methods that separately search the neural architecture, pruning policy, and quantization policy, we optimize them in a joint manner.
With the same accuracy, APQ reduces the latency/energy by 2x/1.3x over MobileNetV2+HAQ.
arXiv Detail & Related papers (2020-06-15T16:09:17Z)
- Improving the Performance of Deep Quantum Optimization Algorithms with Continuous Gate Sets [47.00474212574662]
Variational quantum algorithms are believed to be promising for solving computationally hard problems.
In this paper, we experimentally investigate the circuit-depth-dependent performance of QAOA applied to exact-cover problem instances.
Our results demonstrate that the use of continuous gate sets may be a key component in extending the impact of near-term quantum computers.
arXiv Detail & Related papers (2020-05-11T17:20:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.