Tensor Ring Parametrized Variational Quantum Circuits for Large Scale
Quantum Machine Learning
- URL: http://arxiv.org/abs/2201.08878v1
- Date: Fri, 21 Jan 2022 19:54:57 GMT
- Title: Tensor Ring Parametrized Variational Quantum Circuits for Large Scale
Quantum Machine Learning
- Authors: Dheeraj Peddireddy, Vipul Bansal, Zubin Jacob, and Vaneet Aggarwal
- Abstract summary: We propose an algorithm that compresses the quantum state within the circuit using a tensor ring representation.
The storage and computational time increase linearly in the number of qubits and the number of layers, compared to the exponential increase with exact simulation algorithms.
We achieve a test accuracy of 83.33% on the Iris dataset and maximum accuracies of 99.30% and 76.31% on binary and ternary classification of the MNIST dataset.
- Score: 28.026962110693695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum Machine Learning (QML) is an emerging research area advocating the
use of quantum computing for advancement in machine learning. Since the
discovery of the capability of Parametrized Variational Quantum Circuits (VQC)
to replace Artificial Neural Networks, they have been widely adopted for
different tasks in Quantum Machine Learning. However, despite their potential
to outperform neural networks, VQCs are limited to small scale applications
given the challenges in scalability of quantum circuits. To address this
shortcoming, we propose an algorithm that compresses the quantum state within
the circuit using a tensor ring representation. Given an input qubit state in
the tensor ring representation, single-qubit gates preserve the tensor ring
structure exactly. However, the same is not true for two-qubit gates in general,
so an approximation is used to keep the output in tensor ring form. With this
approximation, the storage and computational time increase linearly in the
number of qubits and the number of layers, compared to the exponential increase
with exact simulation algorithms. This approximation
is used to implement the tensor ring VQC. The training of the parameters of
tensor ring VQC is performed using a gradient descent based algorithm, where
efficient approaches for backpropagation are used. The proposed approach is
evaluated on two datasets, Iris and MNIST, for the classification task to show
the improvement in accuracy obtained by using more qubits. We achieve a test
accuracy of 83.33% on the Iris dataset and maximum accuracies of 99.30% and
76.31% on binary and ternary classification of the MNIST dataset using various
circuit architectures. The results on the Iris dataset outperform those of a
VQC implemented in Qiskit and, being scalable, demonstrate the potential for
VQCs to be used in large-scale Quantum Machine Learning applications.
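To make the gate rules above concrete, the following is a minimal NumPy sketch of a tensor ring state and its update, assuming cores of shape (bond, physical, bond) whose traced product gives the amplitudes. The helper names, the bond-dimension cap `r_max`, and the truncated-SVD split of merged cores are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def random_tr_state(n_qubits, r):
    """Random (unnormalized) TR cores of shape (r, 2, r), for illustration only."""
    return [np.random.randn(r, 2, r) + 1j * np.random.randn(r, 2, r)
            for _ in range(n_qubits)]

def amplitude(cores, bits):
    """psi[i1, ..., in] = Tr(G1[i1] @ G2[i2] @ ... @ Gn[in])."""
    m = np.eye(cores[0].shape[0], dtype=complex)
    for core, b in zip(cores, bits):
        m = m @ core[:, b, :]
    return np.trace(m)

def apply_single_qubit_gate(cores, U, k):
    """Exact update: contract the 2x2 gate U into the physical leg of core k."""
    cores[k] = np.einsum('ij,ajb->aib', U, cores[k])
    return cores

def apply_two_qubit_gate(cores, U, k, r_max):
    """Approximate update: merge adjacent cores k and k+1, apply the 4x4 gate U,
    then split back with an SVD truncated to at most r_max singular values."""
    G1, G2 = cores[k], cores[k + 1]
    theta = np.einsum('aib,bjc->aijc', G1, G2)        # merged tensor, shape (r, 2, 2, r)
    theta = np.einsum('ijkl,aklc->aijc', U.reshape(2, 2, 2, 2), theta)
    a, _, _, c = theta.shape
    u, s, vh = np.linalg.svd(theta.reshape(a * 2, 2 * c), full_matrices=False)
    chi = min(r_max, len(s))                          # truncation = the TR approximation
    cores[k] = u[:, :chi].reshape(a, 2, chi)
    cores[k + 1] = (np.diag(s[:chi]) @ vh[:chi, :]).reshape(chi, 2, c)
    return cores

# Example: a Hadamard on qubit 0 followed by a CNOT on qubits (0, 1).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]
cores = random_tr_state(n_qubits=4, r=3)
cores = apply_single_qubit_gate(cores, H, 0)
cores = apply_two_qubit_gate(cores, CNOT, 0, r_max=3)
print(amplitude(cores, (0, 1, 0, 0)))
```

The truncated SVD in `apply_two_qubit_gate` is where the approximation, and hence the linear scaling in qubits and layers, comes from: without the cap on the bond dimension, repeated two-qubit gates would grow the cores exponentially with circuit depth.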
Related papers
- Non-parametric Greedy Optimization of Parametric Quantum Circuits [2.77390041716769]
This work aims to reduce the depth and gate count of PQCs by replacing parametric gates with approximate fixed non-parametric representations.
We observe roughly a 14% reduction in depth and a 48% reduction in gate count at the cost of a 3.33% reduction in inference accuracy.
arXiv Detail & Related papers (2024-01-27T15:29:38Z)
- Training Multi-layer Neural Networks on Ising Machine [41.95720316032297]
This paper proposes an Ising learning algorithm to train quantized neural networks (QNNs).
As far as we know, this is the first algorithm to train multi-layer feedforward networks on Ising machines.
arXiv Detail & Related papers (2023-11-06T04:09:15Z)
- Weight Re-Mapping for Variational Quantum Algorithms [54.854986762287126]
We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC.
arXiv Detail & Related papers (2023-06-09T09:42:21Z)
- Learning To Optimize Quantum Neural Network Without Gradients [3.9848482919377006]
We introduce a novel meta-optimization algorithm that trains a meta-optimizer network to output parameters for the quantum circuit.
We show that it achieves better-quality minima in fewer circuit evaluations than existing gradient-based algorithms on different datasets.
arXiv Detail & Related papers (2023-04-15T01:09:12Z)
- Variational Quantum Eigensolver for Classification in Credit Sales Risk [0.5524804393257919]
We consider a quantum circuit based on the Variational Quantum Eigensolver (VQE) and the so-called SWAP test.
The data set used contains two classes: cases with low and with high credit risk.
The solution is compact and requires only a logarithmically increasing number of qubits.
arXiv Detail & Related papers (2023-03-05T23:08:39Z)
- TeD-Q: a tensor network enhanced distributed hybrid quantum machine learning framework [59.07246314484875]
TeD-Q is an open-source software framework for quantum machine learning.
It seamlessly integrates classical machine learning libraries with quantum simulators.
It provides a graphical mode in which the quantum circuit and the training progress can be visualized in real-time.
arXiv Detail & Related papers (2023-01-13T09:35:05Z)
- Improving Convergence for Quantum Variational Classifiers using Weight Re-Mapping [60.086820254217336]
In recent years, quantum machine learning has seen a substantial increase in the use of variational quantum circuits (VQCs).
We introduce weight re-mapping for VQCs to unambiguously map the weights to an interval of length $2\pi$ (a small sketch of such a mapping appears after this list).
We demonstrate that weight re-mapping increased test accuracy for the Wine dataset by $10\%$ over using unmodified weights.
arXiv Detail & Related papers (2022-12-22T13:23:19Z)
- TopGen: Topology-Aware Bottom-Up Generator for Variational Quantum Circuits [26.735857677349628]
Variational Quantum Algorithms (VQAs) are promising candidates for demonstrating quantum advantages on near-term devices.
Designing an ansatz, a variational circuit with parameterized gates, is of paramount importance for VQAs.
We propose a bottom-up approach to generate topology-specific ansatz.
arXiv Detail & Related papers (2022-10-15T04:18:41Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Supervised Learning Using a Dressed Quantum Network with "Super Compressed Encoding": Algorithm and Quantum-Hardware-Based Implementation [7.599675376503671]
Implementation of variational Quantum Machine Learning (QML) algorithms on Noisy Intermediate-Scale Quantum (NISQ) devices has issues related to the high number of qubits needed and the noise associated with multi-qubit gates.
We propose a variational QML algorithm using a dressed quantum network to address these issues.
Unlike in most other existing QML algorithms, our quantum circuit consists only of single-qubit gates, making it robust against noise.
arXiv Detail & Related papers (2020-07-20T16:29:32Z)
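Since two of the entries above concern weight re-mapping, a small sketch of the idea follows. The specific squashing functions are illustrative assumptions and not necessarily the ones evaluated in the cited papers.

```python
import numpy as np

# Weight re-mapping: squash an unbounded trainable weight into an interval of
# length 2*pi before it is used as a rotation angle in the variational circuit.
# The tanh- and arctan-based mappings below are illustrative choices only.

def remap_tanh(w):
    return np.pi * np.tanh(w)      # maps R onto (-pi, pi)

def remap_arctan(w):
    return 2.0 * np.arctan(w)      # also maps R onto (-pi, pi), saturating more slowly

# A re-mapped weight would then parameterize a rotation gate, e.g. RY(remap_tanh(w)).
```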
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.