Continuous Variable Quantum MNIST Classifiers
- URL: http://arxiv.org/abs/2204.01194v1
- Date: Mon, 4 Apr 2022 00:51:24 GMT
- Title: Continuous Variable Quantum MNIST Classifiers
- Authors: Sophie Choe
- Abstract summary: Quantum neural network hybrid multiclassifiers are presented using the MNIST dataset.
A total of eight different classifiers are built using 2, 3, ..., 8 qumodes.
On a truncated MNIST dataset of 600 samples, a 4 qumode hybrid classifier achieves 100% training accuracy.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, classical and continuous variable (CV) quantum neural network
hybrid multiclassifiers are presented using the MNIST dataset. The combination
of the cutoff dimension and the probability measurement method in the CV model
allows a quantum circuit to produce output vectors of size n^m, where n is the
cutoff dimension and m the number of qumodes. These vectors are then
interpreted as one-hot encoded labels, padded with an appropriate number of
zeros. A total of eight different classifiers are built using 2, 3, ..., 8
qumodes, based on the binary classifier architecture proposed in "Continuous
variable quantum neural networks". The displacement gate and the Kerr gate in
the CV model provide quantum counterparts of the bias addition and nonlinear
activation components of classical neural networks. The classifiers are
composed of a classical feedforward neural network, a quantum data encoding
circuit, and a CV quantum neural network circuit. On a truncated MNIST dataset
of 600 samples, a 4 qumode hybrid classifier achieves 100% training accuracy.
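The dimension bookkeeping described in the abstract can be sketched in plain Python (a minimal illustration only; the function names and the assumption of 10 MNIST classes are mine, not from the paper's code):

```python
def output_size(cutoff: int, qumodes: int) -> int:
    # Measuring Fock-basis probabilities of m qumodes, each truncated at
    # cutoff dimension n, yields a probability vector of size n**m.
    return cutoff ** qumodes

def padded_one_hot(label: int, cutoff: int, qumodes: int,
                   num_classes: int = 10) -> list:
    # One-hot encode a class label, then zero-pad it to the circuit's
    # output dimension so label and measurement vector can be compared.
    dim = output_size(cutoff, qumodes)
    if dim < num_classes:
        raise ValueError("cutoff**qumodes must be at least num_classes")
    vec = [0.0] * dim
    vec[label] = 1.0
    return vec

# Example: 2 qumodes with cutoff 4 give a 16-dimensional output,
# so a 10-class MNIST label is padded with 6 extra zeros.
print(output_size(4, 2))                          # 16
print(padded_one_hot(3, cutoff=4, qumodes=2)[3])  # 1.0
```

The only constraint this sketch enforces is that n^m must be at least the number of classes, which is what makes the zero-padding of the one-hot labels well defined.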
Related papers
- Single-Qudit Quantum Neural Networks for Multiclass Classification [0.0]
This paper proposes a single-qudit quantum neural network for multiclass classification.
Our design employs a $d$-dimensional unitary operator, where $d$ corresponds to the number of classes.
We evaluate our model on the MNIST and EMNIST datasets, demonstrating competitive accuracy.
arXiv Detail & Related papers (2025-03-12T11:12:05Z) - Quantum Transfer Learning for MNIST Classification Using a Hybrid Quantum-Classical Approach [0.0]
We implement a hybrid quantum-classical model for image classification that compresses MNIST digit images into a low-dimensional feature space. An autoencoder compresses each $28\times28$ image (784 pixels) into a 64-dimensional latent vector. We map these features onto a 5-qubit quantum state.
arXiv Detail & Related papers (2024-08-05T22:16:27Z) - A Quantum-Classical Collaborative Training Architecture Based on Quantum
State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z) - Experimentally Realizable Continuous-variable Quantum Neural Networks [0.0]
Continuous-variable (CV) quantum computing has shown great potential for building neural network models.
Previous work on CV neural network protocols required the implementation of non-Gaussian operators in the network.
We built a CV hybrid quantum-classical neural network protocol that can be realized experimentally with current photonic quantum hardware.
arXiv Detail & Related papers (2023-06-05T01:18:41Z) - Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z) - A performance characterization of quantum generative models [35.974070202997176]
We compare quantum circuits used for quantum generative modeling.
We learn the underlying probability distribution of the data sets via two popular training methods.
We empirically find that a variant of the discrete architecture, which learns the copula of the probability distribution, outperforms all other methods.
arXiv Detail & Related papers (2023-01-23T11:00:29Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models
for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Multi-class quantum classifiers with tensor network circuits for quantum
phase recognition [0.0]
Tensor-network-inspired circuits have been proposed as a natural choice for variational quantum eigensolver circuits.
We present numerical experiments on multi-class classifiers based on tree tensor network and multiscale entanglement renormalization ansatz circuits.
arXiv Detail & Related papers (2021-10-15T21:55:13Z) - Cluster-Promoting Quantization with Bit-Drop for Minimizing Network
Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z) - Quantum convolutional neural network for classical data classification [0.8057006406834467]
We benchmark fully parameterized quantum convolutional neural networks (QCNNs) for classical data classification.
We propose a quantum neural network model inspired by CNN that only uses two-qubit interactions throughout the entire algorithm.
arXiv Detail & Related papers (2021-08-02T06:48:34Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantum Federated Learning with Quantum Data [87.49715898878858]
Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems.
This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner.
arXiv Detail & Related papers (2021-05-30T12:19:27Z) - An end-to-end trainable hybrid classical-quantum classifier [0.0]
We introduce a hybrid model combining a quantum-inspired tensor network and a variational quantum circuit to perform supervised learning tasks.
This architecture allows for the classical and quantum parts of the model to be trained simultaneously, providing an end-to-end training framework.
arXiv Detail & Related papers (2021-02-04T05:19:54Z) - Hybrid quantum-classical classifier based on tensor network and
variational quantum circuit [0.0]
We introduce a hybrid model combining the quantum-inspired tensor networks (TN) and the variational quantum circuits (VQC) to perform supervised learning tasks.
We show that a matrix product state based TN with low bond dimensions performs better than PCA as a feature extractor to compress data for the input of VQCs in the binary classification of MNIST dataset.
arXiv Detail & Related papers (2020-11-30T09:43:59Z) - Decentralizing Feature Extraction with Quantum Convolutional Neural
Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
Input speech is first up-streamed to a quantum computing server to extract the Mel-spectrogram.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z) - A Co-Design Framework of Neural Networks and Quantum Circuits Towards
Quantum Advantage [37.837850621536475]
In this article, we present the co-design framework, namely QuantumFlow, to provide such a missing link.
QuantumFlow consists of novel quantum-friendly neural networks (QF-Nets), a mapping tool (QF-Map) to generate the quantum circuit (QF-Circ) for QF-Nets, and an execution engine (QF-FB)
Evaluation results show that QF-pNet and QF-hNet can achieve 97.10% and 98.27% accuracy, respectively.
arXiv Detail & Related papers (2020-06-26T06:25:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.