Encoding optimization for quantum machine learning demonstrated on a
superconducting transmon qutrit
- URL: http://arxiv.org/abs/2309.13036v1
- Date: Fri, 22 Sep 2023 17:53:16 GMT
- Title: Encoding optimization for quantum machine learning demonstrated on a
superconducting transmon qutrit
- Authors: Shuxiang Cao, Weixi Zhang, Jules Tilly, Abhishek Agarwal, Mustafa
Bakr, Giulio Campanaro, Simone D Fasciati, James Wills, Boris Shteynas, Vivek
Chidambaram, Peter Leek and Ivan Rungger
- Abstract summary: Three-level quantum systems have the advantage of requiring fewer components than the typically used two-level qubits.
This work investigates the potential of qutrit parametric circuits in machine learning classification applications.
- Score: 1.6460874590065597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Qutrits, three-level quantum systems, have the advantage of potentially
requiring fewer components than the typically used two-level qubits to
construct equivalent quantum circuits. This work investigates the potential of
qutrit parametric circuits in machine learning classification applications. We
propose and evaluate different data-encoding schemes for qutrits, and find that
the classification accuracy varies significantly depending on the encoding used.
We therefore propose a training method for encoding optimization that allows us
to consistently achieve high classification accuracy. Our theoretical
analysis and numerical simulations indicate that the qutrit classifier can
achieve high classification accuracy using fewer components than a comparable
qubit system. We showcase the qutrit classification using the optimized
encoding method on superconducting transmon qutrits, demonstrating the
practicality of the proposed method on noisy hardware. Our work demonstrates
high-precision ternary classification using fewer circuit elements,
establishing qutrit parametric quantum circuits as a viable and efficient tool
for quantum machine learning applications.
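As a rough illustration of the idea described in the abstract (a qutrit parametric circuit whose data encoding is itself optimized during training), the sketch below encodes a feature vector into a single qutrit through trainable Gell-Mann rotations and reads out the three level populations as ternary class scores. The linear feature-to-angle map, the choice of generators, and all names are illustrative assumptions, not the authors' actual circuit or training procedure.

```python
# Minimal sketch (not the paper's actual circuit): a single-qutrit classifier
# whose data encoding is parameterized and can itself be trained.
import numpy as np
from scipy.linalg import expm

# Two Gell-Mann matrices generating rotations in the 0-1 and 1-2 subspaces.
G01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
G12 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)

def encode(x, w):
    """Map features x to rotation angles via the trainable matrix w
    (the 'encoding optimization' idea: the data-to-angle map is learned)."""
    theta = w @ x                                  # learned linear map
    U = expm(-1j * (theta[0] * G01 + theta[1] * G12))
    psi0 = np.array([1, 0, 0], dtype=complex)      # qutrit initialized in |0>
    return U @ psi0

def class_probabilities(x, w):
    """Populations of |0>, |1>, |2> serve as scores for a 3-class problem."""
    return np.abs(encode(x, w)) ** 2

# Example: a random 4-feature input with randomly initialized encoding weights.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
w = rng.normal(size=(2, 4))                        # 2 angles, 4 input features
print(class_probabilities(x, w))                   # three probabilities summing to 1
```

In the paper's setting the encoding parameters would be trained jointly with the circuit parameters (for example by gradient descent on a classification loss), which is what makes the encoding itself a subject of optimization.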
Related papers
- Triplet Loss Based Quantum Encoding for Class Separability [2.7641963278515114]
The encoding circuit is trained using a triplet loss function inspired by classical facial recognition algorithms; a minimal illustrative sketch of such a loss is given after this list.
Benchmark tests performed on various binary classification tasks on the MNIST and MedMNIST datasets demonstrate considerable improvement over amplitude encoding with the same VQC structure.
arXiv Detail & Related papers (2025-09-19T07:28:19Z)
- Provably Robust Training of Quantum Circuit Classifiers Against Parameter Noise [49.97673761305336]
Noise remains a major obstacle to achieving reliable quantum algorithms.
We present a provably noise-resilient training theory and algorithm to enhance the robustness of parameterized quantum circuit classifiers.
arXiv Detail & Related papers (2025-05-24T02:51:34Z)
- An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks.
We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings.
We evaluate our approach on text and image classification tasks, against well-established classical and quantum models.
arXiv Detail & Related papers (2025-04-13T11:49:53Z)
- Quantum autoencoders for image classification [0.0]
Quantum autoencoders (QAEs) leverage classical optimization solely for parameter tuning.
This study introduces a novel image-classification approach using QAEs, achieving classification without requiring additional qubits.
arXiv Detail & Related papers (2025-02-21T07:13:38Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Supervised binary classification of small-scale digits images with a trapped-ion quantum processor [56.089799129458875]
We show that a quantum processor can correctly solve the basic classification task considered.
As the capabilities of quantum processors increase, they can become a useful tool for machine learning.
arXiv Detail & Related papers (2024-06-17T18:20:51Z)
- Data re-uploading with a single qudit [1.0923877073891446]
Two-level quantum systems, i.e. qubits, form the basis for most quantum machine learning approaches.
We explore the capabilities of multi-level quantum systems, so-called qudits, for use in a quantum machine learning context; a minimal data re-uploading sketch is given after this list.
arXiv Detail & Related papers (2023-02-27T16:32:16Z)
- Ensemble-learning variational shallow-circuit quantum classifiers [4.104704267247209]
We propose two ensemble-learning classification methods, namely bootstrap aggregating and adaptive boosting.
The protocols have been exemplified for classical handwritten digits as well as quantum phase discrimination of a symmetry-protected topological Hamiltonian.
arXiv Detail & Related papers (2023-01-30T07:26:35Z)
- Quantum algorithm for neural network enhanced multi-class parallel classification [0.3314882635954752]
The proposed algorithm has higher classification accuracy, faster convergence, and greater expressive power.
For an $L$-class classification task, the analysis shows that the space and time complexity of the quantum circuit are $O(L \log L)$ and $O(\log L)$, respectively.
arXiv Detail & Related papers (2022-03-08T14:06:13Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain an 81.29% top-1 accuracy using the DeiT-B model on the ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z)
- Robust quantum classifier with minimal overhead [0.8057006406834467]
Several quantum algorithms for binary classification based on the kernel method have been proposed.
These algorithms rely on estimating an expectation value, which in turn requires an expensive quantum data encoding procedure to be repeated many times.
We show that the kernel-based binary classification can be performed with a single-qubit measurement regardless of the number and the dimension of the data.
arXiv Detail & Related papers (2021-04-16T14:51:00Z)
- Efficient and robust certification of genuine multipartite entanglement in noisy quantum error correction circuits [58.720142291102135]
We introduce a conditional witnessing technique to certify genuine multipartite entanglement (GME).
We prove that detecting entanglement in a linear number of bipartitions, with a number of measurements that also scales linearly, suffices to certify GME.
We apply our method to the noisy readout of stabilizer operators of the distance-three topological color code and its flag-based fault-tolerant version.
arXiv Detail & Related papers (2020-10-06T18:00:07Z)
- Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase stability and can rely on the large-scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
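For the triplet-loss encoding paper listed above, the following is a minimal sketch of such a loss, assuming fidelity between encoded pure states is used as the similarity measure; the function names and the fidelity-based distance are illustrative assumptions, not the cited paper's exact definitions.

```python
# Minimal triplet-loss sketch, assuming fidelity between encoded pure states
# is the similarity measure (an illustrative assumption).
import numpy as np

def fidelity(psi, phi):
    """Overlap |<psi|phi>|^2 between two encoded pure states."""
    return np.abs(np.vdot(psi, phi)) ** 2

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-class encodings together, push different-class ones apart."""
    d_pos = 1.0 - fidelity(anchor, positive)   # distance to a same-class sample
    d_neg = 1.0 - fidelity(anchor, negative)   # distance to a different-class sample
    return max(d_pos - d_neg + margin, 0.0)
```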
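For the single-qudit data re-uploading paper listed above, the sketch below illustrates the general re-uploading pattern on a qutrit: data-dependent rotations are interleaved with trainable rotations over several layers. The specific generators, layer structure, and parameter names are assumptions made for illustration, not the cited paper's ansatz.

```python
# Minimal data re-uploading sketch on a qutrit (illustrative only): alternate
# data-dependent and trainable Gell-Mann rotations over several layers.
import numpy as np
from scipy.linalg import expm

G01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)  # 0-1 subspace
G12 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)  # 1-2 subspace

def reupload_state(x, weights):
    """x: 2 features; weights: (layers, 2) array of trainable angles."""
    psi = np.array([1, 0, 0], dtype=complex)                 # start in |0>
    for w in weights:
        U_data = expm(-1j * (x[0] * G01 + x[1] * G12))       # re-upload the data
        U_train = expm(-1j * (w[0] * G01 + w[1] * G12))      # trainable layer
        psi = U_train @ (U_data @ psi)
    return psi

# Example: 3 re-uploading layers; class scores from the level populations.
rng = np.random.default_rng(1)
probs = np.abs(reupload_state(rng.normal(size=2), rng.normal(size=(3, 2)))) ** 2
print(probs)
```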