Machine learning of high dimensional data on a noisy quantum processor
- URL: http://arxiv.org/abs/2101.09581v1
- Date: Sat, 23 Jan 2021 20:36:44 GMT
- Title: Machine learning of high dimensional data on a noisy quantum processor
- Authors: Evan Peters, João Caldeira, Alan Ho, Stefan Leichenauer, Masoud
Mohseni, Hartmut Neven, Panagiotis Spentzouris, Doug Strain, Gabriel N.
Perdue
- Abstract summary: We present a quantum kernel method for high-dimensional data analysis using Google's universal quantum processor, Sycamore.
This method is successfully applied to the cosmological benchmark of supernova classification using real spectral features with no dimensionality reduction and without vanishing kernel elements.
- Score: 0.3372751145910977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a quantum kernel method for high-dimensional data analysis using
Google's universal quantum processor, Sycamore. This method is successfully
applied to the cosmological benchmark of supernova classification using real
spectral features with no dimensionality reduction and without vanishing kernel
elements. Instead of using a synthetic dataset of low dimension or
pre-processing the data with a classical machine learning algorithm to reduce
the data dimension, this experiment demonstrates that machine learning with
real, high-dimensional data is possible using a quantum processor, but it
requires careful attention to shot statistics and mean kernel element size when
constructing a circuit ansatz. Our experiment utilizes 17 qubits to classify
67-dimensional data - significantly higher dimensionality than the largest
prior quantum kernel experiments - resulting in classification accuracy that is
competitive with noiseless simulation and comparable classical techniques.
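The quantum kernel idea behind the abstract can be sketched in miniature: each data point is embedded as a quantum state, the kernel entry is the fidelity between embedded states, and a classical kernel machine does the classification. The toy below is a hedged illustration only, not the paper's 17-qubit Sycamore circuit: it uses a made-up single-qubit embedding |phi(x)> = cos(x)|0> + sin(x)|1>, evaluates the fidelity exactly instead of estimating it from measurement shots, and classifies by mean kernel value per class.

```python
import math
import random

# Toy quantum-kernel classifier (illustrative assumption, not the paper's
# circuit ansatz): a scalar feature x is embedded as the single-qubit state
# |phi(x)> = cos(x)|0> + sin(x)|1>, so the fidelity kernel is
# K(x, y) = |<phi(x)|phi(y)>|^2 = cos(x - y)^2.
# A quantum processor would estimate each entry from repeated measurement
# shots; here we evaluate it in closed form with no noise.

def fidelity_kernel(x, y):
    """Exact state fidelity for the toy single-qubit embedding."""
    return math.cos(x - y) ** 2

def kernel_mean_classifier(x, class_a, class_b):
    """Assign x to the class with the higher mean fidelity to its samples."""
    score_a = sum(fidelity_kernel(x, a) for a in class_a) / len(class_a)
    score_b = sum(fidelity_kernel(x, b) for b in class_b) / len(class_b)
    return 0 if score_a >= score_b else 1

random.seed(1)
class_a = [random.uniform(0.0, 0.5) for _ in range(20)]  # label 0 samples
class_b = [random.uniform(1.0, 1.5) for _ in range(20)]  # label 1 samples

label_low = kernel_mean_classifier(0.2, class_a, class_b)   # near class_a -> 0
label_high = kernel_mean_classifier(1.3, class_a, class_b)  # near class_b -> 1
```

The paper's point about shot statistics and mean kernel element size arises exactly where this sketch cheats: on hardware each `K(x, y)` is a probability estimated from finitely many shots, so kernels whose typical entries are small get drowned in sampling noise.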
Related papers
- Learning Reduced Representations for Quantum Classifiers [1.4446723310060385]
We apply dimensionality reduction methods to a particle physics data set to train a quantum support vector machine.
We show that the autoencoder methods learn a better lower-dimensional representation of the data, with the method we design, the Sinkclass autoencoder, performing 40% better than the baseline.
arXiv Detail & Related papers (2025-12-01T10:34:41Z) - Experimental Machine Learning with Classical and Quantum Data via NMR Quantum Kernels [0.0]
We implement quantum kernels on a 10-qubit star-topology register in a nuclear magnetic resonance (NMR) platform.
We experimentally encode classical data in the evolution of multiple quantum coherence orders using data-dependent unitary transformations.
Our results show that this kernel exhibits an ability to generalize well over unseen data.
arXiv Detail & Related papers (2024-12-12T18:44:38Z) - Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders [0.0]
We propose an alternative method which reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that comes after.
This method aspires to make quantum machine learning with VQCs more versatile and effective on datasets of high dimension.
arXiv Detail & Related papers (2024-09-05T08:51:20Z) - Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [62.46800898243033]
Recent progress in quantum learning theory prompts a question: can linear properties of a large-qubit circuit be efficiently learned from measurement data generated by varying classical inputs?
We prove that a sample complexity scaling linearly in $d$ is required to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in $d$.
We propose a kernel-based method leveraging classical shadows and truncated trigonometric expansions, enabling a controllable trade-off between prediction accuracy and computational overhead.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - The curse of random quantum data [62.24825255497622]
We quantify the performances of quantum machine learning in the landscape of quantum data.
We find that the training efficiency and generalization capabilities in quantum machine learning will be exponentially suppressed with the increase in qubits.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z) - Supervised binary classification of small-scale digit images and weighted graphs with a trapped-ion quantum processor [56.089799129458875]
We present the results of benchmarking a quantum processor based on trapped $^{171}$Yb$^{+}$ ions.
We perform supervised binary classification on two types of datasets: small binary digit images and weighted graphs with a ring topology.
arXiv Detail & Related papers (2024-06-17T18:20:51Z) - Guided Quantum Compression for Higgs Identification [0.0]
Quantum machine learning provides a fundamentally novel and promising approach to analyzing data.
We show that using a classical auto-encoder as an independent preprocessing step can significantly decrease the classification performance of a quantum machine learning algorithm.
We design an architecture that unifies the preprocessing and quantum classification algorithms into a single trainable model: the guided quantum compression model.
arXiv Detail & Related papers (2024-02-14T19:01:51Z) - Neural auto-designer for enhanced quantum kernels [59.616404192966016]
We present a data-driven approach that automates the design of problem-specific quantum feature maps.
Our work highlights the substantial role of deep learning in advancing quantum machine learning.
arXiv Detail & Related papers (2024-01-20T03:11:59Z) - Higher-order topological kernels via quantum computation [68.8204255655161]
Topological data analysis (TDA) has emerged as a powerful tool for extracting meaningful insights from complex data.
We propose a quantum approach to defining Betti kernels, which is based on constructing Betti curves with increasing order.
arXiv Detail & Related papers (2023-07-14T14:48:52Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
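The data re-uploading technique mentioned above can be sketched classically by simulating a single qubit: the same feature x is fed into several successive rotation layers, and the class score is the probability of measuring |0>. This is a hedged toy simulation, not the summary's qiskit implementation; the layer parameters are fixed made-up values, whereas in practice they would be trained.

```python
import math

# Toy single-qubit data re-uploading circuit (illustrative assumption):
# each layer applies R_y(theta_k + w_k * x), re-uploading the feature x,
# and the output is P(|0>) after all layers.

def ry(angle):
    """Real 2x2 matrix of the single-qubit R_y rotation gate."""
    c, s = math.cos(angle / 2), math.sin(angle / 2)
    return [[c, -s], [s, c]]

def apply(gate, state):
    """Apply a 2x2 gate to a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def reupload_circuit(x, params):
    """Probability of measuring |0> after re-uploading x in every layer."""
    state = [1.0, 0.0]            # start in |0>
    for theta, w in params:       # each layer sees the raw feature again
        state = apply(ry(theta + w * x), state)
    return state[0] ** 2

# Hypothetical, untrained layer parameters (theta_k, w_k):
params = [(0.1, 1.0), (-0.2, 0.5), (0.3, 1.5)]
p0 = reupload_circuit(0.4, params)
```

Repeating the data at every layer is what lets a single qubit express nontrivial decision boundaries; with a single upload the model would reduce to one rotation of the input.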
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Fitting a Collider in a Quantum Computer: Tackling the Challenges of
Quantum Machine Learning for Big Datasets [0.0]
Feature and data prototype selection techniques were studied to tackle this challenge.
A grid search was performed and quantum machine learning models were trained and benchmarked against classical shallow machine learning methods.
The performance of the quantum algorithms was found to be comparable to the classical ones, even when using large datasets.
arXiv Detail & Related papers (2022-11-06T22:45:37Z) - Parametric t-Stochastic Neighbor Embedding With Quantum Neural Network [0.6946929968559495]
t-Stochastic Neighbor Embedding (t-SNE) is a non-parametric data visualization method in classical machine learning.
We propose to use quantum neural networks for parametric t-SNE to reflect the characteristics of high-dimensional quantum data on low-dimensional data.
arXiv Detail & Related papers (2022-02-09T02:49:54Z) - Large-scale quantum machine learning [0.0]
We measure quantum kernels using randomized measurements to gain a quadratic speedup in time and quickly process large datasets.
We efficiently encode high-dimensional data into quantum computers with the number of features scaling linearly with the circuit depth.
Using currently available quantum computers, the MNIST database can be processed within 220 hours instead of 10 years.
arXiv Detail & Related papers (2021-08-02T17:00:18Z) - Quantum Algorithms for Data Representation and Analysis [68.754953879193]
We provide quantum procedures that speed-up the solution of eigenproblems for data representation in machine learning.
The power and practical use of these subroutines is shown through new quantum algorithms, sublinear in the input matrix's size, for principal component analysis, correspondence analysis, and latent semantic analysis.
Results show that the run-time parameters that do not depend on the input's size are reasonable and that the error on the computed model is small, allowing for competitive classification performances.
arXiv Detail & Related papers (2021-04-19T00:41:43Z) - Nearest Centroid Classification on a Trapped Ion Quantum Computer [57.5195654107363]
We design a quantum Nearest Centroid classifier, using techniques for efficiently loading classical data into quantum states and performing distance estimations.
We experimentally demonstrate it on an 11-qubit trapped-ion quantum machine, matching the accuracy of classical nearest centroid classifiers for the MNIST handwritten digits dataset and achieving up to 100% accuracy for 8-dimensional synthetic data.
arXiv Detail & Related papers (2020-12-08T01:10:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.