Large-scale quantum reservoir learning with an analog quantum computer
- URL: http://arxiv.org/abs/2407.02553v1
- Date: Tue, 2 Jul 2024 18:00:00 GMT
- Title: Large-scale quantum reservoir learning with an analog quantum computer
- Authors: Milan Kornjača, Hong-Ye Hu, Chen Zhao, Jonathan Wurtz, Phillip Weinberg, Majd Hamdan, Andrii Zhdanov, Sergio H. Cantu, Hengyun Zhou, Rodrigo Araiza Bravo, Kevin Bagnall, James I. Basham, Joseph Campo, Adam Choukri, Robert DeAngelo, Paige Frederick, David Haines, Julian Hammett, Ning Hsu, Ming-Guang Hu, Florian Huber, Paul Niklas Jepsen, Ningyuan Jia, Thomas Karolyshyn, Minho Kwon, John Long, Jonathan Lopatin, Alexander Lukin, Tommaso Macrì, Ognjen Marković, Luis A. Martínez-Martínez, Xianmei Meng, Evgeny Ostroumov, David Paquette, John Robinson, Pedro Sales Rodriguez, Anshuman Singh, Nandan Sinha, Henry Thoreen, Noel Wan, Daniel Waxman-Lenz, Tak Wong, Kai-Hsin Wu, Pedro L. S. Lopes, Yuval Boger, Nathan Gemelke, Takuya Kitagawa, Alexander Keesling, Xun Gao, Alexei Bylinskii, Susanne F. Yelin, Fangli Liu, Sheng-Tao Wang,
- Abstract summary: We develop a quantum reservoir learning algorithm that harnesses the quantum dynamics of neutral-atom analog quantum computers to process data.
We experimentally implement the algorithm, achieving competitive performance across various categories of machine learning tasks.
Our findings demonstrate the potential of utilizing classically intractable quantum correlations for effective machine learning.
- Score: 45.21335836399935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum machine learning has gained considerable attention as quantum technology advances, presenting a promising approach for efficiently learning complex data patterns. Despite this promise, most contemporary quantum methods require significant resources for variational parameter optimization and face issues with vanishing gradients, leading to experiments that are either limited in scale or lack potential for quantum advantage. To address this, we develop a general-purpose, gradient-free, and scalable quantum reservoir learning algorithm that harnesses the quantum dynamics of neutral-atom analog quantum computers to process data. We experimentally implement the algorithm, achieving competitive performance across various categories of machine learning tasks, including binary and multi-class classification, as well as timeseries prediction. Effective and improving learning is observed with increasing system sizes of up to 108 qubits, demonstrating the largest quantum machine learning experiment to date. We further observe comparative quantum kernel advantage in learning tasks by constructing synthetic datasets based on the geometric differences between generated quantum and classical data kernels. Our findings demonstrate the potential of utilizing classically intractable quantum correlations for effective machine learning. We expect these results to stimulate further extensions to different quantum hardware and machine learning paradigms, including early fault-tolerant hardware and generative machine learning tasks.
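To make the pipeline concrete, the sketch below illustrates the general quantum-reservoir-learning recipe the abstract describes: data are encoded into parameters of an analog Hamiltonian, the system evolves under fixed (untrained) quantum dynamics, measured observables serve as reservoir features, and only a classical linear readout is trained. Everything in the sketch is an illustrative assumption rather than the authors' protocol: a 6-qubit exact simulation with an Ising-type Hamiltonian stands in for the neutral-atom analog hardware, the encoding into local detunings and the choice of Z-basis correlators as features are placeholders, and the ridge readout is one common gradient-free choice.

```python
# Minimal sketch of a quantum-reservoir-learning pipeline (illustrative only).
# A 6-qubit exact simulation stands in for the neutral-atom analog hardware;
# the Hamiltonian, data encoding, and readout below are assumptions, not the
# authors' published protocol.
import numpy as np
from numpy.linalg import eigh
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

N_QUBITS = 6      # tiny stand-in for the up-to-108-qubit experiments
T_EVOLVE = 2.0    # fixed analog evolution time (arbitrary units)
RNG = np.random.default_rng(0)

I2 = np.eye(2)
PX = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
PZ = np.diag([1.0, -1.0])                 # Pauli Z

def embed(op, site, n):
    """Single-qubit operator `op` acting on `site` of an n-qubit register."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

def reservoir_features(x, n=N_QUBITS):
    """Encode x into local detunings, evolve fixed dynamics, measure Z correlators."""
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H += 1.0 * embed(PX, i, n)                        # fixed transverse drive
        H += x[i % len(x)] * embed(PZ, i, n)              # data-dependent detuning
    for i in range(n - 1):
        H += 0.5 * embed(PZ, i, n) @ embed(PZ, i + 1, n)  # nearest-neighbour coupling
    evals, evecs = eigh(H)                                # exact evolution of |0...0>
    psi0 = np.zeros(2**n)
    psi0[0] = 1.0
    psi = evecs @ (np.exp(-1j * evals * T_EVOLVE) * (evecs.T @ psi0))
    zs = [embed(PZ, i, n) for i in range(n)]
    feats = [np.real(psi.conj() @ z @ psi) for z in zs]          # <Z_i>
    feats += [np.real(psi.conj() @ zs[i] @ zs[j] @ psi)          # <Z_i Z_j>
              for i in range(n) for j in range(i + 1, n)]
    return np.array(feats)

# Toy binary-classification data standing in for the benchmark tasks.
X_data = RNG.normal(size=(120, 4))
y_data = (X_data[:, 0] * X_data[:, 1] > 0).astype(int)

Phi = np.array([reservoir_features(x) for x in X_data])
X_tr, X_te, y_tr, y_te = train_test_split(Phi, y_data, random_state=0)

# Gradient-free training: the quantum dynamics are never optimized; only the
# classical linear readout is fit, so there are no vanishing-gradient issues.
readout = RidgeClassifier(alpha=1.0).fit(X_tr, y_tr)
print("test accuracy:", readout.score(X_te, y_te))
```

On hardware, the simulation step is replaced by the analog quantum computer itself; because the quantum dynamics are never differentiated or optimized, the approach sidesteps the variational-parameter optimization and vanishing-gradient issues mentioned above, and the same readout-only training applies at 108 qubits.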
Related papers
- Quantum continual learning on a programmable superconducting processor [17.787742382926137]
We show that a quantum classifier can incrementally learn and retain knowledge across three distinct tasks.
Our results establish a viable strategy for empowering quantum learning systems with desirable adaptability to multiple sequential tasks.
arXiv Detail & Related papers (2024-09-15T13:16:56Z)
- Quantum-Powered Personalized Learning [3.1523832615228295]
We review existing personalized learning systems, classical machine learning methods, and emerging quantum computing applications in education.
Our findings indicate that quantum algorithms offer substantial improvements in efficiency, scalability, and personalization quality compared to classical methods.
arXiv Detail & Related papers (2024-08-25T17:45:48Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning in the landscape of quantum data.
We find that the training efficiency and generalization capabilities in quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- Quantum data learning for quantum simulations in high-energy physics [55.41644538483948]
We explore the applicability of quantum-data learning to practical problems in high-energy physics.
We make use of an ansatz based on quantum convolutional neural networks and numerically show that it is capable of recognizing quantum phases of ground states.
The observation of non-trivial learning properties demonstrated in these benchmarks will motivate further exploration of the quantum-data learning architecture in high-energy physics.
arXiv Detail & Related papers (2023-06-29T18:00:01Z)
- Quantum Machine Learning: from physics to software engineering [58.720142291102135]
We show how classical machine learning approaches can help improve the capabilities of quantum computers.
We discuss how quantum algorithms and quantum computers may be useful for solving classical machine learning tasks.
arXiv Detail & Related papers (2023-01-04T23:37:45Z)
- Modern applications of machine learning in quantum sciences [51.09906911582811]
We cover the use of deep learning and kernel methods in supervised, unsupervised, and reinforcement learning algorithms.
We discuss more specialized topics such as differentiable programming, generative models, statistical approaches to machine learning, and quantum machine learning.
arXiv Detail & Related papers (2022-04-08T17:48:59Z)
- Storage properties of a quantum perceptron [0.0]
We investigate the storage capacity of a particular quantum perceptron architecture.
We focus on a specific quantum perceptron model and explore its storage properties in the limit of a large number of inputs.
arXiv Detail & Related papers (2021-11-16T12:32:34Z)
- Power of data in quantum machine learning [2.1012068875084964]
We show that some problems that are classically hard to compute can be easily predicted by classical machine-learning models trained on data.
We propose a projected quantum model that provides a simple and rigorous quantum speed-up for a learning problem in the fault-tolerant regime (a sketch of the projected-kernel idea follows this entry).
arXiv Detail & Related papers (2020-11-03T19:00:01Z)
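For the "Power of data in quantum machine learning" entry above, the projected quantum model can be illustrated with a projected quantum kernel, which compares encoded states through their reduced density matrices rather than full state overlaps. The sketch below is a hedged illustration: the product-state feature map is a toy placeholder chosen only so the code runs in a few lines (in the interesting regime the encoding circuit is entangling and hard to simulate classically), and the Gaussian form over single-qubit reduced density matrices is one common way such kernels are written, not necessarily the exact construction of the cited paper.

```python
# Hedged sketch of a projected quantum kernel: compare encoded states through
# their single-qubit reduced density matrices instead of full state overlaps.
# The product-state encoding below is a toy placeholder so the example runs;
# it is not the construction used in the cited paper.
import numpy as np

def encoded_state(x):
    """Toy feature map: qubit k is RZ(x_k) applied to |+> (a product state)."""
    psi = np.array([1.0])
    for xk in x:
        plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
        rz = np.diag([np.exp(-1j * xk / 2), np.exp(1j * xk / 2)])
        psi = np.kron(psi, rz @ plus)
    return psi

def single_qubit_rdm(psi, k, n):
    """rho_k = Tr_{j != k} |psi><psi|, via reshaping the state vector."""
    a = np.moveaxis(psi.reshape([2] * n), k, 0).reshape(2, -1)
    return a @ a.conj().T

def projected_kernel(x1, x2, gamma=1.0):
    """k(x1, x2) = exp(-gamma * sum_k ||rho_k(x1) - rho_k(x2)||_F^2)."""
    n = len(x1)
    psi1, psi2 = encoded_state(x1), encoded_state(x2)
    d = sum(np.linalg.norm(single_qubit_rdm(psi1, k, n)
                           - single_qubit_rdm(psi2, k, n), "fro") ** 2
            for k in range(n))
    return np.exp(-gamma * d)

print(projected_kernel(np.array([0.1, 0.7, 1.3]), np.array([0.2, 0.5, 1.1])))
```

A kernel matrix of this kind is also the sort of object that enters the geometric-difference comparison between quantum and classical data kernels mentioned in the main abstract.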
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.