Concentration of Data Encoding in Parameterized Quantum Circuits
- URL: http://arxiv.org/abs/2206.08273v1
- Date: Thu, 16 Jun 2022 16:09:40 GMT
- Title: Concentration of Data Encoding in Parameterized Quantum Circuits
- Authors: Guangxi Li, Ruilin Ye, Xuanqiang Zhao, Xin Wang
- Abstract summary: Variational quantum algorithms have been acknowledged as a leading strategy to realize near-term quantum advantages in meaningful tasks.
In this paper, we make progress by considering the common data encoding strategies based on parameterized quantum circuits.
We prove that, under reasonable assumptions, the distance between the average encoded state and the maximally mixed state can be explicitly upper bounded.
- Score: 7.534037755267707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational quantum algorithms have been acknowledged as a leading strategy
to realize near-term quantum advantages in meaningful tasks, including machine
learning and combinatorial optimization. When applied to tasks involving
classical data, such algorithms generally begin with quantum circuits for data
encoding and then train quantum neural networks (QNNs) to minimize target
functions. Although QNNs have been widely studied to improve these algorithms'
performance on practical tasks, there is a gap in systematically understanding
the influence of data encoding on the eventual performance. In this paper, we
make progress in filling this gap by considering the common data encoding
strategies based on parameterized quantum circuits. We prove that, under
reasonable assumptions, the distance between the average encoded state and the
maximally mixed state can be explicitly upper bounded in terms of the width and
depth of the encoding circuit. In particular, this result implies that the
average encoded state concentrates on the maximally mixed state at a rate
exponential in the circuit depth. Such concentration seriously limits the
capabilities of quantum classifiers, and strictly restricts the
distinguishability of encoded states from a quantum information perspective. We
further support our findings by numerically verifying these results on both
synthetic and public data sets. Our results highlight the significance of
quantum data encoding in machine learning tasks and may shed light on future
encoding strategies.
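To make the concentration statement above concrete, the following is a minimal numerical sketch, not the paper's exact construction or assumptions: it simulates a simple angle-encoding circuit in which each layer applies data-dependent RY rotations followed by a CNOT chain, averages the encoded pure states over standard-normal features, and reports the trace distance between that average state and the maximally mixed state as the encoding depth grows. The ansatz, data distribution, and all function names here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed ansatz, not the paper's exact construction):
# estimate how far the average encoded state is from the maximally mixed
# state as the encoding depth grows. Features are i.i.d. standard normal;
# each layer applies data-dependent RY rotations, then a CNOT chain.

def ry(theta):
    """Single-qubit RY(theta) rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def cnot_chain(n):
    """Unitary for CNOTs on pairs (0,1), ..., (n-2, n-1); qubit 0 is the most significant bit."""
    dim = 2 ** n
    u = np.eye(dim, dtype=complex)
    for ctrl in range(n - 1):
        tgt = ctrl + 1
        layer = np.zeros((dim, dim), dtype=complex)
        for basis in range(dim):
            bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
            if bits[ctrl] == 1:
                bits[tgt] ^= 1
            out = sum(b << (n - 1 - q) for q, b in enumerate(bits))
            layer[out, basis] = 1.0
        u = layer @ u
    return u

def encoded_state(x, n, depth, entangler):
    """Encode x: start from |0...0>, then per layer apply RY(x[layer, q]) on each qubit and the CNOT chain."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for layer in range(depth):
        rot = np.array([[1.0]], dtype=complex)
        for q in range(n):
            rot = np.kron(rot, ry(x[layer, q]))
        psi = entangler @ (rot @ psi)
    return psi

def avg_distance_to_mixed(n, depth, samples=500, seed=0):
    """Trace distance between the sample-averaged encoded state and I / 2^n."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n
    entangler = cnot_chain(n)
    rho_avg = np.zeros((dim, dim), dtype=complex)
    for _ in range(samples):
        x = rng.normal(size=(depth, n))  # assumed data distribution
        psi = encoded_state(x, n, depth, entangler)
        rho_avg += np.outer(psi, psi.conj())
    rho_avg /= samples
    diff = rho_avg - np.eye(dim) / dim
    # Trace distance = half the sum of singular values of the (Hermitian) difference.
    return 0.5 * np.linalg.svd(diff, compute_uv=False).sum()

if __name__ == "__main__":
    for depth in (1, 2, 4, 8, 16):
        print(f"depth={depth}  distance={avg_distance_to_mixed(n=3, depth=depth):.4f}")
```

Under these assumptions, the printed distances shrink as the depth increases, mirroring the concentration trend the bound describes; the exact rate depends on the chosen ansatz and data distribution.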
Related papers
- Empirical Power of Quantum Encoding Methods for Binary Classification [0.2118773996967412]
This work focuses on encoding schemes and their effects on various machine learning metrics.
Specifically, it examines encoding of real-world data to demonstrate differences between quantum encoding strategies across several real-world datasets.
arXiv Detail & Related papers (2024-08-23T14:34:57Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning in the landscape of quantum data.
We find that training efficiency and generalization capabilities in quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks (a related kernel-concentration sketch follows this list).
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- Understanding the effects of data encoding on quantum-classical convolutional neural networks [0.0]
A key component of quantum-enhanced methods is the data encoding strategy used to embed the classical data into quantum states.
This work investigates how the data encoding impacts the performance of a quantum-classical convolutional neural network (QCCNN) on two medical imaging datasets.
arXiv Detail & Related papers (2024-05-05T18:44:08Z)
- Classification of the Fashion-MNIST Dataset on a Quantum Computer [0.0]
Conventional methods for encoding classical data into quantum computers are too costly and limit the scale of feasible experiments on current hardware.
We propose an improved variational algorithm that prepares the encoded data using circuits that fit the native gate set and topology of currently available quantum computers.
We deploy simple quantum variational classifiers trained on the encoded dataset on a current quantum computer ibmq-kolkata and achieve moderate accuracies.
arXiv Detail & Related papers (2024-03-04T19:01:14Z)
- A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z)
- Variational data encoding and correlations in quantum-enhanced machine learning [2.436161840735876]
We develop an effective encoding protocol for translating classical data into quantum states.
We also address the need to counteract the inevitable noise that can hinder quantum acceleration.
By adapting the learning concept from machine learning, we render data encoding a learnable process.
arXiv Detail & Related papers (2023-12-13T07:55:57Z)
- Drastic Circuit Depth Reductions with Preserved Adversarial Robustness by Approximate Encoding for Quantum Machine Learning [0.5181797490530444]
We implement methods for the efficient preparation of quantum states representing encoded image data using variational, genetic and matrix product state based algorithms.
Results show that these methods can approximately prepare states to a level suitable for QML using circuits two orders of magnitude shallower than a standard state preparation implementation.
arXiv Detail & Related papers (2023-09-18T01:49:36Z)
- Near-Term Distributed Quantum Computation using Mean-Field Corrections and Auxiliary Qubits [77.04894470683776]
We propose near-term distributed quantum computing that involves limited information transfer and conservative entanglement production.
We build upon these concepts to produce an approximate circuit-cutting technique for the fragmented pre-training of variational quantum algorithms.
arXiv Detail & Related papers (2023-09-11T18:00:00Z)
- Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) in the hope of utilizing quantum advantage to speed up imitation learning (IL).
We develop two QIL algorithms: quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z)
- Deep Quantum Error Correction [73.54643419792453]
Quantum error correction codes (QECC) are a key component for realizing the potential of quantum computing.
In this work, we efficiently train novel end-to-end deep quantum error decoders.
The proposed method demonstrates the power of neural decoders for QECC by achieving state-of-the-art accuracy.
arXiv Detail & Related papers (2023-01-27T08:16:26Z)
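As a companion illustration of the distinguishability restriction noted in the abstract above and the kernel-related suppression reported in "The curse of random quantum data", here is a hypothetical sketch not drawn from any of the listed papers: with a simple product-state RY angle encoding, the fidelity kernel between two encoded states factorizes qubit by qubit, so typical kernel values, and hence the distinguishability of encoded states, shrink exponentially with the number of qubits.

```python
import numpy as np

# Hypothetical illustration (not taken from any listed paper): for the product
# encoding |psi(x)> = RY(x_1)|0> (x) ... (x) RY(x_n)|0>, the fidelity kernel
# K(x, y) = |<psi(x)|psi(y)>|^2 equals prod_q cos^2((x_q - y_q) / 2).

def fidelity_kernel(x, y):
    """Exact fidelity kernel for the product RY angle encoding."""
    return float(np.prod(np.cos((x - y) / 2.0) ** 2))

def mean_offdiagonal_kernel(n_qubits, samples=200, seed=0):
    """Average kernel value over distinct pairs of random standard-normal feature vectors."""
    rng = np.random.default_rng(seed)
    data = rng.normal(size=(samples, n_qubits))  # assumed data distribution
    vals = [fidelity_kernel(data[i], data[j])
            for i in range(samples) for j in range(i + 1, samples)]
    return float(np.mean(vals))

if __name__ == "__main__":
    for n in (2, 4, 8, 16, 32):
        print(f"qubits={n}  mean kernel={mean_offdiagonal_kernel(n):.2e}")
```

Under the assumed standard-normal features, each single-qubit factor averages to (1 + e^{-1})/2 ≈ 0.68, so the mean off-diagonal kernel value decays as roughly 0.68^n with the number of qubits n.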