Training quantum machine learning models on cloud without uploading the data
- URL: http://arxiv.org/abs/2409.04602v2
- Date: Mon, 7 Oct 2024 20:19:38 GMT
- Title: Training quantum machine learning models on cloud without uploading the data
- Authors: Guang Ping He
- Abstract summary: We propose a method that runs the parameterized quantum circuits before encoding the input data.
This enables a dataset owner to train machine learning models on quantum cloud platforms.
It is also capable of encoding a vast amount of data effectively at a later time using classical computations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Based on the linearity of quantum unitary operations, we propose a method that runs the parameterized quantum circuits before encoding the input data. This enables a dataset owner to train machine learning models on quantum cloud computation platforms without the risk of leaking information about the data. It is also capable of encoding a vast amount of data efficiently at a later time using classical computations, thus saving runtime on quantum computation devices. The trained quantum machine learning models can be run completely on classical computers, meaning the dataset owner needs neither quantum hardware nor quantum simulators. Moreover, our method mitigates the encoding bottleneck by reducing the required circuit depth from $O(2^{n})$ to $O(n)$ and relaxes the tolerance on the precision of the quantum gates used for encoding. These results demonstrate yet another advantage of quantum and quantum-inspired machine learning models over existing classical neural networks, and broaden the approaches to data security.
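The abstract's central observation is the linearity of unitary operations: recording how a circuit acts on each basis state, before any data exists, lets the owner later apply it to encoded data classically. A minimal NumPy sketch of that observation, assuming amplitude encoding; the random unitary stands in for the trained parameterized circuit, and this is an illustration of linearity, not the paper's full protocol:

```python
import numpy as np

# Since U(a|x> + b|y>) = a U|x> + b U|y>, recording how a circuit U
# acts on each computational basis state lets the data owner later
# apply U to any encoded data vector purely classically, without
# ever uploading the data.

def random_unitary(dim, seed=0):
    # Random unitary via QR decomposition, standing in for a trained
    # parameterized quantum circuit.
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(a)
    d = np.diag(r)
    return q * (d / np.abs(d))

n = 3                                  # number of qubits
dim = 2 ** n
U = random_unitary(dim)

# Step 1 (on the quantum cloud, before seeing any data):
# record U's action on every basis state |i>, i.e. the columns of U.
columns = [U[:, i] for i in range(dim)]

# Step 2 (later, on the owner's classical computer):
# amplitude-encode a private data vector and combine the columns.
x = np.arange(1.0, dim + 1.0)
psi = x / np.linalg.norm(x)
out_classical = sum(psi[i] * columns[i] for i in range(dim))

# The classical combination equals running the circuit on |psi>.
assert np.allclose(out_classical, U @ psi)
```

Note that Step 2 uses only classical linear algebra, matching the abstract's claim that the owner needs no quantum hardware at inference time.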
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- The curse of random quantum data [62.24825255497622]
We quantify the performances of quantum machine learning in the landscape of quantum data.
We find that the training efficiency and generalization capabilities in quantum machine learning will be exponentially suppressed with the increase in qubits.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- Classification of the Fashion-MNIST Dataset on a Quantum Computer [0.0]
Conventional methods for encoding classical data into quantum computers are too costly and limit the scale of feasible experiments on current hardware.
We propose an improved variational algorithm that prepares the encoded data using circuits that fit the native gate set and topology of currently available quantum computers.
We deploy simple quantum variational classifiers, trained on the encoded dataset, on the ibmq-kolkata quantum computer and achieve moderate accuracies.
arXiv Detail & Related papers (2024-03-04T19:01:14Z)
- Quantum Machine Learning: from physics to software engineering [58.720142291102135]
We show how classical machine learning approaches can help improve the capabilities of quantum computers.
We discuss how quantum algorithms and quantum computers may be useful for solving classical machine learning tasks.
arXiv Detail & Related papers (2023-01-04T23:37:45Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
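The data re-uploading idea above can be sketched in a few lines. A toy single-qubit classifier in plain NumPy (the paper itself uses the qiskit SDK; the weights, layer count, and Ry-only ansatz here are illustrative assumptions, and Ry-only layers in fact compose into a single rotation, which real re-uploading models avoid by using general single-qubit rotations):

```python
import numpy as np

# Toy single-qubit data re-uploading classifier (illustrative only).

def ry(theta):
    # Single-qubit rotation about the Y axis.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def reupload_state(x, weights):
    # Each layer re-encodes the scalar input x with trainable (w, b):
    # state <- Ry(w*x + b) @ state.
    state = np.array([1.0, 0.0])       # start in |0>
    for w, b in weights:
        state = ry(w * x + b) @ state
    return state

def predict(x, weights):
    # Classify by the sign of the <Z> expectation value.
    state = reupload_state(x, weights)
    z_expect = abs(state[0]) ** 2 - abs(state[1]) ** 2
    return 1 if z_expect >= 0 else -1

weights = [(1.0, 0.3), (0.7, -0.2), (1.2, 0.1)]   # 3 layers, untrained
label = predict(0.5, weights)
```

In a real model the `(w, b)` pairs would be optimized against labeled data; here they are fixed placeholders.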
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Quantum neural networks [0.0]
This thesis combines two of the most exciting research areas of the last decades: quantum computing and machine learning.
We introduce dissipative quantum neural networks (DQNNs), which are capable of universal quantum computation and have low memory requirements while training.
arXiv Detail & Related papers (2022-05-17T07:47:00Z)
- Generative Quantum Machine Learning [0.0]
The aim of this thesis is to develop new generative quantum machine learning algorithms.
We introduce a quantum generative adversarial network and a quantum Boltzmann machine implementation, both of which can be realized with parameterized quantum circuits.
arXiv Detail & Related papers (2021-11-24T19:00:21Z)
- Comparing concepts of quantum and classical neural network models for image classification task [0.456877715768796]
This material includes the results of experiments on training and performance of a hybrid quantum-classical neural network.
Although its simulation is time-consuming, the quantum network outperforms the classical network.
arXiv Detail & Related papers (2021-08-19T18:49:30Z)
- Large-scale quantum machine learning [0.0]
We measure quantum kernels using randomized measurements to gain a quadratic speedup in time and quickly process large datasets.
We efficiently encode high-dimensional data into quantum computers with the number of features scaling linearly with the circuit depth.
Using currently available quantum computers, the MNIST database can be processed within 220 hours instead of 10 years.
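The quantum kernel mentioned in this entry is the fidelity kernel $K(x, y) = |\langle\phi(x)|\phi(y)\rangle|^{2}$; the paper's contribution is estimating it with randomized measurements. A minimal NumPy sketch that computes the same kernel exactly for a toy single-qubit angle-encoding feature map (the feature map is an illustrative assumption, not the paper's encoding):

```python
import numpy as np

# Exact fidelity kernel K(x, y) = |<phi(x)|phi(y)>|^2 for a toy
# angle-encoding feature map |phi(x)> = Ry(x)|0>; a randomized-
# measurement protocol would estimate these entries from samples.

def feature_state(x):
    # Angle encoding on one qubit: |phi(x)> = Ry(x)|0>.
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def fidelity_kernel(xs):
    # Gram matrix of squared overlaps between feature states.
    states = np.array([feature_state(x) for x in xs])
    overlaps = states @ states.T        # real amplitudes here
    return overlaps ** 2

xs = np.array([0.0, np.pi / 2, np.pi])
K = fidelity_kernel(xs)
# K is symmetric with ones on the diagonal.
```

The resulting `K` can be handed directly to a classical kernel method such as an SVM; the speedup claimed above comes from how the entries are measured, not from this classical post-processing.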
arXiv Detail & Related papers (2021-08-02T17:00:18Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Quantum Federated Learning with Quantum Data [87.49715898878858]
Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems.
This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner.
arXiv Detail & Related papers (2021-05-30T12:19:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.