QTRL: Toward Practical Quantum Reinforcement Learning via Quantum-Train
- URL: http://arxiv.org/abs/2407.06103v1
- Date: Mon, 8 Jul 2024 16:41:03 GMT
- Title: QTRL: Toward Practical Quantum Reinforcement Learning via Quantum-Train
- Authors: Chen-Yu Liu, Chu-Hsuan Abraham Lin, Chao-Han Huck Yang, Kuan-Cheng Chen, Min-Hsiu Hsieh
- Abstract summary: We apply the Quantum-Train method to reinforcement learning tasks, called QTRL, to train the classical policy network model.
The training result of QTRL is a classical model, meaning the inference stage requires only a classical computer.
- Score: 18.138290778243075
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Quantum reinforcement learning utilizes quantum layers to process information within a machine learning model. However, both pure and hybrid quantum reinforcement learning face challenges such as data encoding and the use of quantum computers during the inference stage. We apply the Quantum-Train method to reinforcement learning tasks, called QTRL, training the classical policy network model using a quantum machine learning model with polylogarithmic parameter reduction. This QTRL approach eliminates the data encoding issues of conventional quantum machine learning and reduces the training parameters of the corresponding classical policy network. Most importantly, the training result of QTRL is a classical model, meaning the inference stage requires only a classical computer. This is extremely practical and cost-efficient for reinforcement learning tasks, where low-latency feedback from the policy model is essential.
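A minimal numpy sketch of the weight-generation idea behind Quantum-Train/QTRL is shown below. It assumes a toy product-state RY ansatz and a tiny linear mapping model, both invented here for illustration; the paper's actual ansatz, mapping network, and training procedure differ. The point it illustrates is that the quantum and mapping parameters scale with the number of qubits n = O(log M) rather than with the M classical policy weights, and that inference uses only the generated classical weights.
```python
import numpy as np

def qnn_probs(thetas):
    """Measurement probabilities of a toy n-qubit ansatz (one RY per qubit).

    A product-state ansatz is used purely for illustration: the joint
    distribution over the 2**n basis states is the tensor product of the
    per-qubit distributions [cos^2(t/2), sin^2(t/2)].
    """
    per_qubit = np.stack([np.cos(thetas / 2) ** 2, np.sin(thetas / 2) ** 2], axis=1)
    probs = per_qubit[0]
    for q in range(1, len(thetas)):
        probs = np.kron(probs, per_qubit[q])
    return probs  # shape (2**n,), sums to 1

def generate_policy_weights(thetas, map_w, map_b, m_params):
    """Map (basis bitstring, probability) pairs to classical policy weights."""
    n_qubits = len(thetas)
    probs = qnn_probs(thetas)
    weights = []
    for i in range(m_params):
        bits = [(i >> k) & 1 for k in range(n_qubits)]
        feat = np.array(bits + [probs[i]], dtype=float)
        weights.append(feat @ map_w + map_b)   # toy linear mapping model
    return np.array(weights)

def policy(obs, flat_w):
    """Purely classical inference: a 4-input, 2-action softmax policy."""
    W, b = flat_w[:8].reshape(2, 4), flat_w[8:10]
    logits = W @ obs + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

# CartPole-sized toy policy: M = 4*2 + 2 = 10 weights, n = ceil(log2(10)) = 4 qubits.
rng = np.random.default_rng(0)
thetas = rng.uniform(0, np.pi, 4)          # "quantum" parameters (trained)
map_w, map_b = rng.normal(size=5), 0.0     # mapping-model parameters (trained)
flat_w = generate_policy_weights(thetas, map_w, map_b, m_params=10)
print(policy(np.array([0.0, 0.1, -0.05, 0.2]), flat_w))
```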
Related papers
- LatentQGAN: A Hybrid QGAN with Classical Convolutional Autoencoder [7.945302052915863]
A potential application of quantum machine learning is to harness the power of quantum computers for generating classical data.
We propose LatentQGAN, a novel quantum model that uses a hybrid quantum-classical GAN coupled with an autoencoder.
arXiv Detail & Related papers (2024-09-22T23:18:06Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Quantum-Train: Rethinking Hybrid Quantum-Classical Machine Learning in the Model Compression Perspective [7.7063925534143705]
We introduce the Quantum-Train(QT) framework, a novel approach that integrates quantum computing with machine learning algorithms.
QT achieves remarkable results by employing a quantum neural network alongside a classical mapping model.
arXiv Detail & Related papers (2024-05-18T14:35:57Z)
- Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation [0.0]
This paper introduces a new method to transfer knowledge from classical to quantum neural networks using knowledge distillation.
We adapt classical convolutional neural network (CNN) architectures like LeNet and AlexNet to serve as teacher networks.
Quantum models achieve an average accuracy improvement of 0.80% on the MNIST dataset and 5.40% on the more complex Fashion MNIST dataset; a generic distillation-loss sketch follows this entry.
arXiv Detail & Related papers (2023-11-23T05:06:43Z)
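The sketch below is a generic Hinton-style knowledge-distillation objective in numpy, given only to illustrate the transfer mechanism. In the paper's setting the teacher is a classical CNN (e.g. LeNet) and the student is a quantum model; the exact loss and temperatures used there may differ from this assumed form.
```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Alpha-weighted mix of soft-target and hard-label cross-entropy."""
    # Soft targets: teacher and student distributions softened by temperature T.
    p_t, p_s = softmax(teacher_logits, T), softmax(student_logits, T)
    soft = -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean() * T * T
    # Hard labels: ordinary cross-entropy of the student against ground truth.
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1.0 - alpha) * hard

# Toy batch: teacher logits from a classical CNN, student logits from a quantum model.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 10))
student = rng.normal(size=(4, 10))
labels = np.array([3, 1, 7, 0])
print(distillation_loss(student, teacher, labels))
```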
- Shadows of quantum machine learning [2.236957801565796]
We introduce a new class of quantum models where quantum resources are only required during training, while the deployment of the trained model is classical.
We prove that this class of models is universal for classically-deployed quantum machine learning.
arXiv Detail & Related papers (2023-05-31T18:00:02Z)
- Adapting Pre-trained Language Models for Quantum Natural Language Processing [33.86835690434712]
On quantum simulation experiments, we show that pre-trained representations can bring 50% to 60% increases to the capacity of end-to-end quantum models.
arXiv Detail & Related papers (2023-02-24T14:59:02Z)
- TeD-Q: a tensor network enhanced distributed hybrid quantum machine learning framework [59.07246314484875]
TeD-Q is an open-source software framework for quantum machine learning.
It seamlessly integrates classical machine learning libraries with quantum simulators.
It provides a graphical mode in which the quantum circuit and the training progress can be visualized in real-time.
arXiv Detail & Related papers (2023-01-13T09:35:05Z)
- The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for Deep Quantum Machine Learning [52.77024349608834]
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing.
A key issue is how to address the inherent non-linearity of classical deep learning.
We introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating those aspects of deep machine learning.
arXiv Detail & Related papers (2022-12-22T16:06:24Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK; a minimal single-qubit re-uploading sketch follows this entry.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
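Below is a minimal numpy sketch of the single-qubit data re-uploading idea: the input is re-encoded in every layer through a trainable offset and scale before an RY rotation. It is an illustration only, not the paper's Qiskit implementation; the layer structure and parameter values here are invented for the example.
```python
import numpy as np

def ry(angle):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def reupload_classifier(x, thetas, scales):
    """Data re-uploading on one qubit: each layer re-encodes the input x."""
    state = np.array([1.0, 0.0])                # start in |0>
    for theta_l, w_l in zip(thetas, scales):
        state = ry(theta_l + w_l * x) @ state   # trainable offset + data re-upload
    return abs(state[1]) ** 2                   # P(|1>) used as the class score

# Three re-uploading layers with randomly initialised trainable parameters.
rng = np.random.default_rng(1)
thetas, scales = rng.uniform(0, np.pi, 3), rng.normal(size=3)
print(reupload_classifier(0.5, thetas, scales))
```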
- Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the challenging credit assignment.
We show how a carefully implemented RL agent that uses a GNN as the basic policy construct can address these challenges; a toy sketch of the contraction-cost model appears after this entry.
arXiv Detail & Related papers (2022-04-18T21:45:13Z)
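As an illustration of the environment such an agent acts in, the toy sketch below uses a standard pairwise contraction-cost model (product of the dimensions of all indices involved, with shared indices summed out). The tensors, index labels, and dimensions are made up for the example and are not taken from the paper.
```python
import numpy as np

def contract_pair(t1, t2, dims):
    """Contract two tensors given as sets of index labels.

    Returns the resulting tensor's index set and the cost of the step,
    using the usual cost model: product of the dimensions of all indices
    involved; indices shared by both tensors are summed out.
    """
    shared = t1 & t2
    cost = int(np.prod([dims[i] for i in (t1 | t2)]))
    return (t1 | t2) - shared, cost

# Toy network: the RL agent's action is the choice of which pair to contract
# next; the (negative) accumulated cost plays the role of the reward.
dims = {"a": 8, "b": 4, "c": 16, "d": 2}
tensors = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
result, cost = contract_pair(tensors[0], tensors[1], dims)
print(result, cost)   # {'a', 'c'} 512
```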
- Quantum Federated Learning with Quantum Data [87.49715898878858]
Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems.
This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner; a FedAvg-style aggregation sketch follows this entry.
arXiv Detail & Related papers (2021-05-30T12:19:27Z)
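The sketch below shows a generic FedAvg-style aggregation of quantum circuit parameters, added only to illustrate decentralized parameter sharing; the paper's fully quantum protocol over quantum data differs, and the client count and parameter shapes here are invented for the example.
```python
import numpy as np

def federated_average(client_params):
    """Average the parameter vectors shared by all clients."""
    return np.mean(np.stack(client_params, axis=0), axis=0)

# One hypothetical round: each client trains the same parameterized circuit
# on its local (quantum) data and shares only its updated parameter vector.
rng = np.random.default_rng(2)
global_params = rng.uniform(0, np.pi, 8)
client_updates = [global_params + 0.1 * rng.normal(size=8) for _ in range(3)]
global_params = federated_average(client_updates)
print(global_params)
```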