Quantum deep Q learning with distributed prioritized experience replay
- URL: http://arxiv.org/abs/2304.09648v1
- Date: Wed, 19 Apr 2023 13:40:44 GMT
- Title: Quantum deep Q learning with distributed prioritized experience replay
- Authors: Samuel Yen-Chi Chen
- Abstract summary: The framework incorporates prioritized experience replay and asynchronous training into the training algorithm to reduce the high sampling complexities.
Numerical simulations demonstrate that QDQN-DPER outperforms the baseline distributed quantum Q learning with the same model architecture.
- Score: 0.8702432681310399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the QDQN-DPER framework to enhance the efficiency of
quantum reinforcement learning (QRL) in solving sequential decision tasks. The
framework incorporates prioritized experience replay and asynchronous training
into the training algorithm to reduce the high sampling complexities. Numerical
simulations demonstrate that QDQN-DPER outperforms the baseline distributed
quantum Q learning with the same model architecture. The proposed framework
holds potential for more complex tasks while maintaining training efficiency.
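For readers who want a concrete picture of the classical machinery involved, the snippet below is a minimal Python sketch of proportional prioritized experience replay, the sampling scheme the framework builds on. It is not the paper's implementation: the paper's Q-function is a variational quantum circuit and its asynchronous training loop is not reproduced here, and all class names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (after Schaul et al., 2015).

    Transitions are sampled with probability proportional to |TD error|**alpha,
    and importance-sampling weights correct the bias this introduces.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities shape sampling
        self.eps = eps              # keeps every priority strictly positive
        self.data, self.priorities = [], []
        self.pos = 0                # next slot to overwrite once full

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = max(self.priorities, default=1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(max_prio)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = np.asarray(self.priorities) ** self.alpha
        probs = prios / prios.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights, normalised by their maximum.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        # Called by the learner after computing fresh TD errors for a batch.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In a distributed setting like the one described in the abstract, one would expect several actor processes to push transitions into a shared buffer of this kind while a central learner samples prioritized batches and refreshes the priorities with the latest TD errors; the exact coordination scheme used by QDQN-DPER is not reproduced here.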
Related papers
- Quantum-Train-Based Distributed Multi-Agent Reinforcement Learning [5.673361333697935]
We introduce Quantum-Train-Based Distributed Multi-Agent Reinforcement Learning (Dist-QTRL)
arXiv Detail & Related papers (2024-12-12T00:51:41Z) - Differentiable Quantum Architecture Search in Asynchronous Quantum Reinforcement Learning [3.6881738506505988]
We propose differentiable quantum architecture search (DiffQAS) to enable trainable circuit parameters and structure weights.
We show that our proposed DiffQAS-QRL approach achieves performance comparable to manually-crafted circuit architectures.
arXiv Detail & Related papers (2024-07-25T17:11:00Z) - SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning [89.04776523010409]
This paper studies the transfer reinforcement learning (RL) problem where multiple RL problems have different reward functions but share the same underlying transition dynamics.
In this setting, the Q-function of each RL problem (task) can be decomposed into a successor feature (SF) and a reward mapping.
We establish the first convergence analysis with provable generalization guarantees for SF-DQN with GPI.
arXiv Detail & Related papers (2024-05-24T20:30:14Z) - Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets)
Our empirical results demonstrate the efficacy of this approach, also testing the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z) - Efficient quantum recurrent reinforcement learning via quantum reservoir computing [3.6881738506505988]
Quantum reinforcement learning (QRL) has emerged as a framework to solve sequential decision-making tasks.
This work presents a novel approach to address this challenge by constructing QRL agents utilizing QRNN-based quantum long short-term memory (QLSTM)
arXiv Detail & Related papers (2023-09-13T22:18:38Z) - Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) with the hope of utilizing quantum advantage to speed up IL.
We develop two QIL algorithms, quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL)
Experiment results demonstrate that both Q-BC and Q-GAIL can achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z) - Asynchronous training of quantum reinforcement learning [0.8702432681310399]
A leading method of building quantum RL agents relies on variational quantum circuits (VQCs).
In this paper, we approach this challenge through asynchronous training of QRL agents.
We demonstrate via numerical simulations that, within the tasks considered, asynchronous training of QRL agents can reach comparable or superior performance.
arXiv Detail & Related papers (2023-01-12T15:54:44Z) - Evolutionary Quantum Architecture Search for Parametrized Quantum Circuits [7.298440208725654]
We introduce EQAS-PQC, an evolutionary quantum architecture search framework for PQC-based models.
We show that our method can significantly improve the performance of hybrid quantum-classical models.
arXiv Detail & Related papers (2022-08-23T19:47:37Z) - Quantum circuit architecture search on a superconducting processor [56.04169357427682]
Variational quantum algorithms (VQAs) have shown strong evidence of providing provable computational advantages in diverse fields such as finance, machine learning, and chemistry.
However, the ansatz exploited in modern VQAs is incapable of balancing the tradeoff between expressivity and trainability.
We demonstrate the first proof-of-principle experiment of applying an efficient automatic ansatz design technique to enhance VQAs on an 8-qubit superconducting quantum processor.
arXiv Detail & Related papers (2022-01-04T01:53:42Z) - Quantum circuit architecture search for variational quantum algorithms [88.71725630554758]
We propose a resource and runtime efficient scheme termed quantum architecture search (QAS)
QAS automatically seeks a near-optimal ansatz to balance benefits and side-effects brought by adding more noisy quantum gates.
We implement QAS on both the numerical simulator and real quantum hardware, via the IBM cloud, to accomplish data classification and quantum chemistry tasks.
arXiv Detail & Related papers (2020-10-20T12:06:27Z) - Cross Learning in Deep Q-Networks [82.20059754270302]
We propose a novel cross Q-learning algorithm, aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods.
Our algorithm builds on double Q-learning by maintaining a set of parallel models and estimating the Q-value based on a randomly selected network (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2020-09-29T04:58:17Z)
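To make the mechanism described in the last entry concrete, here is a minimal tabular sketch of the stated idea: an ensemble of parallel Q-estimates in which the bootstrap target is evaluated by a randomly selected member, in the spirit of double Q-learning. This is an illustration only, not the authors' exact update rule, and the function and variable names are assumptions.

```python
import random
from collections import defaultdict

def cross_q_update(q_tables, state, action, reward, next_state, actions,
                   alpha=0.1, gamma=0.99):
    """One tabular update using an ensemble of Q-tables (assumes >= 2 tables)."""
    # Pick one table to update and a different one to evaluate the target,
    # which damps the overestimation bias of a single max operator.
    i = random.randrange(len(q_tables))
    j = random.choice([k for k in range(len(q_tables)) if k != i])
    q_i, q_j = q_tables[i], q_tables[j]

    # Greedy next action according to the table being updated ...
    best_a = max(actions, key=lambda a: q_i[(next_state, a)])
    # ... but its value comes from the randomly selected partner table.
    target = reward + gamma * q_j[(next_state, best_a)]
    q_i[(state, action)] += alpha * (target - q_i[(state, action)])

# Example: an ensemble of four tabular Q-estimates on a toy transition.
q_tables = [defaultdict(float) for _ in range(4)]
cross_q_update(q_tables, state=0, action=1, reward=1.0, next_state=2,
               actions=[0, 1])
```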
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.