BenchRL-QAS: Benchmarking reinforcement learning algorithms for quantum architecture search
- URL: http://arxiv.org/abs/2507.12189v2
- Date: Sat, 27 Sep 2025 08:44:30 GMT
- Title: BenchRL-QAS: Benchmarking reinforcement learning algorithms for quantum architecture search
- Authors: Azhar Ikhtiarudin, Aditi Das, Param Thakkar, Akash Kundu,
- Abstract summary: We present BenchRL-QAS, a unified benchmarking framework for reinforcement learning in quantum architecture search (QAS). Our study systematically evaluates 9 different RL agents, including both value-based and policy-gradient methods, on quantum problems. Results demonstrate that no single RL method dominates; performance depends on task type, qubit count, and noise conditions. As a byproduct, we observe that a carefully chosen RL algorithm in RL-based VQC outperforms baseline VQCs.
- Score: 1.5743861420663843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present BenchRL-QAS, a unified benchmarking framework for reinforcement learning (RL) in quantum architecture search (QAS) across a spectrum of variational quantum algorithm tasks on 2- to 8-qubit systems. Our study systematically evaluates 9 different RL agents, including both value-based and policy-gradient methods, on quantum problems such as the variational eigensolver, quantum state diagonalization, variational quantum classification (VQC), and state preparation, under both noiseless and noisy execution settings. To ensure fair comparison, we propose a weighted ranking metric that integrates accuracy, circuit depth, gate count, and training time. Results demonstrate that no single RL method dominates universally; performance depends on task type, qubit count, and noise conditions, providing strong evidence of a no-free-lunch principle in RL-QAS. As a byproduct, we observe that a carefully chosen RL algorithm in RL-based VQC outperforms baseline VQCs. BenchRL-QAS establishes the most extensive benchmark for RL-based QAS to date, with code and experimental data made publicly available for reproducibility and future advances.
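The abstract describes a weighted ranking metric that integrates accuracy, circuit depth, gate count, and training time, but the page does not give the exact scheme. A minimal illustrative sketch of one such metric follows; the weights, metric names, and min-max normalization are assumptions for illustration, not the formula used in BenchRL-QAS.

```python
def weighted_rank(agents, weights=None):
    """Rank agents by a weighted sum of normalized metrics (lower = better).

    `agents` maps an agent name to a dict with keys
    'error', 'depth', 'gates', 'time' (all lower-is-better).
    The default weights are illustrative assumptions.
    """
    if weights is None:
        weights = {"error": 0.4, "depth": 0.2, "gates": 0.2, "time": 0.2}
    keys = list(weights)
    # Min-max normalize each metric across agents so units don't dominate.
    lo = {k: min(a[k] for a in agents.values()) for k in keys}
    hi = {k: max(a[k] for a in agents.values()) for k in keys}

    def norm(k, v):
        return 0.0 if hi[k] == lo[k] else (v - lo[k]) / (hi[k] - lo[k])

    scores = {
        name: sum(weights[k] * norm(k, m[k]) for k in keys)
        for name, m in agents.items()
    }
    return sorted(scores, key=scores.get)

# Toy example with made-up numbers (not results from the paper).
agents = {
    "DQN": {"error": 0.02, "depth": 12, "gates": 40, "time": 300},
    "PPO": {"error": 0.01, "depth": 20, "gates": 70, "time": 900},
    "A2C": {"error": 0.05, "depth": 10, "gates": 35, "time": 200},
}
print(weighted_rank(agents))  # → ['DQN', 'A2C', 'PPO']
```

Normalizing each metric before weighting keeps incommensurate units (gate counts vs. seconds) from dominating the composite score, which is the usual motivation for this kind of aggregate ranking.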
Related papers
- Variational Quantum Circuit-Based Reinforcement Learning for Dynamic Portfolio Optimization [7.349651640835185]
This paper presents a Quantum Reinforcement Learning solution to the dynamic portfolio optimization problem based on Variational Quantum Circuits. We show that our quantum agents achieve risk-adjusted performance comparable to, and in some cases exceeding, that of classical Deep RL models.
arXiv Detail & Related papers (2026-01-20T15:17:24Z) - QeRL: Beyond Efficiency -- Quantization-enhanced Reinforcement Learning for LLMs [80.76334908639745]
We propose QeRL, a Quantization-enhanced Reinforcement Learning framework for large language models (LLMs). QeRL addresses these issues by combining NVFP4 quantization with Low-Rank Adaptation (LoRA). Experiments demonstrate that QeRL delivers over a 1.5x speedup in the rollout phase.
arXiv Detail & Related papers (2025-10-13T17:55:09Z) - CleanQRL: Lightweight Single-file Implementations of Quantum Reinforcement Learning Algorithms [2.536162003546062]
CleanQRL is a library that offers single-script implementations of many Quantum Reinforcement Learning algorithms. Our library provides clear, easy-to-understand scripts that researchers can quickly adapt to their own needs.
arXiv Detail & Related papers (2025-07-10T09:53:39Z) - RhoDARTS: Differentiable Quantum Architecture Search with Density Matrix Simulations [48.670876200492415]
Variational Quantum Algorithms (VQAs) are a promising approach for leveraging powerful Noisy Intermediate-Scale Quantum (NISQ) computers. We propose $\rho$DARTS, a differentiable Quantum Architecture Search (QAS) algorithm that models the search process as the evolution of a quantum mixed state.
arXiv Detail & Related papers (2025-06-04T08:30:35Z) - Provably Robust Training of Quantum Circuit Classifiers Against Parameter Noise [49.97673761305336]
Noise remains a major obstacle to achieving reliable quantum algorithms. We present a provably noise-resilient training theory and algorithm to enhance the robustness of parameterized quantum circuit classifiers.
arXiv Detail & Related papers (2025-05-24T02:51:34Z) - Benchmarking Quantum Reinforcement Learning [2.536162003546062]
Quantum Reinforcement Learning (QRL) has emerged as a promising research field, leveraging the principles of quantum mechanics to enhance the performance of reinforcement learning (RL) algorithms. It is still uncertain if QRL can show any advantage over classical RL beyond artificial problem formulations. It is not yet clear which streams of QRL research show the greatest potential.
arXiv Detail & Related papers (2025-02-07T13:28:20Z) - Performance Benchmarking of Quantum Algorithms for Hard Combinatorial Optimization Problems: A Comparative Study of non-FTQC Approaches [0.0]
This study systematically benchmarks several non-fault-tolerant quantum computing algorithms across four distinct optimization problems.
Our benchmark includes noisy intermediate-scale quantum (NISQ) algorithms, such as the variational quantum eigensolver.
Our findings reveal that no single non-FTQC algorithm performs optimally across all problem types, underscoring the need for tailored algorithmic strategies.
arXiv Detail & Related papers (2024-10-30T08:41:29Z) - OGBench: Benchmarking Offline Goal-Conditioned RL [72.00291801676684]
Offline goal-conditioned reinforcement learning (GCRL) is a major problem in reinforcement learning. We propose OGBench, a new, high-quality benchmark for algorithms research in offline goal-conditioned RL.
arXiv Detail & Related papers (2024-10-26T06:06:08Z) - SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning [89.04776523010409]
This paper studies the transfer reinforcement learning (RL) problem where multiple RL problems have different reward functions but share the same underlying transition dynamics.
In this setting, the Q-function of each RL problem (task) can be decomposed into a successor feature (SF) and a reward mapping.
We establish the first convergence analysis with provable generalization guarantees for SF-DQN with GPI.
arXiv Detail & Related papers (2024-05-24T20:30:14Z) - A quantum information theoretic analysis of reinforcement learning-assisted quantum architecture search [0.0]
This study investigates RL-QAS for crafting ansatz tailored to variational quantum state diagonalisation problem.
We leverage these insights to devise an entanglement-guided admissible ansatz in QAS to diagonalise random quantum states using optimal resources.
arXiv Detail & Related papers (2024-04-09T09:54:59Z) - Quantum Subroutine for Variance Estimation: Algorithmic Design and Applications [80.04533958880862]
Quantum computing sets the foundation for new ways of designing algorithms.
New challenges arise concerning which field quantum speedup can be achieved.
Looking for the design of quantum subroutines that are more efficient than their classical counterpart poses solid pillars to new powerful quantum algorithms.
arXiv Detail & Related papers (2024-02-26T09:32:07Z) - Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms [65.268245109828]
We take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle.
We introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries.
arXiv Detail & Related papers (2023-10-26T18:23:21Z) - Quantum Annealing for Single Image Super-Resolution [86.69338893753886]
We propose a quantum computing-based algorithm to solve the single image super-resolution (SISR) problem.
The proposed AQC-based algorithm is demonstrated to achieve improved speed-up over a classical analog while maintaining comparable SISR accuracy.
arXiv Detail & Related papers (2023-04-18T11:57:15Z) - Reinforcement Learning Quantum Local Search [0.0]
We propose a reinforcement learning (RL) based approach to train an agent for improved subproblem selection in Quantum Local Search (QLS).
Our results demonstrate that the RL agent effectively enhances the average approximation ratio of QLS on fully-connected random Ising problems.
arXiv Detail & Related papers (2023-04-13T13:07:19Z) - Asynchronous training of quantum reinforcement learning [0.8702432681310399]
A leading method of building quantum RL agents relies on variational quantum circuits (VQCs).
In this paper, we approach this challenge through asynchronous training of QRL agents.
We demonstrate via numerical simulations that, within the tasks considered, asynchronous training of QRL agents can reach comparable or superior performance.
arXiv Detail & Related papers (2023-01-12T15:54:44Z) - LCRL: Certified Policy Synthesis via Logically-Constrained Reinforcement Learning [78.2286146954051]
LCRL implements model-free Reinforcement Learning (RL) algorithms over unknown Markov Decision Processes (MDPs).
We present case studies to demonstrate the applicability, ease of use, scalability, and performance of LCRL.
arXiv Detail & Related papers (2022-09-21T13:21:00Z) - Quantum agents in the Gym: a variational quantum algorithm for deep Q-learning [0.0]
We introduce a training method for parametrized quantum circuits (PQCs) that can be used to solve RL tasks for discrete and continuous state spaces.
We investigate which architectural choices for quantum Q-learning agents are most important for successfully solving certain types of environments.
arXiv Detail & Related papers (2021-03-28T08:57:22Z) - Quantum circuit architecture search for variational quantum algorithms [88.71725630554758]
We propose a resource- and runtime-efficient scheme termed quantum architecture search (QAS).
QAS automatically seeks a near-optimal ansatz to balance benefits and side-effects brought by adding more noisy quantum gates.
We implement QAS on both the numerical simulator and real quantum hardware, via the IBM cloud, to accomplish data classification and quantum chemistry tasks.
arXiv Detail & Related papers (2020-10-20T12:06:27Z) - SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
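The SUNRISE entry above describes selecting actions by the highest upper-confidence bound over a Q-ensemble. The sketch below is an illustrative reimplementation of that idea, not the authors' code; the ensemble size, the linear toy Q-functions, and the exploration coefficient `lam` are all assumed values.

```python
import numpy as np

def ucb_action(q_ensemble, state, lam=1.0):
    """Pick the action maximizing mean + lam * std over a Q-ensemble.

    `q_ensemble` is a list of callables mapping a state to a vector of
    Q-values (one per action); `lam` trades exploitation for exploration.
    """
    qs = np.stack([q(state) for q in q_ensemble])  # (n_models, n_actions)
    score = qs.mean(axis=0) + lam * qs.std(axis=0)
    return int(score.argmax())

# Toy ensemble: three linear "Q-networks" with random weights
# (purely illustrative; SUNRISE uses trained deep Q-ensembles).
rng = np.random.default_rng(0)
n_actions, dim = 4, 8
weight_mats = [rng.normal(size=(dim, n_actions)) for _ in range(3)]
ensemble = [lambda s, W=W: s @ W for W in weight_mats]

state = rng.normal(size=dim)
action = ucb_action(ensemble, state, lam=1.0)
assert 0 <= action < n_actions
```

Adding the ensemble standard deviation to the mean Q-value biases the agent toward actions the ensemble disagrees about, which is the exploration mechanism the abstract refers to.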
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.