QuanBench: Benchmarking Quantum Code Generation with Large Language Models
- URL: http://arxiv.org/abs/2510.16779v1
- Date: Sun, 19 Oct 2025 10:08:36 GMT
- Title: QuanBench: Benchmarking Quantum Code Generation with Large Language Models
- Authors: Xiaoyu Guo, Minggu Wang, Jianjun Zhao
- Abstract summary: Large language models (LLMs) have demonstrated good performance in general code generation. This paper presents QuanBench, a benchmark for evaluating LLMs on quantum code generation.
- Score: 7.807551490308163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have demonstrated good performance in general code generation; however, their capabilities in quantum code generation remain insufficiently studied. This paper presents QuanBench, a benchmark for evaluating LLMs on quantum code generation. QuanBench includes 44 programming tasks that cover quantum algorithms, state preparation, gate decomposition, and quantum machine learning. Each task has an executable canonical solution and is evaluated by functional correctness (Pass@K) and quantum semantic equivalence (Process Fidelity). We evaluate several recent LLMs, including general-purpose and code-specialized models. The results show that current LLMs have limited capability in generating the correct quantum code, with overall accuracy below 40% and frequent semantic errors. We also analyze common failure cases, such as outdated API usage, circuit construction errors, and incorrect algorithm logic. QuanBench provides a basis for future work on improving quantum code generation with LLMs.
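The two evaluation metrics named in the abstract are standard: Pass@K is usually computed with the unbiased estimator from the HumanEval work, and process fidelity compares the generated circuit's channel against the canonical solution's. A minimal sketch of the Pass@K estimator, assuming QuanBench follows the standard formulation (the benchmark's exact implementation is not shown here):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@K estimator: the probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes. Formula: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples, so a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 generations of which 4 are correct, Pass@1 = 1 - C(6,1)/C(10,1) = 0.4
score = pass_at_k(10, 4, 1)
```

For the semantic-equivalence side, Qiskit exposes `qiskit.quantum_info.process_fidelity` for comparing two channels; whether QuanBench uses that exact routine is an assumption.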
Related papers
- QUASAR: Quantum Assembly Code Generation Using Tool-Augmented LLMs via Agentic RL [8.823588193058727]
Large language model (LLM)-based quantum circuit generation has emerged as a promising automatic solution. We propose QUASAR, an agentic reinforcement learning framework for quantum circuit generation and optimization.
arXiv Detail & Related papers (2025-10-01T14:40:04Z) - LLM-Guided Ansätze Design for Quantum Circuit Born Machines in Financial Generative Modeling [0.9176056742068813]
Quantum generative modeling using quantum circuit Born machines (QCBMs) shows promising potential for practical quantum advantage. We introduce a prompt-based framework that leverages large language models (LLMs) to generate hardware-aware QCBM architectures. We show that the LLM-generated ansätze are significantly shallower and achieve superior generative performance compared to the standard baseline when executed on real IBM quantum hardware using 12 qubits.
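A quantum circuit Born machine treats the measurement distribution |⟨x|ψ(θ)⟩|² of a parameterized circuit as a generative model. A minimal two-qubit statevector sketch of that idea (a hypothetical hand-picked ansatz, not the paper's LLM-generated architectures):

```python
import numpy as np

def ry(theta: float) -> np.ndarray:
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target (basis order 00, 01, 10, 11)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def qcbm_distribution(thetas) -> np.ndarray:
    """Born distribution |<x|psi(theta)>|^2 of a tiny ansatz:
    RY(t0) on qubit 0, RY(t1) on qubit 1, then CNOT(0 -> 1)."""
    state = np.kron(ry(thetas[0]), ry(thetas[1])) @ np.array([1.0, 0.0, 0.0, 0.0])
    state = CNOT @ state
    return np.abs(state) ** 2  # probabilities over bitstrings 00, 01, 10, 11

# RY(pi/2) on qubit 0 plus the CNOT prepares a Bell state,
# so the model assigns probability 0.5 each to 00 and 11.
probs = qcbm_distribution([np.pi / 2, 0.0])
```

Training a QCBM then means adjusting `thetas` so this distribution matches a target distribution, e.g. by minimizing an MMD loss over samples.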
arXiv Detail & Related papers (2025-09-10T08:23:58Z) - Quantum Verifiable Rewards for Post-Training Qiskit Code Assistant [7.459767023316693]
We introduce quantum verification as an effective method for ensuring code quality and executability on quantum hardware. We trained models using GRPO, leveraging quantum-verifiable rewards provided by the quantum hardware.
arXiv Detail & Related papers (2025-08-28T15:37:40Z) - Quantum Knowledge Distillation for Large Language Models [10.023534560183919]
We propose a Quantum knowledge Distillation model for Large Language Models (QD-LLM). In classical simulation, QD-LLM outperforms several mainstream distillation methods on multiple text classification tasks. We deploy the obtained circuits on the Baihua superconducting quantum processor via the Quafu platform to assess practical feasibility.
arXiv Detail & Related papers (2025-05-19T14:56:24Z) - An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks. We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings. We evaluate our approach on text and image classification tasks against well-established classical and quantum models.
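Representing a model through Pauli strings means the basic primitive is the expectation ⟨ψ|P|ψ⟩ of a tensor product of Pauli matrices. A minimal numpy sketch of that primitive (illustrative only; it is not the paper's input-to-Pauli-string mapping):

```python
from functools import reduce
import numpy as np

# The four single-qubit Pauli matrices
PAULI = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_expectation(state: np.ndarray, string: str) -> float:
    """Expectation <psi|P|psi> of a Pauli string such as 'ZX' on |psi>."""
    op = reduce(np.kron, (PAULI[p] for p in string))
    return float(np.real(np.vdot(state, op @ state)))

# |+> on qubit 0, |0> on qubit 1: X on qubit 0 and Z on qubit 1
# both have expectation +1, while Z on qubit 0 averages to 0.
plus_zero = np.kron([1.0, 1.0], [1.0, 0.0]) / np.sqrt(2)
```

A Hamiltonian-representation classifier would then combine many such expectations (one per Pauli string) into a prediction.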
arXiv Detail & Related papers (2025-04-13T11:49:53Z) - QCircuitBench: A Large-Scale Dataset for Benchmarking Quantum Algorithm Design [63.02824918725805]
Quantum computing is recognized for the significant speedup it offers over classical computing through quantum algorithms. QCircuitBench is the first benchmark dataset designed to evaluate AI's capability in designing and implementing quantum algorithms.
arXiv Detail & Related papers (2024-10-10T14:24:30Z) - Qiskit Code Assistant: Training LLMs for generating Quantum Computing Code [2.0108122340549985]
This paper focuses on training Code LLMs to specialize in the field of quantum computing.
A Code LLM specializing in quantum computing requires a foundational understanding of quantum computing and quantum information theory.
We discuss our work on training Code LLMs to produce high-quality quantum code using the Qiskit library.
arXiv Detail & Related papers (2024-05-29T20:21:00Z) - LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating large language models.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
arXiv Detail & Related papers (2024-05-09T11:49:05Z) - InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models [56.723509505549536]
InfiBench is the first large-scale freeform question-answering (QA) benchmark for code to our knowledge.
It comprises 234 carefully selected high-quality Stack Overflow questions that span across 15 programming languages.
We conduct a systematic evaluation for over 100 latest code LLMs on InfiBench, leading to a series of novel and insightful findings.
arXiv Detail & Related papers (2024-03-11T02:06:30Z) - QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits [82.50620782471485]
QuantumSEA is an in-time sparse exploration for noise-adaptive quantum circuits.
It aims to achieve two key objectives: (1) implicit circuits capacity during training and (2) noise robustness.
Our method establishes state-of-the-art results with only half the number of quantum gates and a 2x saving in circuit execution time.
arXiv Detail & Related papers (2024-01-10T22:33:00Z) - Recent Advances for Quantum Neural Networks in Generative Learning [98.88205308106778]
Quantum generative learning models (QGLMs) may surpass their classical counterparts.
We review the current progress of QGLMs from the perspective of machine learning.
We discuss the potential applications of QGLMs in both conventional machine learning tasks and quantum physics.
arXiv Detail & Related papers (2022-06-07T07:32:57Z) - A MLIR Dialect for Quantum Assembly Languages [78.8942067357231]
We demonstrate the utility of the Multi-Level Intermediate Representation (MLIR) for quantum computing.
We extend MLIR with a new quantum dialect that enables the expression and compilation of common quantum assembly languages.
We leverage a qcor-enabled implementation of the QIR quantum runtime API to enable a retargetable (quantum hardware agnostic) compiler workflow.
arXiv Detail & Related papers (2021-01-27T13:00:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.