PennyCoder: Efficient Domain-Specific LLMs for PennyLane-Based Quantum Code Generation
- URL: http://arxiv.org/abs/2507.19562v1
- Date: Fri, 25 Jul 2025 12:02:49 GMT
- Title: PennyCoder: Efficient Domain-Specific LLMs for PennyLane-Based Quantum Code Generation
- Authors: Abdul Basit, Minghao Shao, Muhammad Haider Asif, Nouhaila Innan, Muhammad Kashif, Alberto Marchisio, Muhammad Shafique
- Abstract summary: PennyCoder is a novel framework for quantum code generation designed for local and embedded deployment. Our approach emphasizes device-native operability while maintaining high model efficacy. We rigorously evaluated PennyCoder over a comprehensive quantum programming dataset, achieving 44.3% accuracy.
- Score: 4.826802034066811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing demand for robust quantum programming frameworks has unveiled a critical limitation: current large language model (LLM) based quantum code assistants heavily rely on remote APIs, introducing challenges related to privacy, latency, and excessive usage costs. Addressing this gap, we propose PennyCoder, a novel lightweight framework for quantum code generation, explicitly designed for local and embedded deployment to enable on-device quantum programming assistance without external API dependence. PennyCoder leverages a fine-tuned version of the LLaMA 3.1-8B model, adapted through parameter-efficient Low-Rank Adaptation (LoRA) techniques combined with domain-specific instruction tuning optimized for the specialized syntax and computational logic of quantum programming in PennyLane, including tasks in quantum machine learning and quantum reinforcement learning. Unlike prior work focused on cloud-based quantum code generation, our approach emphasizes device-native operability while maintaining high model efficacy. We rigorously evaluated PennyCoder over a comprehensive quantum programming dataset, achieving 44.3% accuracy with our fine-tuned model (compared to 33.7% for the base LLaMA 3.1-8B and 40.1% for the RAG-augmented baseline), demonstrating a significant improvement in functional correctness.
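The abstract attributes PennyCoder's efficiency to parameter-efficient Low-Rank Adaptation (LoRA). The paper excerpt does not state the exact LoRA configuration used, so the following is only an illustrative sketch of why LoRA is parameter-efficient: the matrix size (4096x4096, typical of LLaMA-class 8B attention projections) and the rank of 16 are assumptions, not figures from the paper.

```python
# Illustrative sketch (assumed dimensions, not from the paper): LoRA replaces
# a full weight update dW of shape (d_out, d_in) with a low-rank product B @ A,
# where B is (d_out, r) and A is (r, d_in), so only r*(d_in + d_out)
# parameters are trained instead of d_in * d_out.

def full_update_params(d_in: int, d_out: int) -> int:
    """Trainable parameters if the entire weight matrix were updated."""
    return d_in * d_out

def lora_update_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on the same matrix."""
    return r * (d_in + d_out)

# Hypothetical numbers: one 4096x4096 projection with an assumed rank of 16.
d, r = 4096, 16
full = full_update_params(d, d)      # 16,777,216 parameters
lora = lora_update_params(d, d, r)   # 131,072 parameters
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.4%}")
```

At these assumed dimensions the adapter trains under 1% of the layer's parameters, which is what makes fine-tuning an 8B model feasible for the local and embedded deployment the paper targets.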
Related papers
- QUASAR: Quantum Assembly Code Generation Using Tool-Augmented LLMs via Agentic RL [8.823588193058727]
Large language model (LLM)-based quantum circuit generation has emerged as a promising automatic solution. We propose QUASAR, an agentic reinforcement learning framework for quantum circuit generation and optimization.
arXiv Detail & Related papers (2025-10-01T14:40:04Z)
- Reinforcement Learning for Quantum Network Control with Application-Driven Objectives [53.03367590211247]
Dynamic programming and reinforcement learning offer promising tools for optimizing control strategies. We propose a novel RL framework that directly optimizes non-linear, differentiable objective functions. Our work comprises the first step towards non-linear objective function optimization in quantum networks with RL, opening a path towards more advanced use cases.
arXiv Detail & Related papers (2025-09-12T18:41:10Z)
- QFOR: A Fidelity-aware Orchestrator for Quantum Computing Environments using Deep Reinforcement Learning [19.006907700170693]
Quantum cloud computing enables remote access to quantum processors, yet the heterogeneity and noise of quantum hardware complicate resource orchestration. Here, we propose QFOR, a Quantum Fidelity-aware Orchestration of tasks across heterogeneous quantum nodes in cloud-based environments using deep reinforcement learning. Our framework balances overall quantum task execution fidelity and time, enabling adaptation to different operational priorities.
arXiv Detail & Related papers (2025-08-07T02:00:50Z)
- VQC-MLPNet: An Unconventional Hybrid Quantum-Classical Architecture for Scalable and Robust Quantum Machine Learning [60.996803677584424]
Variational Quantum Circuits (VQCs) offer a novel pathway for quantum machine learning. Their practical application is hindered by inherent limitations such as constrained linear expressivity, optimization challenges, and acute sensitivity to quantum hardware noise. This work introduces VQC-MLPNet, a scalable and robust hybrid quantum-classical architecture designed to overcome these obstacles.
arXiv Detail & Related papers (2025-06-12T01:38:15Z)
- Fast Machine Learning for Quantum Control of Microwave Qudits on Edge Hardware [1.4029771773420519]
It is paramount to have systems with extremely low delays to quickly adjust quantum hardware settings, where fidelity is defined as overlap with a target quantum state. Here, we utilize machine learning (ML) models to determine control-pulse parameters for preparing Selective Number-dependent Arbitrary Phase (SNAP) gates in microwave cavity qudits. Our results demonstrate the efficacy of the proposed approach, with optimized models achieving low gate trace infidelity near $10^{-3}$ and efficient utilization of programmable logic resources.
arXiv Detail & Related papers (2025-06-03T19:14:18Z)
- Minimal Quantum Reservoirs with Hamiltonian Encoding [72.27323884094953]
We investigate a minimal architecture for quantum reservoir computing based on Hamiltonian encoding. This approach circumvents many of the experimental overheads typically associated with quantum machine learning.
arXiv Detail & Related papers (2025-05-28T16:50:05Z)
- Provably Robust Training of Quantum Circuit Classifiers Against Parameter Noise [49.97673761305336]
Noise remains a major obstacle to achieving reliable quantum algorithms. We present a provably noise-resilient training theory and algorithm to enhance the robustness of parameterized quantum circuit classifiers.
arXiv Detail & Related papers (2025-05-24T02:51:34Z)
- QuartDepth: Post-Training Quantization for Real-Time Depth Estimation on the Edge [55.75103034526652]
We propose QuartDepth, which adopts post-training quantization to quantize MDE models with hardware acceleration for ASICs. Our approach involves quantizing both weights and activations to 4-bit precision, reducing the model size and computation cost. We design a flexible and programmable hardware accelerator by supporting kernel fusion and customized instruction programmability.
arXiv Detail & Related papers (2025-03-20T21:03:10Z)
- Quantizing Large Language Models for Code Generation: A Differentiated Replication [51.85505914274633]
Large Language Models (LLMs) have shown an impressive capability in code generation and, specifically, in automatically implementing requirements described in natural language. However, LLMs pose significant challenges related to their memory (and, consequently, carbon) footprint. A new frontier for LLM quantization is 4-bit precision, resulting in an average memory footprint reduction of 70%.
arXiv Detail & Related papers (2025-03-10T09:26:08Z)
- PennyLang: Pioneering LLM-Based Quantum Code Generation with a Novel PennyLane-Centric Dataset [4.826802034066811]
Large Language Models (LLMs) offer remarkable capabilities in code generation, natural language processing, and domain-specific reasoning. We introduce a novel, high-quality dataset comprising 3,347 PennyLane-specific quantum code samples and contextual descriptions.
arXiv Detail & Related papers (2025-03-04T11:04:35Z)
- Programming Variational Quantum Circuits with Quantum-Train Agent [3.360429911727189]
The Quantum-Train Quantum Fast Weight Programmer (QT-QFWP) framework is proposed, which facilitates the efficient and scalable programming of variational quantum circuits (VQCs). This approach offers a significant advantage over conventional hybrid quantum-classical models by optimizing both quantum and classical parameter management. QT-QFWP outperforms related models in both efficiency and predictive accuracy, providing a pathway toward more practical and cost-effective quantum machine learning applications.
arXiv Detail & Related papers (2024-12-02T06:26:09Z)
- QCircuitBench: A Large-Scale Dataset for Benchmarking Quantum Algorithm Design [63.02824918725805]
Quantum computing is recognized for the significant speedup it offers over classical computing through quantum algorithms. QCircuitBench is the first benchmark dataset designed to evaluate AI's capability in designing and implementing quantum algorithms.
arXiv Detail & Related papers (2024-10-10T14:24:30Z)
- Performance analysis of a filtering variational quantum algorithm [0.0]
The Filtering Variational Quantum Eigensolver (F-VQE) is a variational hybrid quantum algorithm designed to solve optimization problems on existing quantum computers. We employ Instantaneous Quantum Polynomial circuits as our parameterized quantum circuits. Despite some observed positive signs, we conclude that significant development is necessary for a practical advantage with F-VQE.
arXiv Detail & Related papers (2024-04-13T08:50:44Z)
- On-Chip Hardware-Aware Quantization for Mixed Precision Neural Networks [52.97107229149988]
We propose an On-Chip Hardware-Aware Quantization framework, performing hardware-aware mixed-precision quantization on deployed edge devices.
For efficiency metrics, we built an On-Chip Quantization Aware pipeline, which allows the quantization process to perceive the actual hardware efficiency of the quantization operator.
For accuracy metrics, we propose Mask-Guided Quantization Estimation technology to effectively estimate the accuracy impact of operators in the on-chip scenario.
arXiv Detail & Related papers (2023-09-05T04:39:34Z)
- Synergy Between Quantum Circuits and Tensor Networks: Short-cutting the Race to Practical Quantum Advantage [43.3054117987806]
We introduce a scalable procedure for harnessing classical computing resources to provide pre-optimized initializations for quantum circuits.
We show this method significantly improves the trainability and performance of PQCs on a variety of problems.
By demonstrating a means of boosting limited quantum resources using classical computers, our approach illustrates the promise of this synergy between quantum and quantum-inspired models in quantum computing.
arXiv Detail & Related papers (2022-08-29T15:24:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.