Distribution-Adaptive Dynamic Shot Optimization for Variational Quantum Algorithms
- URL: http://arxiv.org/abs/2412.17485v1
- Date: Mon, 23 Dec 2024 11:28:44 GMT
- Title: Distribution-Adaptive Dynamic Shot Optimization for Variational Quantum Algorithms
- Authors: Youngmin Kim, Enhyeok Jang, Hyungseok Kim, Seungwoo Choi, Changhun Lee, Donghwi Kim, Woomin Kyoung, Kyujin Shin, Won Woo Ro
- Abstract summary: Variational quantum algorithms (VQAs) have attracted remarkable interest because of their potential computational advantages.
We propose a distribution-adaptive dynamic shot (DDS) framework that efficiently adjusts the number of shots per iteration in VQAs.
Our results demonstrate that the framework sustains inference accuracy while achieving a ~50% reduction in average shot count compared to fixed-shot training.
- Score: 11.357031710307709
- Abstract: Variational quantum algorithms (VQAs) have attracted remarkable interest over the past few years because of their potential computational advantages on near-term quantum devices. They leverage a hybrid approach that integrates classical and quantum computing resources to solve high-dimensional problems that are challenging for classical approaches alone. In the training process of variational circuits, constructing an accurate probability distribution for each epoch is not always necessary, creating opportunities to reduce computational costs through shot reduction. However, existing shot-allocation methods that capitalize on this potential often lack adaptive feedback or are tied to specific classical optimizers, which limits their applicability to common VQAs and broader optimization techniques. Our observations indicate that the information entropy of a quantum circuit's output distribution exhibits an approximately exponential relationship with the number of shots needed to achieve a target Hellinger distance. In this work, we propose a distribution-adaptive dynamic shot (DDS) framework that efficiently adjusts the number of shots per iteration in VQAs using the entropy distribution from the prior training epoch. Our results demonstrate that the DDS framework sustains inference accuracy while achieving a ~50% reduction in average shot count compared to fixed-shot training, and ~60% higher accuracy than recently proposed tiered shot allocation methods. Furthermore, in noisy simulations that reflect the error rates of actual IBM quantum systems, DDS achieves a ~30% reduction in the total number of shots compared to the fixed-shot method with minimal degradation in accuracy, and offers ~70% higher computational accuracy than tiered shot allocation methods.
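The core idea of the abstract, using the entropy of the previous epoch's output distribution to set the next epoch's shot budget via an approximately exponential relationship, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the constants `a`, `b`, `s_min`, and `s_max` are hypothetical placeholders, not fitted values from the paper.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of an empirical measurement-outcome distribution,
    given as a dict mapping bitstrings to observed counts."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def adaptive_shots(prev_counts, s_min=128, s_max=8192, a=100.0, b=0.5):
    """Map the prior epoch's entropy to a shot budget, assuming the
    approximately exponential entropy-shot relationship described in the
    abstract: shots ~ a * exp(b * H). The coefficients a and b are
    illustrative assumptions; a real system would calibrate them against
    a target Hellinger distance."""
    h = shannon_entropy(prev_counts)
    shots = int(a * math.exp(b * h))
    return max(s_min, min(s_max, shots))

# A sharply peaked distribution (low entropy) needs few shots, so the
# budget clamps to the floor; a uniform distribution (high entropy)
# requests more shots.
peaked = adaptive_shots({"00": 1000})
uniform = adaptive_shots({"00": 250, "01": 250, "10": 250, "11": 250})
```

A training loop would call `adaptive_shots` once per epoch, feeding in the measurement counts from the previous epoch before executing the circuit.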
Related papers
- Entanglement Distribution Delay Optimization in Quantum Networks with Distillation [51.53291671169632]
Quantum networks (QNs) distribute entangled states to enable distributed quantum computing and sensing applications.
A QS resource allocation framework is proposed to enhance the end-to-end (e2e) fidelity and satisfy minimum rate and fidelity requirements.
arXiv Detail & Related papers (2024-05-15T02:04:22Z) - Artificial-Intelligence-Driven Shot Reduction in Quantum Measurement [6.649102874357367]
Variational Quantum Eigensolver (VQE) provides a powerful solution for approximating molecular ground state energies.
Estimating probabilistic outcomes on quantum hardware requires repeated measurements (shots).
This paper proposes a reinforcement-learning-based approach that automatically learns shot-assignment policies to minimize the total number of measurement shots.
arXiv Detail & Related papers (2024-05-03T21:51:07Z) - SantaQlaus: A resource-efficient method to leverage quantum shot-noise
for optimization of variational quantum algorithms [1.0634978400374293]
We introduce SantaQlaus, a resource-efficient optimization algorithm tailored for variational quantum algorithms (VQAs).
We show that SantaQlaus outperforms existing algorithms in mitigating the risks of converging to poor local optima.
This paves the way for efficient and robust training of quantum variational models.
arXiv Detail & Related papers (2023-12-25T18:58:20Z) - Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z) - Near-Term Distributed Quantum Computation using Mean-Field Corrections
and Auxiliary Qubits [77.04894470683776]
We propose near-term distributed quantum computing schemes that involve limited information transfer and conservative entanglement production.
We build upon these concepts to produce an approximate circuit-cutting technique for the fragmented pre-training of variational quantum algorithms.
arXiv Detail & Related papers (2023-09-11T18:00:00Z) - Faster variational quantum algorithms with quantum kernel-based
surrogate models [0.0]
We present a new method for small-to-intermediate scale variational algorithms on noisy quantum processors.
Our scheme shifts the computational burden onto the classical component of these hybrid algorithms, greatly reducing the number of queries to the quantum processor.
arXiv Detail & Related papers (2022-11-02T14:11:25Z) - Stochastic Gradient Line Bayesian Optimization: Reducing Measurement
Shots in Optimizing Parameterized Quantum Circuits [4.94950858749529]
We develop an efficient framework for circuit optimization with fewer measurement shots.
We formulate an adaptive measurement-shot strategy to achieve the optimization feasibly without relying on precise expectation-value estimation.
We show that a technique of suffix averaging can significantly reduce the effect of statistical and hardware noise in the optimization for the VQAs.
arXiv Detail & Related papers (2021-11-15T18:00:14Z) - Adaptive shot allocation for fast convergence in variational quantum
algorithms [0.0]
We present a new gradient descent method using an adaptive number of shots at each step, called the global Coupled Adaptive Number of Shots (gCANS) method.
These improvements reduce both the time and money required to run VQAs on current cloud platforms.
arXiv Detail & Related papers (2021-08-23T22:29:44Z) - FasterPose: A Faster Simple Baseline for Human Pose Estimation [65.8413964785972]
We propose a design paradigm for cost-effective network with LR representation for efficient pose estimation, named FasterPose.
We study the training behavior of FasterPose, and formulate a novel regressive cross-entropy (RCE) loss function for accelerating the convergence.
Compared with the previously dominant network of pose estimation, our method reduces 58% of the FLOPs and simultaneously gains 1.3% improvement of accuracy.
arXiv Detail & Related papers (2021-07-07T13:39:08Z) - DAQ: Distribution-Aware Quantization for Deep Image Super-Resolution
Networks [49.191062785007006]
Quantizing deep convolutional neural networks for image super-resolution substantially reduces their computational costs.
Existing works either suffer from a severe performance drop in ultra-low precision of 4 or lower bit-widths, or require a heavy fine-tuning process to recover the performance.
We propose a novel distribution-aware quantization scheme (DAQ) which facilitates accurate training-free quantization in ultra-low precision.
arXiv Detail & Related papers (2020-12-21T10:19:42Z) - Fully Quantized Image Super-Resolution Networks [81.75002888152159]
We propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy.
We apply our quantization scheme on multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR.
Our FQSR using low bits quantization can achieve on par performance compared with the full-precision counterparts on five benchmark datasets.
arXiv Detail & Related papers (2020-11-29T03:53:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.