Optimization Framework for Reducing Mid-circuit Measurements and Resets
- URL: http://arxiv.org/abs/2504.16579v1
- Date: Wed, 23 Apr 2025 10:01:00 GMT
- Title: Optimization Framework for Reducing Mid-circuit Measurements and Resets
- Authors: Yanbin Chen, Innocenzo Fulginiti, Christian B. Mendl
- Abstract summary: We implement an optimization framework that targets both mid-circuit measurements and resets. We evaluate our framework using a large dataset of randomly generated dynamic circuits.
- Score: 0.13108652488669736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper addresses the optimization of dynamic circuits in quantum computing, with a focus on reducing the cost of mid-circuit measurements and resets. We extend the probabilistic circuit model (PCM) and implement an optimization framework that targets both mid-circuit measurements and resets. To overcome the limitation of the prior PCM-based pass, where optimizations are only possible on pure single-qubit states, we incorporate circuit synthesis to enable optimizations on multi-qubit states. With a parameter $n_{pcm}$, our framework balances optimization level against resource usage. We evaluate our framework using a large dataset of randomly generated dynamic circuits. Experimental results demonstrate that our method is highly effective in reducing mid-circuit measurements and resets. In our demonstrative example, when applying our optimization framework to the Bernstein-Vazirani algorithm after employing qubit reuse, we significantly reduce its runtime overhead by removing all of the resets.
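As a rough illustration of the kind of rewrite such a pass performs, the sketch below removes resets that are provably redundant because the qubit is already known to be in the |0> state. This is a toy stand-in over a flat gate list, not the paper's PCM-based implementation; the representation and rules are assumptions made for illustration.

```python
# Toy pass: a reset is a no-op when the qubit is already known to be in |0>.
# Simplified stand-in for PCM-style reasoning, not the paper's actual pass.

def remove_redundant_resets(circuit, num_qubits):
    """circuit: list of (op, qubit) tuples, e.g. op in {"h", "x", "measure", "reset"}."""
    in_zero = [True] * num_qubits  # every qubit starts in |0>
    optimized = []
    for op, q in circuit:
        if op == "reset":
            if in_zero[q]:
                continue           # provably redundant: drop it
            in_zero[q] = True      # after a kept reset, the qubit is in |0>
        else:
            in_zero[q] = False     # gates and measurements leave |0> in general
        optimized.append((op, q))
    return optimized

circ = [("reset", 0), ("h", 0), ("measure", 0), ("reset", 0), ("reset", 0)]
print(remove_redundant_resets(circ, 1))
# → [('h', 0), ('measure', 0), ('reset', 0)]
```

The leading reset (qubit freshly in |0>) and the duplicated trailing reset are dropped; the reset after the measurement is kept because the measured state is unknown at compile time.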
Related papers
- Measurement-Guided State Refinement for Shallow Feedback-Based Quantum Optimization Algorithm [0.0]
Limited circuit depth remains a central constraint for quantum optimization in noisy quantum regimes. We introduce Measurement-Guided Initialization (MGI), an iterative strategy that uses measurement outcomes from previous executions. We show that MGI improves the performance of shallow-depth circuits and enables iterative refinement toward high-quality solutions.
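A generic feel for such measurement-guided loops: each round reuses the previous run's measurement outcomes as the next run's initialization. The toy "circuit" and its drift-toward-target behavior below are assumptions for illustration only, not the MGI algorithm itself.

```python
import numpy as np

# Hedged sketch of a measurement-guided refinement loop: outcomes of one
# (simulated) run seed the next run. Toy stand-in, not the paper's method.

rng = np.random.default_rng(0)
n_bits = 6
target = rng.integers(0, 2, size=n_bits)   # hidden "good" bitstring

def run_circuit(init_bits, flip_prob=0.2):
    # stand-in for a shallow circuit execution: each bit independently
    # moves toward the target with some probability, then is "measured"
    out = init_bits.copy()
    move = rng.random(n_bits) < flip_prob
    out[move] = target[move]
    return out

bits = rng.integers(0, 2, size=n_bits)     # random initialization
for _ in range(20):
    bits = run_circuit(bits)               # reuse outcomes as next init

agreement = (bits == target).mean()
```

With each round seeded by the last, agreement with the target bitstring improves monotonically in this toy model, which is the qualitative point of iterative refinement.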
arXiv Detail & Related papers (2026-02-23T23:07:11Z) - Quantum Circuit Generation via test-time learning with large language models [0.0]
Large language models (LLMs) can generate structured artifacts, but using them as dependable tools for scientific design requires a mechanism for iterative improvement under black-box evaluation. Here, we cast quantum circuit synthesis as a closed-loop, test-time optimization problem: an LLM proposes edits to a fixed-length gate list, and an external simulator evaluates the resulting state with the Meyer-Wallach (MW) global entanglement measure. We introduce a lightweight test-time learning recipe that reuses prior high-performing candidates as an explicit memory trace, augments prompts with score-difference feedback, and applies restart-from-
arXiv Detail & Related papers (2026-02-03T12:41:25Z) - Blockwise Optimization for Projective Variational Quantum Dynamics (BLOP-VQD): Algorithm and Implementation for Lattice Systems [0.0]
We present an efficient approach to simulate real-time quantum dynamics using Projected Variational Quantum Dynamics. Our method selectively optimizes one block at a time while keeping the others fixed, allowing for significant reductions in computational overhead. We demonstrate the performance of the proposed method in a series of spin-lattice models of varying size and complexity.
arXiv Detail & Related papers (2025-03-24T01:48:37Z) - Transformer-based Model Predictive Control: Trajectory Optimization via Sequence Modeling [16.112708478263745]
We present a unified framework that combines the main strengths of optimization-based methods with learning.
Our approach entails embedding high-capacity, transformer-based neural network models within the optimization process.
Compared to purely optimization-based approaches, results show that our approach can improve performance by up to 75%.
arXiv Detail & Related papers (2024-10-31T13:23:10Z) - Bypass Back-propagation: Optimization-based Structural Pruning for Large Language Models via Policy Gradient [57.9629676017527]
We propose an optimization-based structural pruning method that learns the pruning masks in a probabilistic space directly by optimizing the loss of the pruned model. We achieve this by learning an underlying Bernoulli distribution to sample binary pruning masks. Experiments conducted on LLaMA, LLaMA-2, LLaMA-3, Vicuna, and Mistral models demonstrate the promising performance of our method in efficiency and effectiveness.
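The core mechanism, learning Bernoulli keep-probabilities and updating them from sampled binary masks with a score-function (REINFORCE) gradient, can be sketched in a few lines. This is a heavily simplified illustration with a toy loss, not the paper's implementation; all names and the objective are assumptions.

```python
import numpy as np

# Hedged sketch: learn per-weight Bernoulli keep-probabilities by sampling
# binary masks and applying a REINFORCE update against the masked loss.
# No baseline/variance reduction, for simplicity.

rng = np.random.default_rng(0)
w = rng.normal(size=8)                  # toy "model weights"
theta = np.zeros(8)                     # logits of keep-probabilities
lr, sparsity_weight = 0.05, 0.1

def loss(mask):
    # toy loss: error from zeroing weights, plus a penalty on kept weights
    return np.sum((w * mask - w) ** 2) + sparsity_weight * mask.sum()

for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-theta))    # keep-probabilities
    mask = (rng.random(8) < p).astype(float)
    # grad of log-prob of the sampled mask w.r.t. theta is (mask - p);
    # scaling by the loss gives the score-function gradient estimate
    theta -= lr * loss(mask) * (mask - p)

p = 1.0 / (1.0 + np.exp(-theta))
```

Weights whose removal costs little (small |w|) tend to drift toward low keep-probability under the sparsity penalty, which is the qualitative behavior the probabilistic-mask formulation is after.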
arXiv Detail & Related papers (2024-06-15T09:31:03Z) - Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Graph Neural Network Autoencoders for Efficient Quantum Circuit Optimisation [69.43216268165402]
We present for the first time how to use graph neural network (GNN) autoencoders for the optimisation of quantum circuits.
We construct directed acyclic graphs from the quantum circuits, encode the graphs and use the encodings to represent RL states.
Our method is a realistic first step towards very large scale RL quantum circuit optimisation.
arXiv Detail & Related papers (2023-03-06T16:51:30Z) - Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z) - Stochastic Gradient Line Bayesian Optimization: Reducing Measurement Shots in Optimizing Parameterized Quantum Circuits [4.94950858749529]
We develop an efficient framework for circuit optimization with fewer measurement shots.
We formulate an adaptive measurement-shot strategy to achieve the optimization feasibly without relying on precise expectation-value estimation.
We show that a technique of suffix averaging can significantly reduce the effect of statistical and hardware noise in the optimization for the VQAs.
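Suffix averaging itself is a small, generic trick: average the last fraction of noisy parameter iterates rather than trusting the final one. The sketch below is a plain illustration of the technique under assumed noise, not the paper's exact procedure.

```python
import numpy as np

# Hedged sketch of suffix averaging: average the tail of a noisy
# optimization trajectory to suppress statistical fluctuations in the
# final parameter estimate.

def suffix_average(iterates, fraction=0.5):
    """Average the last `fraction` of a sequence of parameter vectors."""
    iterates = np.asarray(iterates, dtype=float)
    start = int(len(iterates) * (1 - fraction))
    return iterates[start:].mean(axis=0)

# iterates fluctuating around a true optimum of 1.0 (simulated shot noise)
rng = np.random.default_rng(1)
iters = 1.0 + 0.1 * rng.normal(size=(1000, 1))
avg = suffix_average(iters, fraction=0.5)
```

Averaging 500 tail iterates shrinks the standard error by roughly sqrt(500) relative to a single noisy final iterate, which is why the technique helps against statistical and hardware noise.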
arXiv Detail & Related papers (2021-11-15T18:00:14Z) - An Efficient Batch Constrained Bayesian Optimization Approach for Analog Circuit Synthesis via Multi-objective Acquisition Ensemble [11.64233949999656]
We propose an efficient parallelizable Bayesian optimization algorithm via Multi-objective ACquisition function Ensemble (MACE).
Our proposed algorithm can reduce the overall simulation time by up to 74 times compared to differential evolution (DE) for the unconstrained optimization problem when the batch size is 15.
For the constrained optimization problem, our proposed algorithm can speed up the optimization process by up to 15 times compared to the weighted expected improvement based Bayesian optimization (WEIBO) approach, when the batch size is 15.
arXiv Detail & Related papers (2021-06-28T13:21:28Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly achieve real inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.