Optimizing Circuit Reusing and its Application in Randomized Benchmarking
- URL: http://arxiv.org/abs/2407.15582v1
- Date: Mon, 22 Jul 2024 12:18:12 GMT
- Title: Optimizing Circuit Reusing and its Application in Randomized Benchmarking
- Authors: Zhuo Chen, Guoding Liu, Xiongfeng Ma
- Abstract summary: Quantum learning tasks often leverage randomly sampled quantum circuits to characterize unknown systems.
An efficient approach known as "circuit reusing," where each circuit is executed multiple times, reduces the cost compared to implementing new circuits.
This work investigates the optimal reusing parameter that minimizes the variance of measurement outcomes for a given experimental cost.
- Score: 5.783105931700547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum learning tasks often leverage randomly sampled quantum circuits to characterize unknown systems. An efficient approach known as "circuit reusing," where each circuit is executed multiple times, reduces the cost compared to implementing new circuits. This work investigates the optimal reusing parameter that minimizes the variance of measurement outcomes for a given experimental cost. We establish a theoretical framework connecting the variance of experimental estimators with the reusing parameter R. An optimal R is derived when the implemented circuits and their noise characteristics are known. Additionally, we introduce a near-optimal reusing strategy that is applicable even without prior knowledge of circuits or noise, achieving variances close to the theoretical minimum. To validate our framework, we apply it to randomized benchmarking and analyze the optimal R for various typical noise channels. We further conduct experiments on a superconducting platform, revealing a non-linear relationship between R and the cost, contradicting previous assumptions in the literature. Our theoretical framework successfully incorporates this non-linearity and accurately predicts the experimentally observed optimal R. These findings underscore the broad applicability of our approach to experimental realizations of quantum learning protocols.
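The cost-variance trade-off described in the abstract can be sketched numerically. This is an illustrative model, not the paper's exact formulation: it assumes a linear cost per circuit (compilation cost plus a per-shot cost times the reuse parameter R, whereas the paper reports this relation can be non-linear on hardware) and an estimator variance that splits into a circuit-sampling term and a shot-noise term. All symbols and function names are hypothetical:

```python
import numpy as np

def variance_at_budget(R, sigma_circ2, sigma_shot2, c_circ, c_shot, budget):
    """Estimator variance at reuse level R under a fixed experimental budget.

    Assumes a linear cost model: each distinct circuit costs
    c_circ (compilation) + c_shot * R (R repeated executions).
    """
    # Number of distinct circuits affordable at this reuse level
    n_circuits = budget / (c_circ + c_shot * R)
    # Law of total variance: circuit-sampling term + shot-noise term,
    # averaged over the n_circuits independent circuits
    return (sigma_circ2 + sigma_shot2 / R) / n_circuits

def optimal_R(sigma_circ2, sigma_shot2, c_circ, c_shot):
    """Closed-form minimizer of variance_at_budget under the linear cost model.

    Obtained by setting d/dR [(sigma_circ2 + sigma_shot2/R)(c_circ + c_shot*R)]
    to zero.
    """
    return np.sqrt(sigma_shot2 * c_circ / (sigma_circ2 * c_shot))

# Usage: compare the closed form against a grid search
R_star = optimal_R(1.0, 4.0, 10.0, 0.1)
Rs = np.linspace(1, 100, 9901)
R_grid = Rs[np.argmin(variance_at_budget(Rs, 1.0, 4.0, 10.0, 0.1, 1000.0))]
```

When the per-shot cost dominates the compilation cost, the optimal R shrinks toward 1 (fresh circuits are cheap in relative terms); when compilation dominates, heavier reuse pays off.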
Related papers
- Statistical Properties of Robust Satisficing [5.0139295307605325]
The Robust Satisficing (RS) model is an emerging approach to robust optimization.
This paper comprehensively analyzes the theoretical properties of the RS model.
Our experiments show that the RS model consistently outperforms the baseline empirical risk in small-sample regimes.
arXiv Detail & Related papers (2024-05-30T19:57:28Z)
- Variance-Reducing Couplings for Random Features: Perspectives from Optimal Transport [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning, replacing exact kernel evaluations with Monte Carlo estimates.
We tackle this through the unifying framework of optimal transport, using theoretical insights and numerical algorithms to develop novel, high-performing RF couplings for kernels defined on Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
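The Monte Carlo estimate mentioned in this summary can be illustrated with standard (i.i.d.) random Fourier features for the Gaussian kernel; this is the textbook construction, not the coupled variant the paper develops:

```python
import numpy as np

def rff_features(X, n_features, lengthscale=1.0, seed=None):
    """Random Fourier features approximating the Gaussian (RBF) kernel.

    k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)) is approximated by
    z(x) @ z(y), where z uses randomly sampled frequencies. The error
    shrinks as n_features grows (Monte Carlo rate).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density (Gaussian)
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Usage: compare the feature-map estimate with the exact kernel value
X = np.array([[0.0, 0.0], [1.0, 0.0]])
Z = rff_features(X, 20000, lengthscale=1.0, seed=0)
approx = Z[0] @ Z[1]          # Monte Carlo estimate
exact = np.exp(-0.5)          # exp(-||x-y||^2 / 2)
```

Variance-reducing couplings replace the i.i.d. frequency draws `W` with correlated draws whose pairwise dependence lowers the estimator's variance.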
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
- Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization [29.24821214671497]
Training machine learning and statistical models often involve optimizing a data-driven risk criterion.
We propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences.
For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations.
arXiv Detail & Related papers (2024-01-28T21:19:15Z)
- Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint [56.74058752955209]
This paper studies the alignment of generative models via Reinforcement Learning from Human Feedback (RLHF).
We first identify a primary challenge of popular existing methods such as offline PPO and offline DPO: they lack strategic exploration of the environment.
We propose efficient algorithms with finite-sample theoretical guarantees.
arXiv Detail & Related papers (2023-12-18T18:58:42Z)
- Estimating Koopman operators with sketching to provably learn large scale dynamical systems [37.18243295790146]
The theory of Koopman operators allows non-parametric machine learning algorithms to be deployed for predicting and analyzing complex dynamical systems.
We boost the efficiency of different kernel-based Koopman operator estimators using random projections.
We establish non-asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency.
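The random-projection idea behind this speed-up can be sketched on a generic least-squares problem (the building block of kernel-based operator estimators); this is a generic Gaussian sketch, not the paper's Koopman-specific estimator, and all names are illustrative:

```python
import numpy as np

def sketched_least_squares(A, b, sketch_size, seed=None):
    """Approximately solve min_x ||A x - b|| by sketching the rows of A.

    A Gaussian random projection S compresses the n-row problem down to
    sketch_size rows, trading a small statistical error for a large
    reduction in computational cost when sketch_size << n.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S = rng.normal(size=(sketch_size, n)) / np.sqrt(sketch_size)
    # Solve the much smaller sketched problem min_x ||S A x - S b||
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: recover the coefficients of a noiseless overdetermined system
rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 5))
x_true = rng.normal(size=5)
b = A @ x_true
x_hat = sketched_least_squares(A, b, 200, seed=2)
```

Because b lies exactly in the range of A here, the sketched solve recovers x_true; with noise, the sketch size controls the accuracy/cost trade-off, which is the kind of trade-off the error bounds above characterize.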
arXiv Detail & Related papers (2023-06-07T15:30:03Z)
- Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired.
arXiv Detail & Related papers (2023-05-29T15:00:09Z)
- Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation [54.72195809248172]
We present a new estimator leveraging our proposed novel concept, that involves retrospective reshuffling of participants across experimental arms at the end of an RCT.
We prove theoretically that such an estimator is more accurate than common estimators based on sample means.
arXiv Detail & Related papers (2023-02-06T05:17:22Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- Theoretical Modeling of the Iterative Properties of User Discovery in a Collaborative Filtering Recommender System [0.0]
The closed feedback loop in recommender systems is a common setting that can lead to different types of biases.
We present a theoretical framework to model the evolution of the different components of a recommender system operating within a feedback loop setting.
Our findings lay the theoretical basis for quantifying the effect of feedback loops and for designing Artificial Intelligence and machine learning algorithms.
arXiv Detail & Related papers (2020-08-21T20:30:39Z)
- Millimeter Wave Communications with an Intelligent Reflector: Performance Optimization and Distributional Reinforcement Learning [119.97450366894718]
A novel framework is proposed to optimize the downlink multi-user communication of a millimeter wave base station.
A channel estimation approach is developed to measure the channel state information (CSI) in real-time.
A distributional reinforcement learning (DRL) approach is proposed to learn the optimal IR reflection and maximize the expectation of downlink capacity.
arXiv Detail & Related papers (2020-02-24T22:18:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.