Greedy versus Map-based Optimized Adaptive Algorithms for
random-telegraph-noise mitigation by spectator qubits
- URL: http://arxiv.org/abs/2205.12566v3
- Date: Wed, 21 Dec 2022 04:58:20 GMT
- Authors: Behnam Tonekaboni, Areeya Chantasri, Hongting Song, Yanan Liu, Howard
M. Wiseman
- Abstract summary: In a scenario where data-storage qubits are kept in isolation as far as possible, noise mitigation can still be done using additional noise probes.
We construct a theoretical model assuming projective measurements on the qubits, and derive the performance of different measurement and control strategies.
We show, analytically and numerically, that MOAAAR outperforms the Greedy algorithm, especially in the regime of high noise sensitivity of the SQ.
- Score: 6.305016513788048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a scenario where data-storage qubits are kept in isolation as far as
possible, with minimal measurements and controls, noise mitigation can still be
done using additional noise probes, with corrections applied only when needed.
Motivated by the case of solid-state qubits, we consider dephasing noise
arising from a two-state fluctuator, described by random telegraph process, and
a noise probe which is also a qubit, a so-called spectator qubit (SQ). We
construct the theoretical model assuming projective measurements on the SQ, and
derive the performance of different measurement and control strategies in the
regime where the noise mitigation works well. We start with the Greedy
algorithm; that is, the strategy that always maximizes the data qubit coherence
in the immediate future. We show numerically that this algorithm works very
well, and find that its adaptive strategy can be well approximated by a simpler
algorithm with just a few parameters. Based on this, and an analytical
construction using Bayesian maps, we design a one-parameter ($\Theta$) family
of algorithms. In the asymptotic regime of high noise-sensitivity of the SQ, we
show analytically that this $\Theta$-family of algorithms reduces the data
qubit decoherence rate by a divisor scaling as the square of this sensitivity.
Setting $\Theta$ equal to its optimal value, $\Theta^\star$, yields the
Map-based Optimized Adaptive Algorithm for Asymptotic Regime (MOAAAR). We show,
analytically and numerically, that MOAAAR outperforms the Greedy algorithm,
especially in the regime of high noise sensitivity of the SQ.
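The dephasing mechanism described above can be illustrated with a minimal Monte Carlo sketch. This is a hypothetical illustration, not code from the paper: a two-state fluctuator flips between $\pm 1$ at an assumed rate `gamma` (a random telegraph process) and shifts the data qubit's frequency by $\pm$`kappa`; the qubit coherence is the trajectory average of the accumulated random phase. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rtn_coherence(gamma=1.0, kappa=0.5, t_max=5.0, dt=0.01, n_traj=2000):
    """Coherence |<exp(i*phi)>| of a qubit dephased by random telegraph noise.

    gamma  : fluctuator switching rate (flips per unit time)
    kappa  : frequency shift induced by the fluctuator state z = +/-1
    Returns the coherence at each time step, averaged over n_traj trajectories.
    """
    n_steps = int(t_max / dt)
    z = rng.choice([-1.0, 1.0], size=n_traj)   # initial fluctuator states
    phase = np.zeros(n_traj)
    coherence = np.empty(n_steps)
    for k in range(n_steps):
        phase += kappa * z * dt                # accumulate dephasing phase
        flips = rng.random(n_traj) < gamma * dt  # flip prob. gamma*dt per step
        z[flips] *= -1.0
        coherence[k] = np.abs(np.mean(np.exp(1j * phase)))
    return coherence

c = rtn_coherence()
print(c[0], c[-1])  # coherence starts near 1 and decays under the RTN
```

In this toy model the coherence decays because each trajectory accumulates a different random phase; a spectator-qubit strategy of the kind studied in the paper would estimate the fluctuator state and apply corrective phases to slow this decay.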
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Error-mitigated fermionic classical shadows on noisy quantum devices [0.3775283002059579]
Classical shadow (CS) algorithms offer a solution by reducing the number of quantum state copies needed.
We propose an error-mitigated CS algorithm assuming gate-independent, time-stationary, and Markovian (GTM) noise.
Our algorithm efficiently estimates $k$-RDMs with $\widetilde{\mathcal{O}}(kn^k)$ state copies and $\widetilde{\mathcal{O}}(\sqrt{n})$ calibration measurements for GTM noise.
arXiv Detail & Related papers (2023-10-19T13:27:19Z) - Adaptive Strategies in Non-convex Optimization [5.279475826661643]
An algorithm is said to be adaptive to a certain parameter if it does not need a priori knowledge of such a parameter.
This dissertation presents our work on adaptive algorithms in three scenarios.
arXiv Detail & Related papers (2023-06-17T06:52:05Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process
Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA [0.0]
This paper describes a novel non-parametric noise reduction technique from the point of view of Bayesian inference.
It iteratively evaluates possible smoothed versions of the data, the smooth models, obtaining an estimation of the underlying signal.
Iterations stop based on the evidence and the $\chi^2$ statistic of the last smooth model, and we compute the expected value of the signal.
arXiv Detail & Related papers (2022-01-13T18:54:31Z) - Modeling and mitigation of cross-talk effects in readout noise with
applications to the Quantum Approximate Optimization Algorithm [0.0]
Noise mitigation can be performed up to some error for which we derive upper bounds.
We perform experiments on 15 (23) qubits using IBM's devices to test both the noise model and the error-mitigation scheme.
We show that similar effects are expected for Haar-random quantum states and states generated by shallow-depth random circuits.
arXiv Detail & Related papers (2021-01-07T02:19:58Z) - Qubit-efficient entanglement spectroscopy using qubit resets [0.0]
We develop qubit-efficient quantum algorithms for entanglement spectroscopy on NISQ devices.
Our algorithms use fewer qubits than any previous efficient algorithm while achieving similar performance in the presence of noise.
We also introduce the notion of effective circuit depth as a generalization of standard circuit depth suitable for circuits with qubit resets.
arXiv Detail & Related papers (2020-10-06T23:22:57Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)