Variational quantum algorithm for enhanced continuous variable optical
phase sensing
- URL: http://arxiv.org/abs/2312.13870v1
- Date: Thu, 21 Dec 2023 14:11:05 GMT
- Title: Variational quantum algorithm for enhanced continuous variable optical
phase sensing
- Authors: Jens A. H. Nielsen, Mateusz Kicinski, Tummas N. Arge, Kannan
Vijayadharan, Jonathan Foldager, Johannes Borregaard, Johannes Jakob Meyer,
Jonas S. Neergaard-Nielsen, Tobias Gehring and Ulrik L. Andersen
- Abstract summary: Variational quantum algorithms (VQAs) are hybrid quantum-classical approaches used for tackling a wide range of problems on noisy quantum devices.
We implement a variational algorithm designed for optimized parameter estimation on a continuous variable platform based on squeezed light.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational quantum algorithms (VQAs) are hybrid quantum-classical approaches
used for tackling a wide range of problems on noisy intermediate-scale quantum
(NISQ) devices. Testing these algorithms on relevant hardware is crucial to
investigate the effect of noise and imperfections and to assess their practical
value. Here, we implement a variational algorithm designed for optimized
parameter estimation on a continuous variable platform based on squeezed light,
a key component for high-precision optical phase estimation. We investigate the
ability of the algorithm to identify the optimal metrology process, including
the optimization of the probe state and measurement strategy for small-angle
optical phase sensing. Two different optimization strategies are employed, the
first being a gradient descent optimizer using Gaussian parameter shift rules
to estimate the gradient of the cost function directly from the measurements.
The second strategy involves a gradient-free Bayesian optimizer, fine-tuning
the system using the same cost function and trained on the data acquired
through the gradient-dependent algorithm. We find that both algorithms can
steer the experiment towards the optimal metrology process. However, they find
minima not predicted by our theoretical model, demonstrating the strength of
variational algorithms in modelling complex noise environments, a non-trivial
task.
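To make the first optimization strategy concrete, below is a minimal classical sketch of a gradient-descent loop driven by shift-rule gradient estimates obtained from noisy cost evaluations. It is not the authors' code: the function measured_cost, its toy landscape, the parameter names and the pi/2 shift are illustrative stand-ins for the experimental cost (which would be computed from homodyne measurements of the squeezed probe) and for the Gaussian parameter-shift rules used in the paper, which have their own shifts and weights for continuous-variable gates.
```python
# Minimal sketch (not the authors' code) of the gradient-descent strategy from
# the abstract: the cost is only available through noisy measurements, and its
# gradient is estimated with a symmetric shift rule before each update.

import numpy as np

rng = np.random.default_rng(seed=1)

def measured_cost(params, shots=200):
    """Hypothetical stand-in for the experimental cost function.

    In the experiment the cost would come from homodyne measurements of the
    squeezed probe after the unknown phase shift; here a smooth toy landscape
    plus shot noise keeps the sketch runnable.
    """
    squeeze, meas_angle = params
    ideal = 1.0 + np.cos(squeeze) * np.sin(meas_angle) ** 2 + 0.1 * squeeze ** 2
    return ideal + rng.normal(0.0, 1.0 / np.sqrt(shots))

def shift_rule_gradient(cost, params, shift=np.pi / 2):
    """Estimate the gradient from shifted cost evaluations.

    This is a generic symmetric shift rule; the Gaussian parameter-shift rules
    in the paper use shifts and weights specific to CV gates.
    """
    grad = np.zeros_like(params)
    for k in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[k] += shift
        minus[k] -= shift
        grad[k] = (cost(plus) - cost(minus)) / (2.0 * np.sin(shift))
    return grad

params = np.array([0.8, 0.3])   # e.g. a squeezing phase and a measurement angle
learning_rate = 0.05

for step in range(200):
    params -= learning_rate * shift_rule_gradient(measured_cost, params)

print("optimized parameters:", params, "cost:", measured_cost(params))
```
The second strategy described in the abstract would replace the update loop with a gradient-free Bayesian optimizer that fits a surrogate model to the same measured cost values (for instance with an off-the-shelf library such as scikit-optimize) instead of estimating gradients.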
Related papers
- Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits [4.112419132722306]
This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm.
Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting amenable to analysis.
arXiv Detail & Related papers (2023-10-31T18:55:45Z)
- Variational quantum algorithm for experimental photonic multiparameter estimation [0.0]
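The random coordinate descent idea summarized above lends itself to a short illustration: each iteration picks one circuit parameter at random, estimates only that partial derivative from two noisy cost evaluations, and updates that single coordinate. The sketch below is illustrative rather than the paper's algorithm; noisy_cost is a hypothetical stand-in for a shot-noise-limited circuit cost.
```python
# Illustrative sketch (not from the paper) of random coordinate descent on a
# noisy, measurement-based cost: each iteration updates a single randomly
# chosen parameter using a partial derivative estimated from two evaluations.

import numpy as np

rng = np.random.default_rng(seed=0)

def noisy_cost(theta, shots=100):
    # Toy stand-in for a parameterized-circuit cost estimated from finite shots.
    return np.sum(np.sin(theta) ** 2) + rng.normal(0.0, 1.0 / np.sqrt(shots))

theta = rng.uniform(-np.pi, np.pi, size=6)
lr, shift = 0.1, np.pi / 2

for _ in range(500):
    k = rng.integers(len(theta))                 # random coordinate
    plus, minus = theta.copy(), theta.copy()
    plus[k] += shift
    minus[k] -= shift
    partial = (noisy_cost(plus) - noisy_cost(minus)) / (2.0 * np.sin(shift))
    theta[k] -= lr * partial                     # update only that coordinate

print("final cost estimate:", noisy_cost(theta))
```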
We develop a variational approach to efficiently optimize a quantum phase sensor operating in a noisy environment.
By exploiting the high reconfigurability of an integrated photonic device, we implement a hybrid quantum-classical feedback loop.
Our experimental results reveal significant improvements in terms of estimation accuracy and noise robustness.
arXiv Detail & Related papers (2023-08-04T18:01:14Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Meta-Learning Digitized-Counterdiabatic Quantum Optimization [3.0638256603183054]
We tackle the problem of finding suitable initial parameters for variational optimization by employing a meta-learning technique using recurrent neural networks.
We investigate this technique with the recently proposed digitized-counterdiabatic quantum approximate optimization algorithm (DC-QAOA).
The combination of meta-learning and DC-QAOA enables us to find optimal initial parameters for different models, such as the MaxCut problem and the Sherrington-Kirkpatrick model.
arXiv Detail & Related papers (2022-06-20T18:57:50Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows that these algorithms match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Performance comparison of optimization methods on variational quantum algorithms [2.690135599539986]
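A toy sketch of the warm-start idea in the bilevel paper above: the inner problem (here a ridge regression, chosen purely for illustration) is only partially re-solved at each outer step, starting from the previous solution, and the hypergradient is obtained by implicit differentiation. The exact linear solve below would be replaced by stochastic approximations (e.g. Neumann or conjugate-gradient iterations) in the setting the paper analyzes; all variable names are hypothetical.
```python
# Deterministic toy sketch of warm-started bilevel optimization with an
# implicit-differentiation hypergradient; not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(80, 10)), rng.normal(size=80)
X_val, y_val = rng.normal(size=(40, 10)), rng.normal(size=40)

def inner_grad(w, lam):
    # Gradient of the inner (training) objective: ridge regression.
    return X_tr.T @ (X_tr @ w - y_tr) + lam * w

lam = 1.0                      # outer variable (regularization strength)
w = np.zeros(X_tr.shape[1])    # inner variable, warm-started across outer steps
outer_lr, inner_lr = 0.05, 1e-3

for outer_step in range(100):
    # Warm start: continue the inner optimization from the previous solution
    # instead of re-solving from scratch (the amortization idea).
    for _ in range(20):
        w -= inner_lr * inner_grad(w, lam)

    # Implicit differentiation: solve H v = grad_outer with the inner Hessian H,
    # then d(outer loss)/d(lam) = -v . d(inner gradient)/d(lam) = -v . w
    grad_outer = X_val.T @ (X_val @ w - y_val)
    H = X_tr.T @ X_tr + lam * np.eye(len(w))
    v = np.linalg.solve(H, grad_outer)
    hypergrad = -v @ w

    lam = max(1e-6, lam - outer_lr * hypergrad)

print("tuned regularization strength:", lam)
```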
Variational quantum algorithms (VQAs) offer a promising path towards using near-term quantum hardware for applications in academic and industrial research.
We study the performance of four commonly used gradient-free optimization methods: SLSQP, COBYLA, CMA-ES, and SPSA.
arXiv Detail & Related papers (2021-11-26T12:13:20Z)
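Of the four gradient-free methods compared above, SPSA is the simplest to sketch: every parameter is perturbed simultaneously along a random ±1 direction, and two noisy cost evaluations per iteration yield a descent direction. The snippet below is a generic illustration with standard gain schedules, not the benchmark code from the paper; noisy_cost is again a hypothetical stand-in.
```python
# Minimal SPSA sketch: simultaneous random perturbation of all parameters,
# two noisy cost evaluations per iteration.

import numpy as np

rng = np.random.default_rng(seed=2)

def noisy_cost(theta, shots=100):
    # Toy stand-in for a VQA cost estimated from a finite number of shots.
    return np.sum((theta - 0.5) ** 2) + rng.normal(0.0, 1.0 / np.sqrt(shots))

theta = rng.uniform(-1.0, 1.0, size=8)

for k in range(1, 301):
    a_k = 0.2 / k ** 0.602      # standard SPSA gain schedules
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
    g_hat = (noisy_cost(theta + c_k * delta)
             - noisy_cost(theta - c_k * delta)) / (2.0 * c_k * delta)
    theta -= a_k * g_hat

print("estimated optimum:", theta, "cost:", noisy_cost(theta))
```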
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient stochastic gradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of noisy intermediate-scale quantum (NISQ) devices.
We propose a strategy for training the ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\widetilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm provably achieves the best-available convergence rate for objectives without the Polyak-Łojasiewicz (PL) condition while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including all listed content) and is not responsible for any consequences of its use.