Learning feedback control strategies for quantum metrology
- URL: http://arxiv.org/abs/2110.15080v2
- Date: Mon, 18 Apr 2022 08:37:26 GMT
- Title: Learning feedback control strategies for quantum metrology
- Authors: Alessio Fallani, Matteo A. C. Rossi, Dario Tamascelli, Marco G. Genoni
- Abstract summary: We exploit reinforcement learning techniques to devise feedback control strategies achieving increased estimation precision.
We show that the feedback control determined by the neural network greatly surpasses, in the long-time limit, the performance of both the "no-control" strategy and the standard "open-loop control" strategy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of frequency estimation for a single bosonic field
evolving under a squeezing Hamiltonian and continuously monitored via homodyne
detection. In particular, we exploit reinforcement learning techniques to
devise feedback control strategies achieving increased estimation precision. We
show that the feedback control determined by the neural network greatly
surpasses, in the long-time limit, the performance of both the "no-control"
strategy and the standard "open-loop control" strategy, which we consider as
benchmarks. Indeed, we observe that the devised strategy optimizes this
nontrivial estimation problem by preparing a large fraction of trajectories
corresponding to more sensitive quantum conditional states.
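The setting lends itself to a compact numerical illustration. Below is a minimal Python sketch of the kind of loop involved: a linear-Gaussian (Kalman-Bucy) filter tracks the conditional state of a continuously monitored mode while a feedback drive acts on the estimate. It is a classical analogue under assumed drift, noise, and gain parameters (`omega`, `chi`, `kappa`, `K`), not the paper's quantum model or its neural-network policy.

```python
import numpy as np

# Illustrative sketch of the kind of feedback loop the paper studies:
# a linear-Gaussian (Kalman-Bucy) filter tracks the conditional state of
# a continuously monitored mode, and a feedback drive acts on the
# estimate. All matrices, rates, and the fixed linear gain are assumed
# for illustration; the paper instead trains a neural-network policy on
# the quantum conditional state.

rng = np.random.default_rng(0)

omega, chi, kappa = 1.0, 0.3, 0.5   # frequency, squeezing strength, measurement rate (assumed)
dt, n_steps = 1e-3, 20_000

A = np.array([[chi, omega],         # drift: rotation at omega plus squeezing chi (assumed form)
              [-omega, -chi]])
D = kappa * np.eye(2)               # process-noise covariance (assumed)
C = np.array([[np.sqrt(2.0 * kappa), 0.0]])   # continuous readout of the x quadrature
K = 0.5 * np.eye(2)                 # placeholder linear feedback gain (stand-in for the NN policy)

r_true = np.array([1.0, 0.0])       # "true" quadrature means
r_est = np.zeros(2)                 # conditional (filtered) means
sigma = np.eye(2)                   # conditional covariance

for _ in range(n_steps):
    u = -K @ r_est                                    # feedback drive computed from the estimate
    dW = rng.normal(0.0, np.sqrt(dt))                 # measurement-noise increment
    dy = (C @ r_true)[0] * dt + dW                    # continuous measurement record
    innovation = dy - (C @ r_est)[0] * dt
    r_true = (r_true + (A @ r_true + u) * dt
              + np.sqrt(kappa) * rng.normal(0.0, np.sqrt(dt), size=2))
    r_est = r_est + (A @ r_est + u) * dt + (sigma @ C.T)[:, 0] * innovation
    sigma = sigma + (A @ sigma + sigma @ A.T + D
                     - sigma @ C.T @ C @ sigma) * dt  # Riccati update

print("estimation error:", np.linalg.norm(r_true - r_est))
print("conditional covariance:\n", sigma)
```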
Related papers
- Bounding fidelity in quantum feedback control: Theory and applications to Dicke state preparation
We derive an ultimate bound on the steady-state average fidelity achievable via continuous monitoring and feedback control.
We then focus on preparing Dicke states in an atomic ensemble subject to collective damping and dispersive coupling.
arXiv Detail & Related papers (2025-03-24T21:09:37Z)
- FOCQS: Feedback Optimally Controlled Quantum States
Feedback-based quantum algorithms, such as FALQON, avoid fine-tuning problems but at the cost of additional circuit depth and a lack of convergence guarantees.
We develop an analytic framework that perturbatively updates previous control layers.
This perturbative methodology, which we call Feedback Optimally Controlled Quantum States (FOCQS), can be used to improve the results of feedback-based algorithms.
arXiv Detail & Related papers (2024-09-23T18:00:06Z)
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution
In robotics applications, smooth control signals are commonly preferred to reduce system wear and energy consumption.
In this work, we aim to bridge the performance gap between discrete and continuous control by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that adaptive control resolution combined with value decomposition yields simple critic-only algorithms that achieve surprisingly strong performance on continuous control tasks; a toy sketch of the coarse-to-fine idea follows below.
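As a toy illustration of the coarse-to-fine idea (not the paper's Growing Q-Networks algorithm), the following sketch refines a discrete action set between stages of tabular Q-learning on an assumed 1-D task; all parameters are illustrative.

```python
import numpy as np

# Toy sketch of coarse-to-fine action-space growth: a tabular Q-learner
# on a 1-D "drive x to the origin" task, where the discrete action set
# is refined between training stages. Environment, schedule, and
# learning parameters are assumptions for illustration.

rng = np.random.default_rng(1)
states = np.linspace(-2.0, 2.0, 21)             # discretized 1-D state space

def actions_at(level):
    # level 0 -> 3 actions, level 1 -> 5, level 2 -> 9: doubling resolution in [-1, 1]
    return np.linspace(-1.0, 1.0, 2 ** (level + 1) + 1)

for level in range(3):                          # grow the action set stage by stage
    actions = actions_at(level)
    Q = np.zeros((len(states), len(actions)))   # fresh table per stage (GQN instead reuses values)
    for _ in range(1000):
        x = rng.uniform(-2.0, 2.0)
        for _ in range(50):
            s = int(np.abs(states - x).argmin())
            a = rng.integers(len(actions)) if rng.random() < 0.1 else int(Q[s].argmax())
            x = float(np.clip(x + 0.1 * actions[a], -2.0, 2.0))
            s2 = int(np.abs(states - x).argmin())
            Q[s, a] += 0.1 * (-abs(x) + 0.99 * Q[s2].max() - Q[s, a])
    print(f"stage {level}: trained with {len(actions)} actions")
```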
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
- Robust Quantum Control via a Model Predictive Control Strategy
This article presents a robust control strategy for a two-level quantum system subject to bounded uncertainties.
We present theoretical results to guarantee the stability of the TOMPC algorithm.
Numerical simulations demonstrate that, in the presence of uncertainties, our quantum TOMPC algorithm enhances robustness and steers the system to the desired state with high fidelity; a schematic receding-horizon loop is sketched below.
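For readers unfamiliar with the model-predictive idea, here is a schematic receding-horizon loop on an assumed toy linear system. It is a generic sketch, not the paper's TOMPC algorithm for two-level quantum systems; the dynamics, horizon, and cost weights are assumptions.

```python
import numpy as np
from itertools import product

# Schematic receding-horizon (MPC) loop on a toy linear system: at each
# step, search over bounded control sequences for the cheapest predicted
# rollout, apply only the first input, and re-plan.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])              # toy discrete-time dynamics (assumed)
B = np.array([0.0, 0.1])
x_target = np.array([1.0, 0.0])
u_grid = np.linspace(-1.0, 1.0, 5)      # bounded control inputs on a coarse grid
horizon = 4

def rollout_cost(x, u_seq):
    # predicted cost of applying a candidate control sequence
    cost = 0.0
    for u in u_seq:
        x = A @ x + B * u
        cost += np.sum((x - x_target) ** 2) + 0.01 * u ** 2
    return cost

x = np.zeros(2)
for _ in range(50):
    # brute-force search over all sequences (5**4 = 625, fine for a toy)
    best = min(product(u_grid, repeat=horizon), key=lambda seq: rollout_cost(x, seq))
    x = A @ x + B * best[0]             # apply only the first input, then re-plan
print("final state:", x)
```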
arXiv Detail & Related papers (2024-02-12T04:05:54Z)
- The Quantum Cartpole: A benchmark environment for non-linear reinforcement learning
We show how a trade-off between state estimation and controllability arises.
We demonstrate the feasibility of using transfer learning to develop a quantum control agent trained via reinforcement learning.
arXiv Detail & Related papers (2023-11-01T18:02:42Z)
- No-Collapse Accurate Quantum Feedback Control via Conditional State Tomography
The effectiveness of measurement-based feedback control (MBFC) protocols is hampered by the presence of measurement noise.
This work explores a real-time continuous state estimation approach that enables noise-free monitoring of the conditional dynamics.
This approach is particularly useful for reinforcement-learning (RL)-based control, where the RL-agent can be trained with arbitrary conditional averages of observables.
arXiv Detail & Related papers (2023-01-18T01:28:23Z)
- Gradient Ascent Pulse Engineering with Feedback
We introduce feedback-GRAPE, which borrows some concepts from model-free reinforcement learning to incorporate the response to strong measurements.
Our method yields interpretable feedback strategies for state preparation and stabilization in the presence of noise; a minimal open-loop GRAPE sketch follows below.
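For context, here is a minimal open-loop GRAPE-style sketch: gradient ascent (via a simple finite-difference gradient) on piecewise-constant pulse amplitudes for a qubit state transfer. The Hamiltonians and hyperparameters are assumptions, and the measurement-conditioned feedback that distinguishes feedback-GRAPE is omitted.

```python
import numpy as np
from scipy.linalg import expm

# Minimal open-loop GRAPE-style sketch: gradient ascent on piecewise-
# constant pulse amplitudes for a qubit |0> -> |1> transfer, with a
# finite-difference gradient. feedback-GRAPE additionally conditions
# later pulses on measurement outcomes, which this sketch omits.

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
H0, Hc = 0.5 * sz, 0.5 * sx            # drift and control Hamiltonians (assumed)
psi0 = np.array([1.0, 0.0], dtype=complex)
target = np.array([0.0, 1.0], dtype=complex)
n_slices, dt = 20, 0.2

def fidelity(u):
    # propagate through the piecewise-constant pulse sequence
    psi = psi0
    for uk in u:
        psi = expm(-1j * (H0 + uk * Hc) * dt) @ psi
    return abs(target.conj() @ psi) ** 2

u = 0.1 * np.random.default_rng(2).normal(size=n_slices)   # small random seed pulse
eps, lr = 1e-4, 2.0
for _ in range(200):
    grad = np.array([(fidelity(u + eps * np.eye(n_slices)[k]) - fidelity(u)) / eps
                     for k in range(n_slices)])             # finite-difference gradient
    u += lr * grad                                          # ascend the fidelity
print("final fidelity:", round(fidelity(u), 4))
```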
arXiv Detail & Related papers (2022-03-08T18:46:09Z)
- Surveillance Evasion Through Bayesian Reinforcement Learning
We consider a 2D continuous path planning problem with a completely unknown intensity of random termination.
The Observers' surveillance intensity is a priori unknown and has to be learned through repeated path planning.
arXiv Detail & Related papers (2021-09-30T02:29:21Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the research community's recent interest in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Point-Level Temporal Action Localization: Bridging Fully-supervised Proposals to Weakly-supervised Losses
Point-level temporal action localization (PTAL) aims to localize actions in untrimmed videos with only one timestamp annotation for each action instance.
Existing methods adopt the frame-level prediction paradigm to learn from the sparse single-frame labels.
This paper attempts to explore the proposal-based prediction paradigm for point-level annotations.
arXiv Detail & Related papers (2020-12-15T12:11:48Z)
- Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions
This paper investigates the use of reinforcement learning for the robust design of interplanetary trajectories in the presence of severe disturbances.
An open-source implementation of the state-of-the-art Proximal Policy Optimization algorithm is adopted; a generic usage sketch follows the entry below.
The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law.
arXiv Detail & Related papers (2020-08-19T15:22:15Z)
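Since the entry above adopts an open-source PPO implementation without naming it here, the following sketch assumes Stable-Baselines3 as a representative choice, with `Pendulum-v1` standing in for a custom spacecraft-dynamics environment.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Generic PPO usage sketch with Stable-Baselines3 (an assumption: the
# paper does not name its implementation here). Pendulum-v1 is a
# stand-in for a custom low-thrust spacecraft-dynamics environment.

env = gym.make("Pendulum-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=100_000)        # train the closed-loop guidance policy

obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```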