Stochastic optimization for learning quantum state feedback control
- URL: http://arxiv.org/abs/2111.09896v1
- Date: Thu, 18 Nov 2021 19:00:06 GMT
- Title: Stochastic optimization for learning quantum state feedback control
- Authors: Ethan N. Evans, Ziyi Wang, Adam G. Frim, Michael R. DeWeese, Evangelos
A. Theodorou
- Abstract summary: We present a general framework for training deep feedback networks for open quantum systems with quantum nondemolition measurement.
We demonstrate that this method is efficient due to inherent parallelizability, robust to open system interactions, and outperforms landmark state feedback control results in simulation.
- Score: 16.4432244108711
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High fidelity state preparation represents a fundamental challenge in the
application of quantum technology. While the majority of optimal control
approaches use feedback to improve the controller, the controller itself often
does not incorporate explicit state dependence. Here, we present a general
framework for training deep feedback networks for open quantum systems with
quantum nondemolition measurement that allows a variety of system and control
structures that are prohibitive by many other techniques and can in effect
react to unmodeled effects through nonlinear filtering. We demonstrate that
this method is efficient due to inherent parallelizability, robust to open
system interactions, and outperforms landmark state feedback control results in
simulation.
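The scheme the abstract describes (a deep feedback network trained over many parallel measured trajectories) can be illustrated with a toy sketch: a single qubit on the Bloch sphere, a tiny feedback network mapping the (estimated) state to a control amplitude, and gradient-free SPSA updates standing in for the paper's actual stochastic optimization. All dynamics, network sizes, and hyperparameters below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(bloch, u, dt=0.05, noise=0.02):
    """One Euler step of controlled qubit dynamics on the Bloch sphere.
    Control u rotates the state about the y-axis; the additive noise is a
    crude stand-in for measurement backaction in the stochastic dynamics."""
    x, y, z = bloch.T
    b = np.stack([x + u * z * dt, y, z - u * x * dt], axis=1)
    b += noise * np.sqrt(dt) * rng.standard_normal(b.shape)
    # keep states inside the Bloch ball
    norms = np.linalg.norm(b, axis=1, keepdims=True)
    return b / np.maximum(norms, 1.0)

def policy(bloch, w):
    """Tiny feedback network: control amplitude from the current state."""
    h = np.tanh(bloch @ w[:9].reshape(3, 3))
    return h @ w[9:12]

def fidelity(bloch, target=np.array([0.0, 0.0, 1.0])):
    return 0.5 * (1.0 + bloch @ target)  # pure-state qubit fidelity

def rollout(w, n_traj=256, n_steps=40):
    """Mean final fidelity over a batch of parallel trajectories."""
    b = np.tile(np.array([0.0, 0.0, -1.0]), (n_traj, 1))  # start in |1>
    for _ in range(n_steps):
        b = step(b, policy(b, w))
    return fidelity(b).mean()

# SPSA: a simple gradient-free stochastic optimizer standing in for the
# deep-network training described in the abstract; trajectories in each
# rollout are vectorized, illustrating the inherent parallelizability.
w = 0.1 * rng.standard_normal(12)
eps, lr = 0.05, 0.2
for it in range(200):
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    g = (rollout(w + eps * delta) - rollout(w - eps * delta)) / (2 * eps) * delta
    w += lr * g

print(f"mean final fidelity: {rollout(w):.3f}")
```

The batched Bloch-vector array is what makes the method cheap to parallelize: every trajectory in a rollout is advanced by the same vectorized operations.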
Related papers
- FOCQS: Feedback Optimally Controlled Quantum States [0.0]
Feedback-based quantum algorithms, such as FALQON, avoid fine-tuning problems but at the cost of additional circuit depth and a lack of convergence guarantees.
We develop an analytic framework that uses this feedback to perturbatively update previous control layers.
This perturbative methodology, which we call Feedback Optimally Controlled Quantum States (FOCQS), can be used to improve the results of feedback-based algorithms.
arXiv Detail & Related papers (2024-09-23T18:00:06Z)
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that an adaptive control resolution in combination with value decomposition yields simple critic-only algorithms that yield surprisingly strong performance on continuous control tasks.
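The coarse-to-fine idea can be sketched in a few lines. The level scheme below is an illustrative assumption, not the paper's exact construction: each resolution level keeps every action from the previous level, so values learned at coarse resolution can be carried over when the action set grows.

```python
import numpy as np

def action_grid(low, high, level):
    """Discretized 1-D action set at resolution `level`:
    2**level + 1 evenly spaced actions, so each level is a
    strict refinement of the one before it."""
    return np.linspace(low, high, 2**level + 1)

coarse = action_grid(-1.0, 1.0, 1)  # 3 actions: [-1., 0., 1.]
fine = action_grid(-1.0, 1.0, 3)    # 9 actions
# every coarse action survives the refinement
print(set(coarse).issubset(set(fine)))  # True
```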
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
- Quantum control by the environment: Turing uncomputability, Optimization over Stiefel manifolds, Reachable sets, and Incoherent GRAPE [56.47577824219207]
In many practical situations, the controlled quantum systems are open, interacting with the environment.
In this note, we briefly review some results on control of open quantum systems using the environment as a resource.
arXiv Detail & Related papers (2024-03-20T10:09:13Z)
- The Quantum Cartpole: A benchmark environment for non-linear reinforcement learning [0.0]
We show how a trade-off between state estimation and controllability arises.
We demonstrate the feasibility of using transfer learning to develop a quantum control agent trained via reinforcement learning.
arXiv Detail & Related papers (2023-11-01T18:02:42Z)
- Robust optimization for quantum reinforcement learning control using partial observations [10.975734427172231]
Full observation of the quantum state is experimentally infeasible because the number of required quantum measurements scales exponentially with the number of qubits.
This control scheme is compatible with near-term quantum devices, where the noise is prevalent.
It has been shown that high-fidelity state control can be achieved even if the noise amplitude is at the same level as the control amplitude.
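The exponential-scaling claim can be made concrete: full tomography of an n-qubit state requires estimating on the order of 4**n - 1 independent Pauli expectation values, which is exactly the cost a partial-observation scheme avoids.

```python
# observables needed for full n-qubit state tomography: 4**n - 1
for n in (1, 2, 4, 8):
    print(n, 4**n - 1)
# 1 qubit needs 3; 8 qubits already need 65535
```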
arXiv Detail & Related papers (2022-06-29T06:30:35Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
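A toy stand-in for this characterization can be sketched from the sparsity pattern alone: treat a state variable as relevant if it can influence the output through a chain of nonzero entries of the dynamics matrix. The function name and the graph-reachability criterion are illustrative assumptions, not the paper's exact result.

```python
import numpy as np

def relevant_states(A, C):
    """Toy relevance test from sparsity alone: state i is relevant if it
    can reach the output y = C x through nonzero entries of A.
    (A crude stand-in for the paper's characterization.)"""
    adj = np.abs(np.asarray(A, dtype=float)) > 0
    # states the output reads directly
    frontier = set(np.nonzero(np.abs(np.asarray(C, dtype=float)).sum(axis=0) > 0)[0])
    relevant = set(frontier)
    while frontier:
        nxt = set()
        for j in frontier:
            # state i feeds state j when A[j, i] is nonzero
            nxt |= set(np.nonzero(adj[j])[0]) - relevant
        relevant |= nxt
        frontier = nxt
    return sorted(relevant)

A = [[1, 0, 0],
     [0, 1, 1],
     [0, 0, 1]]
C = [[1, 0, 0]]               # output only reads x0
print(relevant_states(A, C))  # [0]: x1 and x2 never affect the output
```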
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Model-Free Quantum Control with Reinforcement Learning [0.0]
We propose a circuit-based approach for training a reinforcement learning agent on quantum control tasks in a model-free way.
We show how to reward the learning agent using measurements of experimentally available observables.
This approach significantly outperforms widely used model-free methods in terms of sample efficiency.
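How a reward might be built from experimentally available observables can be sketched for a single qubit. The formula below is the standard pure-target fidelity F = (1 + n . r) / 2 reconstructed from measured Pauli expectation values; treating it as the paper's exact reward is an assumption.

```python
import numpy as np

def reward_from_observables(exp_x, exp_y, exp_z, target_bloch):
    """Fidelity-style reward for one qubit from measured Pauli
    expectations <X>, <Y>, <Z>: F = (1 + n . r) / 2, where n is the
    Bloch vector of the pure target state."""
    r = np.array([exp_x, exp_y, exp_z])
    return 0.5 * (1.0 + np.dot(np.asarray(target_bloch, dtype=float), r))

# a perfectly prepared |0> (Bloch vector (0, 0, 1)) earns reward 1.0
print(reward_from_observables(0.0, 0.0, 1.0, [0, 0, 1]))  # 1.0
# the maximally mixed state earns 0.5
print(reward_from_observables(0.0, 0.0, 0.0, [0, 0, 1]))  # 0.5
```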
arXiv Detail & Related papers (2021-04-29T17:53:26Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
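A min-norm stabilizing controller of the kind described above solves a small conic program at each state; for a single control Lyapunov function constraint, and without the Gaussian-process uncertainty terms that motivate the SOCP formulation, it even has a closed form. The constraint shape and numbers below are illustrative assumptions, not the paper's GP-CLF-SOCP.

```python
import numpy as np

def min_norm_clf_control(LfV, LgV, gamma_V):
    """Closed-form min-norm control u for the single CLF constraint
        LfV + LgV . u + gamma_V <= 0,
    where gamma_V = gamma * V(x). If the drift already satisfies the
    constraint, u = 0; otherwise project the origin onto the
    constraint boundary (the min-norm solution)."""
    LgV = np.asarray(LgV, dtype=float)
    slack = LfV + gamma_V
    if slack <= 0:
        return np.zeros_like(LgV)
    return -slack * LgV / np.dot(LgV, LgV)

u = min_norm_clf_control(1.0, [2.0, -1.0], 0.5)
print(u)  # smallest-norm u making 1.0 + [2, -1] . u + 0.5 <= 0
```

The GP-based formulation in the paper replaces the known LfV and LgV terms with uncertain, kernel-derived quantities, which is what turns this projection into a second-order cone program.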
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.