Improving robustness of quantum feedback control with reinforcement learning
- URL: http://arxiv.org/abs/2401.17190v1
- Date: Tue, 30 Jan 2024 17:20:37 GMT
- Title: Improving robustness of quantum feedback control with reinforcement learning
- Authors: Manuel Guatto, Gian Antonio Susto, Francesco Ticozzi
- Abstract summary: Reinforcement learning approaches are used to derive a feedback law for state preparation of a desired state in a target system.
We focus on the robustness of the obtained strategies with respect to different types and amounts of noise.
The possibility of effective off-line training of robust controllers promises significant advantages towards practical implementation.
- Score: 5.236286830498895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obtaining reliable state preparation protocols is a key step towards
practical implementation of many quantum technologies, and one of the main
tasks in quantum control. In this work, different reinforcement learning
approaches are used to derive a feedback law for state preparation of a desired
state in a target system. In particular, we focus on the robustness of the
obtained strategies with respect to different types and amounts of noise.
Comparing the results indicates that the learned controls are more robust to
unmodeled perturbations than a simple feedback strategy based on optimized
population transfer, and that controllers trained on a simulated nominal model
retain the same advantages displayed by controllers trained on real data. The
possibility of effective off-line training of robust controllers promises
significant advantages towards practical implementation.
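The abstract describes training feedback laws for state preparation and then stress-testing them under noise. As a minimal sketch of what such a robustness evaluation can look like — assuming a toy single-qubit Bloch-vector model, with a hand-coded feedback law standing in for the trained RL policy (none of this is the paper's actual model or code):

```python
import numpy as np

# Toy illustration: a feedback law steers a qubit's Bloch vector (x, z)
# toward the target state |1> (z = -1) while unmodeled dephasing and
# measurement-back-action noise act on it. A trained RL policy would
# replace the hand-coded `feedback_angle` below.

def feedback_angle(x, z):
    # Angle of a y-axis rotation mapping the current Bloch vector onto
    # the -z axis (the target direction).
    return np.pi - np.arctan2(x, z)

def run_episode(noise_std, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x, z = 0.0, 1.0                       # start in |0> (z = +1)
    for _ in range(steps):
        theta = feedback_angle(x, z)      # feedback from the observed state
        x, z = (x * np.cos(theta) + z * np.sin(theta),
                -x * np.sin(theta) + z * np.cos(theta))
        x *= np.exp(-noise_std)           # crude dephasing: coherence decays
        z = np.clip(z + rng.normal(0.0, 0.02 * noise_std), -1.0, 1.0)
    return (1.0 - z) / 2.0                # fidelity with the target |1>
```

Sweeping `noise_std` over a grid and comparing the resulting fidelities across controllers is the kind of robustness comparison the abstract refers to.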
Related papers
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach large language models (LLMs) unlearning via gradient ascent (GA)
Despite their simplicity and efficiency, we find that GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- Conformal Policy Learning for Sensorimotor Control Under Distribution Shifts [61.929388479847525]
This paper focuses on the problem of detecting and reacting to changes in the distribution of a sensorimotor controller's observables.
The key idea is the design of switching policies that can take conformal quantiles as input.
We show how to design such policies by using conformal quantiles to switch between base policies with different characteristics.
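The switching idea can be sketched in a few lines. This is an illustrative reconstruction, not the paper's API: calibrate a nonconformity threshold offline via the standard split-conformal quantile, then fall back to a conservative base policy whenever an observation's score exceeds it:

```python
import numpy as np

# Hypothetical sketch: a conformal quantile, calibrated offline, acts as
# the switching threshold between two base policies at run time.

def conformal_quantile(calibration_scores, alpha=0.1):
    # Split-conformal quantile with the usual finite-sample correction:
    # the ceil((n+1)(1-alpha))-th smallest calibration score.
    n = len(calibration_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(calibration_scores)[min(k, n) - 1])

def switching_policy(obs_score, threshold, nominal, fallback):
    # Use the nominal policy in-distribution; switch to the conservative
    # fallback when the observation looks out-of-distribution.
    return nominal if obs_score <= threshold else fallback
```

By the conformal guarantee, in-distribution observations exceed the threshold with probability at most roughly `alpha`, which bounds how often the fallback is triggered spuriously.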
arXiv Detail & Related papers (2023-11-02T17:59:30Z)
- Model predictive control-based value estimation for efficient reinforcement learning [6.8237783245324035]
We design an improved reinforcement learning method based on model predictive control that models the environment through a data-driven approach.
Based on the learned environment model, it performs multi-step prediction to estimate the value function and optimize the policy.
The method demonstrates higher learning efficiency, faster convergence of strategies toward the local optimum, and a smaller experience replay buffer requirement.
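The multi-step value estimation described above can be sketched as follows. This is an illustrative toy, not the paper's algorithm: a learned dynamics model (here replaced by known toy dynamics) is rolled out for H steps, predicted rewards are summed, and a terminal value estimate bootstraps the tail:

```python
# Toy sketch: H-step model-based rollout for value estimation,
# V(s) ~ sum_t gamma^t * r_t + gamma^H * V_term(s_H).
# `model_step` stands in for a data-driven learned model.

GAMMA = 0.9

def model_step(state, action):
    # Stand-in dynamics: deterministic 1-D system, reward penalising
    # distance of the next state from the origin.
    next_state = 0.8 * state + action
    reward = -abs(next_state)
    return next_state, reward

def mpc_value_estimate(state, policy, terminal_value, horizon=5):
    value, discount = 0.0, 1.0
    for _ in range(horizon):
        state, reward = model_step(state, policy(state))
        value += discount * reward
        discount *= GAMMA
    return value + discount * terminal_value(state)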
arXiv Detail & Related papers (2023-10-25T13:55:14Z)
- Model-based adaptation for sample efficient transfer in reinforcement learning control of parameter-varying systems [1.8799681615947088]
We leverage ideas from model-based control to address the sample efficiency problem of reinforcement learning algorithms.
We demonstrate that our approach is more sample-efficient than fine-tuning with reinforcement learning alone.
arXiv Detail & Related papers (2023-05-20T10:11:09Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- Stochastic optimization for learning quantum state feedback control [16.4432244108711]
We present a general framework for training deep feedback networks for open quantum systems with quantum nondemolition measurement.
We demonstrate that this method is efficient due to inherent parallelizability, robust to open system interactions, and outperforms landmark state feedback control results in simulation.
arXiv Detail & Related papers (2021-11-18T19:00:06Z)
- Adaptive control of a mechatronic system using constrained residual reinforcement learning [0.0]
We propose a simple, practical and intuitive approach to improve the performance of a conventional controller in uncertain environments.
Our approach is motivated by the observation that conventional controllers in industrial motion control value robustness over adaptivity to deal with different operating conditions.
arXiv Detail & Related papers (2021-10-06T08:13:05Z)
- Reinforcement learning-enhanced protocols for coherent population-transfer in three-level quantum systems [50.591267188664666]
We deploy a combination of reinforcement learning-based approaches and more traditional optimization techniques to identify optimal protocols for population transfer.
Our approach is able to explore the space of possible control protocols to reveal the existence of efficient protocols.
The new protocols that we identify are robust against both energy losses and dephasing.
arXiv Detail & Related papers (2021-09-02T14:17:30Z)
- Model-Free Quantum Control with Reinforcement Learning [0.0]
We propose a circuit-based approach for training a reinforcement learning agent on quantum control tasks in a model-free way.
We show how to reward the learning agent using measurements of experimentally available observables.
This approach significantly outperforms widely used model-free methods in terms of sample efficiency.
arXiv Detail & Related papers (2021-04-29T17:53:26Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
arXiv Detail & Related papers (2020-07-14T21:10:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.