Realizing a deep reinforcement learning agent discovering real-time
feedback control strategies for a quantum system
- URL: http://arxiv.org/abs/2210.16715v1
- Date: Sun, 30 Oct 2022 01:31:20 GMT
- Title: Realizing a deep reinforcement learning agent discovering real-time
feedback control strategies for a quantum system
- Authors: Kevin Reuer, Jonas Landgraf, Thomas Fösel, James O'Sullivan, Liberto
Beltrán, Abdulkadir Akin, Graham J. Norris, Ants Remm, Michael Kerschbaum,
Jean-Claude Besse, Florian Marquardt, Andreas Wallraff, Christopher Eichler
- Abstract summary: We develop a latency-optimized deep neural network on a field-programmable gate array (FPGA).
We demonstrate its use to efficiently initialize a superconducting qubit into a target state.
We study the agent's performance for strong and weak measurements, and for three-level readout, and compare with simple strategies based on thresholding.
- Score: 3.598535368045164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To realize the full potential of quantum technologies, finding good
strategies to control quantum information processing devices in real time
becomes increasingly important. Usually these strategies require a precise
understanding of the device itself, which is generally not available.
Model-free reinforcement learning circumvents this need by discovering control
strategies from scratch without relying on an accurate description of the
quantum system. Furthermore, important tasks like state preparation, gate
teleportation and error correction need feedback at time scales much shorter
than the coherence time, which for superconducting circuits is in the
microsecond range. Developing and training a deep reinforcement learning agent
able to operate in this real-time feedback regime has been an open challenge.
Here, we have implemented such an agent in the form of a latency-optimized deep
neural network on a field-programmable gate array (FPGA). We demonstrate its
use to efficiently initialize a superconducting qubit into a target state. To
train the agent, we use model-free reinforcement learning that is based solely
on measurement data. We study the agent's performance for strong and weak
measurements, and for three-level readout, and compare with simple strategies
based on thresholding. This demonstration motivates further research towards
adoption of reinforcement learning for real-time feedback control of quantum
devices and more generally any physical system requiring learnable low-latency
feedback control.
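To make the control loop concrete, the following is a minimal Python/PyTorch sketch of the general idea: a small feed-forward policy network maps a record of noisy measurement outcomes to a discrete feedback action (idle, pi-pulse, or terminate) and is trained with a model-free, REINFORCE-style policy gradient, with a simple thresholding strategy included for comparison. The toy measurement model (ToyQubitEnv), network size, reward shaping, and all hyperparameters are illustrative assumptions; this is not the authors' FPGA implementation, architecture, or training procedure.

```python
# Minimal, illustrative sketch (NOT the authors' implementation): a tiny
# feed-forward policy trained with model-free policy gradients to reset a
# toy qubit to its ground state from noisy measurement records, plus a
# thresholding baseline. All models, sizes, and rewards are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

RECORD_LEN = 5      # number of past measurement outcomes shown to the agent
MAX_STEPS = 10      # hard cap on feedback rounds per episode
N_ACTIONS = 3       # 0: idle, 1: pi-pulse (flip), 2: terminate


class ToyQubitEnv:
    """Crude stand-in for repeated weak measurements of a qubit that starts
    in a thermal mixture; the readout signal is a noisy copy of the state."""

    def __init__(self, p_excited=0.3, readout_noise=0.8):
        self.p_excited = p_excited
        self.noise = readout_noise

    def reset(self):
        self.state = 1 if torch.rand(1).item() < self.p_excited else 0  # 1 = |e>
        self.record = torch.zeros(RECORD_LEN)
        return self.record.clone()

    def step(self, action):
        if action == 1:                       # pi-pulse flips the qubit
            self.state = 1 - self.state
        done = (action == 2)
        signal = (2 * self.state - 1) + self.noise * torch.randn(1).item()
        self.record = torch.roll(self.record, -1)
        self.record[-1] = signal              # append newest measurement outcome
        reward = -0.02                        # per-step cost rewards fast resets
        if done and self.state == 0:
            reward += 1.0                     # success: qubit left in |g>
        return self.record.clone(), reward, done


# Deliberately small network, echoing the idea of a latency-optimized policy.
policy = nn.Sequential(
    nn.Linear(RECORD_LEN, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)


def run_episode():
    """Roll out one episode, returning log-probabilities and rewards."""
    env = ToyQubitEnv()
    obs = env.reset()
    log_probs, rewards = [], []
    for _ in range(MAX_STEPS):
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, done = env.step(action.item())
        rewards.append(reward)
        if done:
            break
    return log_probs, rewards


# REINFORCE: ascend the gradient of the expected episode return.
for episode in range(2000):
    log_probs, rewards = run_episode()
    episode_return = sum(rewards)
    loss = -episode_return * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


def threshold_baseline(threshold=0.0):
    """Simple comparison strategy: average a few measurements, then apply a
    pi-pulse only if the averaged signal indicates the excited state."""
    env = ToyQubitEnv()
    obs = env.reset()
    for _ in range(RECORD_LEN):
        obs, _, _ = env.step(0)               # idle while filling the record
    obs, _, _ = env.step(1 if obs.mean().item() > threshold else 0)
    _, reward, _ = env.step(2)                # terminate and read off the reward
    return reward
```

In the experiment itself, a trained network of this kind would additionally have to be compiled into a latency-optimized form on the FPGA so that inference completes well within the qubit's coherence time; the sketch only illustrates the measurement-conditioned action selection and the model-free training loop.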
Related papers
- From Easy to Hard: Tackling Quantum Problems with Learned Gadgets For Real Hardware [0.0]
Reinforcement learning has proven to be a powerful approach, but many limitations remain due to the exponential scaling of the space of possible operations on qubits.
We develop an algorithm that automatically learns composite gates ("gadgets") and adds them as additional actions to the reinforcement learning agent to facilitate the search.
We show that with this gadget reinforcement learning (GRL) approach we can find very compact PQCs that reduce the error in estimating the ground state of the TFIM by up to $10^7$-fold.
arXiv Detail & Related papers (2024-10-31T22:02:32Z)
- Controlling nonergodicity in quantum many-body systems by reinforcement learning [0.0]
We develop a model-free, deep reinforcement learning framework for quantum nonergodicity control.
We use the paradigmatic one-dimensional tilted Fermi-Hubbard system to demonstrate that the DRL agent can efficiently learn the quantum many-body system.
The continuous control protocols and observations are experimentally feasible.
arXiv Detail & Related papers (2024-08-21T20:55:44Z)
- Reaction dynamics with qubit-efficient momentum-space mapping [42.408991654684876]
We study quantum algorithms for response functions, relevant for describing different reactions governed by linear response.
We consider a qubit-efficient mapping on a lattice, which can be efficiently performed using momentum-space basis states.
arXiv Detail & Related papers (2024-03-30T00:21:46Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strengths of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- Quantum Control based on Deep Reinforcement Learning [1.8710230264817362]
In this thesis, we consider two simple but typical control problems and apply deep reinforcement learning to them.
We show that reinforcement learning achieves a performance comparable to the optimal control for the quadratic case.
This is the first time deep reinforcement learning is applied to quantum control problems.
arXiv Detail & Related papers (2022-12-14T18:12:26Z)
- Self-Correcting Quantum Many-Body Control using Reinforcement Learning with Tensor Networks [0.0]
We present a novel framework for efficiently controlling quantum many-body systems based on reinforcement learning (RL).
We show that RL agents are capable of finding universal controls, of learning how to optimally steer previously unseen many-body states, and of adapting control protocols on the fly when the quantum dynamics is subject to perturbations.
arXiv Detail & Related papers (2022-01-27T20:14:09Z)
- Quantum Annealing Formulation for Binary Neural Networks [40.99969857118534]
In this work, we explore binary neural networks, which are lightweight yet powerful models typically intended for resource-constrained devices.
We devise a quadratic unconstrained binary optimization formulation for the training problem.
While the problem is intractable, i.e., the cost to estimate the binary weights scales exponentially with network size, we show how the problem can be optimized directly on a quantum annealer.
arXiv Detail & Related papers (2021-07-05T03:20:54Z)
- On exploring the potential of quantum auto-encoder for learning quantum systems [60.909817434753315]
We devise three effective QAE-based learning protocols to address three learning problems that are computationally hard for classical methods.
Our work sheds new light on developing advanced quantum learning algorithms to accomplish hard quantum physics and quantum information processing tasks.
arXiv Detail & Related papers (2021-06-29T14:01:40Z)
- Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z)
- Experimental quantum speed-up in reinforcement learning agents [0.17849902073068336]
Reinforcement learning (RL) is an important paradigm within artificial intelligence (AI).
We present an RL experiment where the learning of an agent is boosted by utilizing a quantum communication channel with the environment.
We implement this learning protocol on a compact and fully tunable integrated nanophotonic processor.
arXiv Detail & Related papers (2021-03-10T19:01:12Z)
- Probing quantum information propagation with out-of-time-ordered correlators [41.12790913835594]
Small-scale quantum information processors hold the promise to efficiently emulate many-body quantum systems.
Here, we demonstrate the measurement of out-of-time-ordered correlators (OTOCs).
A central requirement for our experiments is the ability to coherently reverse time evolution.
arXiv Detail & Related papers (2021-02-23T15:29:08Z)