A generic and robust quantum agent inspired by deep meta-reinforcement learning
- URL: http://arxiv.org/abs/2406.07225v1
- Date: Tue, 11 Jun 2024 13:04:30 GMT
- Title: A generic and robust quantum agent inspired by deep meta-reinforcement learning
- Authors: Zibo Miao, Shihui Zhang, Yu Pan, Sibo Tao, Yu Chen,
- Abstract summary: We develop a new training algorithm inspired by deep meta-reinforcement learning (deep meta-RL), which requires significantly less training data.
The trained neural network is adaptive and robust.
Our algorithm can also automatically adjust the number of pulses required to generate the target gate.
- Score: 4.881040823544883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (deep RL) has enabled human-level or superhuman performance in various applications. Recently, deep RL has also been adopted to improve the performance of quantum control. However, a large volume of data is typically required to train the neural network in deep RL, making it inefficient compared with the traditional optimal quantum control method. Here, we thus develop a new training algorithm inspired by deep meta-reinforcement learning (deep meta-RL), which requires significantly less training data. The trained neural network is adaptive and robust. In addition, we have applied the proposed algorithm to design the Hadamard gate and show that, for a wide range of parameters, the infidelity of the obtained gate can be made of the order of 0.0001. Our algorithm can also automatically adjust the number of pulses required to generate the target gate, in contrast to the traditional optimal quantum control method, which typically fixes the number of pulses a priori. These results can pave the way towards constructing a universally robust quantum agent catering to the different demands of quantum technologies.
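To make the setup concrete, here is a minimal sketch, not the authors' implementation, of the kind of pulse-design task described in the abstract: piecewise-constant pulses build up a single-qubit propagator, the reward is the negative gate infidelity with respect to the Hadamard target, and the number of pulses per episode is not fixed in advance. The control Hamiltonian, pulse duration, amplitude range, and tolerance below are assumptions for illustration only.

```python
# A minimal sketch (not the authors' implementation) of the pulse-design task:
# a single-qubit gate is built from piecewise-constant pulses under
# H(a) = omega*sz/2 + a*sx/2, and the RL reward is the negative gate infidelity.
# The episode ends early once the target (Hadamard) is reached, which is how a
# variable number of pulses can emerge; field names and constants are assumptions.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
HADAMARD = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def gate_infidelity(U, V=HADAMARD):
    """1 - |tr(V^dag U)/2|^2, a standard two-level gate infidelity."""
    return 1.0 - abs(np.trace(V.conj().T @ U) / 2.0) ** 2

class PulseGateEnv:
    """Each action is one pulse amplitude applied for a fixed duration dt."""
    def __init__(self, omega=1.0, dt=0.1, max_pulses=20, tol=1e-4):
        self.omega, self.dt, self.max_pulses, self.tol = omega, dt, max_pulses, tol
        self.reset()

    def reset(self):
        self.U = np.eye(2, dtype=complex)   # accumulated evolution
        self.n = 0
        return self._obs()

    def _obs(self):
        # Observation: real/imag parts of the current propagator (a meta-RL
        # agent would also be fed the previous action and reward).
        return np.concatenate([self.U.real.ravel(), self.U.imag.ravel()])

    def step(self, amplitude):
        Hc = 0.5 * self.omega * sz + 0.5 * amplitude * sx
        self.U = expm(-1j * Hc * self.dt) @ self.U
        self.n += 1
        infid = gate_infidelity(self.U)
        done = infid < self.tol or self.n >= self.max_pulses
        return self._obs(), -infid, done

# Usage: random pulses as a stand-in for the trained recurrent policy.
env = PulseGateEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(np.random.uniform(-4, 4))
print("pulses used:", env.n, "final infidelity:", -reward)
```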
Related papers
- Traversing Quantum Control Robustness Landscapes: A New Paradigm for Quantum Gate Engineering [0.0]
We introduce the Quantum Control Robustness Landscape (QCRL), a conceptual framework that maps control parameters to noise susceptibility.
By navigating through the level sets of the QCRL, our Robustness-Invariant Pulse Variation (RIPV) algorithm allows for the variation of control pulses while preserving robustness.
Numerical simulations demonstrate that our single- and two-qubit gates exceed the quantum error correction threshold even with substantial noise.
arXiv Detail & Related papers (2024-12-27T05:56:38Z) - Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
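A toy illustration of the kernel-based prediction idea in the entry above; this is not the paper's construction. The assumptions are a single qubit, Hadamards as the Clifford layer, ⟨Z⟩ as the linear property, and an off-the-shelf RBF kernel ridge regressor.

```python
# Toy illustration (not the paper's construction): learn the linear property
# <Z> of a 1-qubit circuit made of Hadamards (Clifford) and d tunable RZ gates,
# using a classical kernel model fit to randomly sampled parameter vectors.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])

def rz(theta):
    return np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])

def linear_property(thetas):
    """Exact <Z> after alternating H and RZ(theta_k) layers applied to |0>."""
    state = np.array([1.0, 0.0], dtype=complex)
    for t in thetas:
        state = rz(t) @ (H @ state)
    return float(np.real(state.conj() @ Z @ state))

d = 4                                             # number of tunable RZ gates
X = rng.uniform(-np.pi, np.pi, size=(300, d))     # sampled parameter vectors
y = np.array([linear_property(x) for x in X])     # exact labels from simulation

model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-3).fit(X[:200], y[:200])
err = np.mean((model.predict(X[200:]) - y[200:]) ** 2)
print(f"test MSE of the kernel predictor: {err:.2e}")
```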
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Machine-learning-inspired quantum optimal control of nonadiabatic
geometric quantum computation via reverse engineering [3.3216171033358077]
We propose a promising average-fidelity-based machine-learning-inspired method to optimize the control parameters.
We implement a single-qubit gate by cat-state nonadiabatic geometric quantum computation via reverse engineering.
We demonstrate that the neural network possesses the ability to expand the model space.
arXiv Detail & Related papers (2023-09-28T14:36:26Z) - Optimal quantum control via genetic algorithms for quantum state
engineering in driven-resonator mediated networks [68.8204255655161]
We employ a machine learning-enabled approach to quantum state engineering based on evolutionary algorithms.
We consider a network of qubits -- encoded in the states of artificial atoms with no direct coupling -- interacting via a common single-mode driven microwave resonator.
We observe high quantum fidelities and resilience to noise, despite the algorithm being trained in the ideal noise-free setting.
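A hedged sketch of the evolutionary-search idea in the entry above, reduced to a single qubit with an assumed drive Hamiltonian and a toy target state; the paper's resonator-mediated qubit network is not modelled here.

```python
# A genetic algorithm searches drive amplitudes that steer |0> to a target
# state, with state fidelity as the (noise-free) fitness; all names and
# numerical values are illustrative assumptions, not the paper's setup.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
target = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # toy target state

def fitness(pulses, dt=0.2):
    """Fidelity |<target|psi>|^2 after applying the piecewise-constant drive."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for a in pulses:
        psi = expm(-1j * dt * (0.5 * sz + 0.5 * a * sx)) @ psi
    return abs(np.vdot(target, psi)) ** 2

rng = np.random.default_rng(1)
pop = rng.uniform(-2, 2, size=(40, 8))            # 40 candidates, 8 pulses each
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]       # keep the 10 fittest
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, 8))
    pop = np.vstack([parents, children])          # elitism + mutation
print("best fidelity:", max(fitness(p) for p in pop))
```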
arXiv Detail & Related papers (2022-06-29T14:34:00Z) - Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and difficult credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
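To make the objective concrete, the sketch below defines a pairwise contraction-cost model and a greedy ordering baseline; this is the quantity an RL agent with a GNN policy would learn to minimise, not the paper's agent itself. The tensor description as index-to-dimension maps and the example chain are assumptions.

```python
# Not the paper's agent: a small cost model for pairwise tensor contraction and
# a greedy ordering baseline, to make concrete the objective an RL+GNN policy
# would optimize; tensors are described only by their {index: dimension} maps.
import itertools
import math

def contract_cost(a, b):
    """FLOP-style cost of contracting two tensors given as {index: dim} dicts."""
    return math.prod({**a, **b}.values())

def contract(a, b):
    """The result keeps only indices that appear in exactly one of the inputs."""
    shared = set(a) & set(b)
    return {i: d for i, d in {**a, **b}.items() if i not in shared}

def greedy_order(tensors):
    """Repeatedly contract the cheapest pair; pair indices refer to the working list."""
    tensors, total, order = list(tensors), 0, []
    while len(tensors) > 1:
        i, j = min(itertools.combinations(range(len(tensors)), 2),
                   key=lambda p: contract_cost(tensors[p[0]], tensors[p[1]]))
        order.append((i, j))
        total += contract_cost(tensors[i], tensors[j])
        merged = contract(tensors[i], tensors[j])
        tensors = [t for k, t in enumerate(tensors) if k not in (i, j)] + [merged]
    return order, total

# A 4-tensor chain with bond dimension 8 and open legs of dimension 2.
chain = [{"x0": 2, "b0": 8}, {"b0": 8, "b1": 8},
         {"b1": 8, "b2": 8}, {"b2": 8, "x3": 2}]
print(greedy_order(chain))
```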
arXiv Detail & Related papers (2022-04-18T21:45:13Z) - Self-Correcting Quantum Many-Body Control using Reinforcement Learning
with Tensor Networks [0.0]
We present a novel framework for efficiently controlling quantum many-body systems based on reinforcement learning (RL).
We show that RL agents are capable of finding universal controls, of learning how to optimally steer previously unseen many-body states, and of adapting control protocols on the fly when the quantum dynamics is subject to perturbations.
arXiv Detail & Related papers (2022-01-27T20:14:09Z) - Quantum Architecture Search via Continual Reinforcement Learning [0.0]
This paper proposes a machine learning-based method to construct quantum circuit architectures.
We present the Probabilistic Policy Reuse with deep Q-learning (PPR-DQL) framework to tackle this circuit design challenge.
arXiv Detail & Related papers (2021-12-10T19:07:56Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - The Hintons in your Neural Network: a Quantum Field Theory View of Deep
Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z) - Chance-Constrained Control with Lexicographic Deep Reinforcement
Learning [77.34726150561087]
This paper proposes a lexicographic Deep Reinforcement Learning (DeepRL)-based approach to chance-constrained Markov Decision Processes.
A lexicographic version of the well-known DeepRL algorithm DQN is also proposed and validated via simulations.
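A minimal sketch of one way the lexicographic idea in the entry above can be realised at action-selection time, assuming separate Q-estimates for constraint violation and for reward; the slack parameter and array shapes are illustrative assumptions, not the paper's algorithm.

```python
# Lexicographic greedy action selection: the safety objective is prioritized,
# and the reward objective only breaks ties among near-safest actions.
import numpy as np

def lexicographic_greedy(q_safety, q_reward, slack=0.05):
    """q_safety[a]: estimated chance-constraint violation (lower is better).
    q_reward[a]: estimated return. Returns the chosen action index."""
    best_safety = q_safety.min()
    admissible = np.flatnonzero(q_safety <= best_safety + slack)
    return int(admissible[np.argmax(q_reward[admissible])])

# Example: action 1 is safest, but action 2 is almost as safe and pays more.
print(lexicographic_greedy(np.array([0.30, 0.02, 0.04]),
                           np.array([1.0, 0.5, 0.9])))   # prints 2
```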
arXiv Detail & Related papers (2020-10-19T13:09:14Z) - Automatic low-bit hybrid quantization of neural networks through meta
learning [22.81983466720024]
We employ a meta-learning method to automatically realize low-bit hybrid quantization of neural networks.
A MetaQuantNet, together with a Quantization function, is trained to generate the quantized weights for the target DNN.
With the best searched quantization policy, we subsequently retrain or finetune to further improve the performance of the quantized target network.
arXiv Detail & Related papers (2020-04-24T02:01:26Z)
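A minimal sketch of the low-bit step referenced in the last entry above: uniform symmetric fake-quantization of layer weights, with a hand-picked per-layer bit-width standing in for the searched hybrid policy. The MetaQuantNet itself is not reproduced; all shapes and bit-widths are illustrative assumptions.

```python
# Per-layer uniform symmetric quantization to k bits (fake quantization);
# the hybrid policy below is a placeholder for what a searched MetaQuantNet
# policy would choose per layer.
import numpy as np

def quantize_uniform(w, bits):
    """Map weights onto a symmetric k-bit grid and back."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale).clip(-levels, levels) * scale

rng = np.random.default_rng(0)
layer_weights = [rng.normal(size=(64, 64)), rng.normal(size=(64, 10))]
hybrid_policy = [4, 8]                      # bits per layer, e.g. from a search
for w, bits in zip(layer_weights, hybrid_policy):
    wq = quantize_uniform(w, bits)
    print(bits, "bits, mean squared quantization error:", np.mean((w - wq) ** 2))
```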
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.