Machine Learning based Discrimination for Excited State Promoted Readout
- URL: http://arxiv.org/abs/2210.08574v1
- Date: Sun, 16 Oct 2022 16:09:46 GMT
- Title: Machine Learning based Discrimination for Excited State Promoted Readout
- Authors: Utkarsh Azad and Helena Zhang
- Abstract summary: A technique known as excited state promoted (ESP) readout was proposed to reduce the effect of qubit relaxation during readout.
In this work, we use readout data from five-qubit IBMQ devices to evaluate the effectiveness of deep neural networks for single-qubit and multi-qubit state discrimination.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A limiting factor for readout fidelity for superconducting qubits is the
relaxation of the qubit to the ground state before the time needed for the
resonator to reach its final target state. A technique known as excited state
promoted (ESP) readout was proposed to reduce this effect and further improve
the readout contrast on superconducting hardware. In this work, we use readout
data from five-qubit IBMQ devices to measure the effectiveness of using deep
neural networks, like feedforward neural networks, and various classification
algorithms, like k-nearest neighbors, decision trees, and Gaussian naive Bayes,
for single-qubit and multi-qubit discrimination. These methods were compared to
standardly used linear and quadratic discriminant analysis algorithms based on
their qubit-state-assignment fidelity performance, robustness to readout
crosstalk, and training time.
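To make the comparison concrete, here is a minimal sketch (not the authors' code) of single-qubit state discrimination on simulated I/Q readout data, benchmarking the classifiers named in the abstract with scikit-learn. The Gaussian cloud parameters and the relaxation fraction are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: single-qubit readout discrimination on simulated I/Q data.
# Cloud centers, widths, and the relaxation fraction are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Ground state |0>: Gaussian cloud in the I/Q plane.
iq0 = rng.normal(loc=(-1.0, 0.0), scale=0.5, size=(n, 2))
# Excited state |1>: separate cloud, with a fraction that relaxes to |0>
# during readout (the error mechanism ESP readout is meant to suppress).
iq1 = rng.normal(loc=(+1.0, 0.0), scale=0.5, size=(n, 2))
relaxed = rng.random(n) < 0.08
iq1[relaxed] = rng.normal(loc=(-1.0, 0.0), scale=0.5, size=(relaxed.sum(), 2))

X = np.vstack([iq0, iq1])
y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=15),
    "Decision tree": DecisionTreeClassifier(max_depth=6),
    "Gaussian NB": GaussianNB(),
    "Feedforward NN": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300),
}

for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Assignment fidelity: 1 - [P(read 1 | prepared 0) + P(read 0 | prepared 1)] / 2
    p10 = np.mean(pred[y_te == 0] == 1)
    p01 = np.mean(pred[y_te == 1] == 0)
    print(f"{name:>14s}: assignment fidelity = {1 - 0.5 * (p10 + p01):.4f}")
```

The same interface extends to multi-qubit discrimination by concatenating the I/Q samples of all qubits and predicting bit-string labels, which is the setting where robustness to readout crosstalk differentiates the classifiers.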
Related papers
- Neural network based time-resolved state tomography of superconducting qubits [9.775471166288503]
We introduce a time-resolved neural network capable of full-state tomography for individual qubits.
This scalable approach, with a dedicated module per qubit, mitigated readout error by an order of magnitude under low signal-to-noise ratios.
arXiv Detail & Related papers (2023-12-13T08:09:12Z)
- On-Device Learning with Binary Neural Networks [2.7040098749051635]
We propose a continual learning (CL) solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs).
The choice of a binary network as backbone is essential to meet the constraints of low-power devices.
arXiv Detail & Related papers (2023-08-29T13:48:35Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require generating additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed on parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
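As a rough illustration of the blockwise training idea described in the Cascaded Forward summary above (not the authors' implementation), the sketch below trains each block of a small network independently with its own local classifier head, detaching the features passed between blocks so no gradients flow end to end. The block sizes, dataset, and training loop are placeholder assumptions.

```python
# Hypothetical sketch of cascaded, blockwise training without end-to-end backprop.
# Block sizes, toy dataset, and schedule are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 1000 samples, 32 features, 4 classes.
X = torch.randn(1000, 32)
y = torch.randint(0, 4, (1000,))

# Each cascaded block = feature extractor + its own label predictor.
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
])
heads = nn.ModuleList([nn.Linear(64, 4), nn.Linear(64, 4)])

loss_fn = nn.CrossEntropyLoss()

for i, (block, head) in enumerate(zip(blocks, heads)):
    opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()), lr=1e-2)
    for epoch in range(50):
        # Input to block i: detached output of the previously trained blocks.
        with torch.no_grad():
            h = X
            for prev in blocks[:i]:
                h = prev(h)
        logits = head(block(h))      # each block emits its own label distribution
        loss = loss_fn(logits, y)    # local loss only; no gradient to other blocks
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"block {i}: final local loss = {loss.item():.3f}")
```

At inference time the per-block predictions could be averaged or only the last head used; the summary only states that each block outputs a label distribution, so the aggregation rule here is a guess.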
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Scaling Qubit Readout with Hardware Efficient Machine Learning Architectures [0.0]
We propose a scalable approach to improve qubit-state discrimination, using a hierarchy of matched filters in conjunction with a significantly smaller, scalable neural network.
We achieve substantially higher readout accuracies (16.4% relative improvement) than the baseline with a scalable design that can be readily implemented on off-the-shelf FPGAs.
arXiv Detail & Related papers (2022-12-07T19:00:09Z)
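For intuition about the matched-filter front end mentioned in the Scaling Qubit Readout summary above (a sketch under assumptions, not the paper's hardware design), the code below builds a matched filter from the mean |0> and |1> readout traces and thresholds the filtered scalar; trace length, noise level, and the simple threshold (which a compact neural network would replace) are illustrative.

```python
# Hypothetical sketch: matched-filter readout discrimination on simulated traces.
# Trace length, noise level, and the plain threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, n = 200, 5000                         # samples per readout trace, shots per state

# Simulated demodulated readout traces (one quadrature) for |0> and |1>.
mean0 = np.zeros(T)
mean1 = np.linspace(0.0, 1.0, T)         # resonator response ramping toward |1>
traces0 = mean0 + rng.normal(0.0, 1.0, size=(n, T))
traces1 = mean1 + rng.normal(0.0, 1.0, size=(n, T))

# Matched filter: weight each time sample by the difference of the mean responses,
# estimated from a held-out calibration set.
calib0, calib1 = traces0[: n // 2], traces1[: n // 2]
kernel = calib1.mean(axis=0) - calib0.mean(axis=0)

def filtered(traces):
    """Project each trace onto the matched-filter kernel: one scalar per shot."""
    return traces @ kernel

s0 = filtered(traces0[n // 2:])
s1 = filtered(traces1[n // 2:])
threshold = 0.5 * (s0.mean() + s1.mean())   # a small NN could replace this threshold

p10 = np.mean(s0 > threshold)               # |0> shots misread as |1>
p01 = np.mean(s1 <= threshold)              # |1> shots misread as |0>
print(f"assignment fidelity = {1 - 0.5 * (p10 + p01):.4f}")
```

As the summary describes it, the small neural network operates on such filtered features rather than raw traces, which is what keeps the model compact enough for off-the-shelf FPGAs.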
- Enhancing Qubit Readout with Autoencoders [36.136619420474766]
This work proposes a novel readout classification method for superconducting qubits based on a neural network pre-trained with an autoencoder approach.
We demonstrate that this method can enhance classification performance, particularly for short and long measurement times.
arXiv Detail & Related papers (2022-11-30T19:38:58Z)
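A minimal sketch of that autoencoder-pretraining idea (assumed architecture and simulated data, not the paper's model): pre-train an autoencoder on unlabeled readout traces, then reuse the encoder as the front end of a state classifier.

```python
# Hypothetical sketch: autoencoder pre-training for readout classification.
# Layer sizes, trace simulation, and training schedule are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 64                                     # samples per (demodulated) readout trace

# Simulated traces: |1> traces carry a small ramp on top of the noise.
n = 4000
labels = torch.randint(0, 2, (n,))
ramp = torch.linspace(0, 1, T)
traces = torch.randn(n, T) + labels[:, None] * ramp

encoder = nn.Sequential(nn.Linear(T, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, T))

# Stage 1: unsupervised pre-training, reconstructing traces without labels.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    recon = decoder(encoder(traces))
    loss = nn.functional.mse_loss(recon, traces)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning of a small classifier head on the encoder features.
head = nn.Linear(4, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(200):
    logits = head(encoder(traces))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

acc = (head(encoder(traces)).argmax(dim=1) == labels).float().mean()
print(f"training-set assignment accuracy: {acc:.3f}")
```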
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the recommender setting, we propose a novel algorithm called SAROS that takes both kinds of feedback into account for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant improvements over the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network against the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
Experiments on several benchmark datasets and neural architectures show that binary networks learned with our method achieve state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
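To illustrate that gradient estimator (a sketch built from the standard square-wave Fourier series, not the authors' exact decomposition), the snippet below keeps the hard sign in the forward pass and backpropagates through a truncated sine-series approximation, sign(x) ≈ (4/π) Σ_{k=0}^{K} sin((2k+1)x)/(2k+1), which represents sign(x) for |x| < π.

```python
# Hypothetical sketch: forward pass uses the hard sign, backward pass uses the
# derivative of a truncated Fourier (sine-series) approximation of sign(x).
# The truncation order K and the clamping range are illustrative assumptions.
import math
import torch

K = 5  # number of sine terms kept in the truncated series

class FourierSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx of (4/pi) * sum_k sin((2k+1)x)/(2k+1) = (4/pi) * sum_k cos((2k+1)x),
        # restricted to |x| < pi, where the series actually represents sign(x).
        xc = x.clamp(-math.pi + 1e-3, math.pi - 1e-3)
        grad = torch.zeros_like(x)
        for k in range(K):
            grad = grad + torch.cos((2 * k + 1) * xc)
        grad = grad * (4.0 / math.pi)
        return grad_output * grad

# Usage: binarize weights/activations with FourierSign.apply(x) inside a BNN layer.
x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
y = FourierSign.apply(x).sum()
y.backward()
print(x.grad)
```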
- Deep Neural Network Discrimination of Multiplexed Superconducting Qubit States [39.26291658500249]
We present multi-qubit readout using neural networks as state discriminators.
We find that fully-connected feedforward neural networks increase the qubit-state-assignment fidelity for our system.
arXiv Detail & Related papers (2021-02-24T19:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences arising from its use.