Benchmarking quantum tomography completeness and fidelity with machine
learning
- URL: http://arxiv.org/abs/2103.01535v4
- Date: Sat, 23 Oct 2021 05:46:21 GMT
- Title: Benchmarking quantum tomography completeness and fidelity with machine
learning
- Authors: Yong Siah Teo, Seongwook Shin, Hyunseok Jeong, Yosep Kim, Yoon-Ho Kim,
Gleb I. Struchalin, Egor V. Kovlakov, Stanislav S. Straupe, Sergei P. Kulik,
Gerd Leuchs, Luis L. Sanchez-Soto
- Abstract summary: We train convolutional neural networks to predict whether or not a set of measurements is informationally complete to uniquely reconstruct any given quantum state with no prior information.
Networks are trained to recognize the fidelity and a reliable measure for informational completeness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We train convolutional neural networks to predict whether or not a set of
measurements is informationally complete to uniquely reconstruct any given
quantum state with no prior information. In addition, we perform fidelity
benchmarking based on this measurement set without explicitly carrying out
state tomography. The networks are trained to recognize the fidelity and a
reliable measure for informational completeness. By gradually accumulating
measurements and data, these trained convolutional networks can efficiently
establish a compressive quantum-state characterization scheme by accelerating
runtime computation and greatly reducing systematic drifts in experiments. We
confirm the potential of this machine-learning approach by presenting
experimental results for both spatial-mode and multiphoton systems of large
dimensions. These predictions are further shown to improve when the networks
are trained with additional bootstrapped training sets from real experimental
data. Using a realistic beam-profile displacement error model for
Hermite-Gaussian sources, we further demonstrate numerically that the
orders-of-magnitude reduction in certification time with trained networks
greatly increases the computation yield of a large-scale quantum processor
using these sources, before state fidelity deteriorates significantly.
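Concretely, the certification network described here can be viewed as a standard two-headed classifier/regressor over accumulated measurement data. The sketch below is a minimal PyTorch illustration of that setup, not the authors' actual architecture: the input grid shape, the layer sizes, and the name `TomographyBenchmarkCNN` are assumptions made for the example.

```python
# Minimal sketch (not the paper's architecture): a CNN mapping a grid of
# normalized measurement frequencies to (i) a logit for whether the measurement
# set is informationally complete (IC) and (ii) a fidelity estimate in [0, 1].
import torch
import torch.nn as nn

class TomographyBenchmarkCNN(nn.Module):        # hypothetical name
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.ic_head = nn.Linear(32 * 16, 1)        # completeness logit
        self.fidelity_head = nn.Linear(32 * 16, 1)  # regressed fidelity

    def forward(self, freqs):                       # freqs: (B, 1, bases, outcomes)
        h = self.features(freqs)
        return self.ic_head(h), torch.sigmoid(self.fidelity_head(h))

# One illustrative training step: BCE for completeness, MSE for fidelity.
model = TomographyBenchmarkCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
freqs = torch.rand(8, 1, 16, 16)                    # stand-in measurement data
ic_label = torch.randint(0, 2, (8, 1)).float()
fid_label = torch.rand(8, 1)
ic_logit, fid_pred = model(freqs)
loss = nn.functional.binary_cross_entropy_with_logits(ic_logit, ic_label) \
     + nn.functional.mse_loss(fid_pred, fid_label)
opt.zero_grad(); loss.backward(); opt.step()
```

Because certification in this setup is a single forward pass over the current frequency grid, it can be repeated cheaply as measurements accumulate, which is where the runtime advantage over explicit tomographic reconstruction comes from.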
Related papers
- Dissipation-driven quantum generative adversarial networks [11.833077116494929]
We introduce a novel dissipation-driven quantum generative adversarial network (DQGAN) architecture specifically tailored for generating classical data.
Classical data are encoded into the input-layer qubits via strong, tailored dissipation processes.
We extract both the generated data and the classification results by measuring the observables of the steady state of the output qubits.
arXiv Detail & Related papers (2024-08-28T07:41:58Z)
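The readout step described in this entry, measuring observables of the steady state of dissipative output qubits, can be imitated in a few lines with QuTiP. The two-qubit Hamiltonian, coupling, and decay rates below are placeholders chosen for illustration, not the DQGAN construction.

```python
# Illustrative only: steady-state readout of a small dissipative qubit system.
import numpy as np
from qutip import tensor, qeye, sigmax, sigmaz, sigmam, steadystate, expect

x = 0.7                                   # classical datum encoded in the drive
H = x * tensor(sigmax(), qeye(2)) + 0.5 * tensor(sigmaz(), sigmaz())
c_ops = [np.sqrt(0.2) * tensor(sigmam(), qeye(2)),   # input-qubit decay
         np.sqrt(0.1) * tensor(qeye(2), sigmam())]   # output-qubit decay

rho_ss = steadystate(H, c_ops)            # fixed point of the Lindblad dynamics
generated = expect(tensor(qeye(2), sigmaz()), rho_ss)  # read out "generated" data
print(generated)
```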
- Deep Neural Network-assisted improvement of quantum compressed sensing tomography [0.0]
We propose a Deep Neural Network-based post-processing to improve the initial reconstruction provided by compressed sensing.
The idea is to treat the estimated state as a noisy input for the network and perform a deep-supervised denoising task.
We demonstrate through numerical experiments the improvement obtained by the denoising process and exploit the possibility of looping the inference scheme.
arXiv Detail & Related papers (2024-05-16T12:41:25Z)
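A minimal sketch of the denoising idea above, assuming a plain MLP over a vectorized density-matrix estimate; the dimensions, the architecture, and the looped-inference depth are illustrative, not the paper's network.

```python
# Sketch: treat a compressed-sensing estimate of a density matrix as a noisy
# input, map it toward the true state, and loop the inference a few times.
import torch
import torch.nn as nn

d = 4                                          # Hilbert-space dimension (assumed)
n = 2 * d * d                                  # real/imag parts, vectorized
denoiser = nn.Sequential(nn.Linear(n, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, n))

def loop_denoise(rho_vec, n_loops=3):
    """Feed the network its own output repeatedly (looped inference)."""
    out = rho_vec
    for _ in range(n_loops):
        out = denoiser(out)
    return out

noisy_estimate = torch.randn(1, n)             # stand-in for a CS reconstruction
refined = loop_denoise(noisy_estimate)
```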
- Physics-informed neural networks for gravity currents reconstruction from limited data [0.0]
The present work investigates the use of physics-informed neural networks (PINNs) for the 3D reconstruction of unsteady gravity currents from limited data.
In the PINN context, the flow fields are reconstructed by training a neural network whose objective function penalizes the mismatch between the network predictions and the observed data.
arXiv Detail & Related papers (2022-11-03T11:27:29Z)
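The PINN objective referenced above combines a data-mismatch term with a penalty on the residual of the governing equations. The sketch below uses a toy inviscid Burgers-type PDE in place of the gravity-current equations; the network size and sampling are illustrative.

```python
# Minimal PINN-style objective: fit u(x, t) to sparse data while penalizing the
# residual of a placeholder PDE, u_t + u * u_x = 0, at collocation points.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def pde_residual(xt):
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]      # columns are (x, t)
    return u_t + u * u_x                       # residual of the toy PDE

xt_data = torch.rand(32, 2); u_data = torch.rand(32, 1)  # sparse observations
xt_coll = torch.rand(256, 2)                   # collocation points
loss = nn.functional.mse_loss(net(xt_data), u_data) \
     + (pde_residual(xt_coll) ** 2).mean()
loss.backward()
```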
- Neural network enhanced measurement efficiency for molecular groundstates [63.36515347329037]
We adapt common neural network models to learn complex groundstate wavefunctions for several molecular qubit Hamiltonians.
We find that using a neural network model provides a robust improvement over using single-copy measurement outcomes alone to reconstruct observables.
arXiv Detail & Related papers (2022-06-30T17:45:05Z)
- Quantum Compressive Sensing: Mathematical Machinery, Quantum Algorithms, and Quantum Circuitry [10.286119086329762]
Compressive sensing is a protocol that facilitates reconstruction of large signals from relatively few measurements.
Recent efforts in the literature consider instead a data-driven approach, training tensor networks to learn the structure of signals of interest.
We present an alternative "quantum" protocol, in which the state of the tensor network is a quantum state over a set of entangled qubits.
arXiv Detail & Related papers (2022-04-27T16:20:28Z)
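The classical protocol this entry builds on can be stated compactly: recover a sparse signal from far fewer random linear measurements than unknowns. Below is a textbook ISTA (iterative soft-thresholding) baseline in NumPy; the tensor-network and quantum variants in the paper replace this prior and solver.

```python
# Classical compressed sensing: recover a k-sparse signal x from m << n random
# linear measurements y = A @ x via iterative soft thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                         # signal size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):                         # gradient step + soft threshold
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

print(np.linalg.norm(x - x_true))            # reconstruction error (should be small)
```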
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
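The few-bit measurement model behind this entry is easy to reproduce: sample a noisy complex sinusoid and quantize its in-phase and quadrature components separately. The uniform mid-rise quantizer and the parameter choices below are illustrative assumptions, not the paper's exact pipeline.

```python
# Data model sketch: a complex sinusoid whose I/Q samples are quantized to b bits.
import numpy as np

def quantize(x, bits, x_max=1.0):
    """Uniform mid-rise quantizer on [-x_max, x_max]."""
    levels = 2 ** bits
    step = 2 * x_max / levels
    return np.clip(np.floor(x / step) * step + step / 2, -x_max, x_max)

n, f, bits = 64, 0.12, 3                     # samples, normalized freq, bit depth
t = np.arange(n)
s = np.exp(2j * np.pi * f * t)               # complex sinusoid
noisy = s + 0.05 * (np.random.randn(n) + 1j * np.random.randn(n))
iq = quantize(noisy.real, bits) + 1j * quantize(noisy.imag, bits)
# An estimation network would take (iq.real, iq.imag) and regress f; the paper's
# one-bit vs. three-bit contrast corresponds to bits=1 vs. bits=3 here.
```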
- Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer [61.85316480370141]
We study data-free quantization and pruning by transferring knowledge from trained large networks to compact networks.
Our data-free compact networks achieve competitive accuracy to networks trained and fine-tuned with training data.
arXiv Detail & Related papers (2020-10-14T18:02:55Z)
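Data-free transfer can be caricatured as distillation on synthetic inputs: a compact student matches the trained teacher's output distribution without ever seeing the training set. The random-noise inputs below are a deliberate simplification; published methods synthesize inputs far more carefully.

```python
# Sketch of data-free knowledge transfer: distill a trained "teacher" into a
# compact "student" using synthetic inputs instead of the original data.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    x = torch.randn(64, 32)                  # synthetic, data-free inputs
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    # KL divergence between teacher and student output distributions
    loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1), reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
```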
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
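The "un-trained network" idea above fits the weights of a randomly initialized conv net, from a fixed random input, so that its output agrees with the random measurements; no training data is involved. The sizes and the measurement matrix below are illustrative.

```python
# Un-trained network as an image prior: fit a small conv net so its output
# matches m random linear measurements of a target image.
import torch
import torch.nn as nn

torch.manual_seed(0)
img = torch.rand(1, 1, 16, 16)               # stand-in "true" image
A = torch.randn(64, 16 * 16) / 8.0           # random measurement matrix (m = 64)
y = A @ img.flatten()                        # observed measurements

net = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
z = torch.randn(1, 8, 16, 16)                # fixed random input, never updated
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(500):                         # gradient descent on the weights only
    x_hat = net(z)
    loss = ((A @ x_hat.flatten() - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```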
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)
- On transfer learning of neural networks using bi-fidelity data for uncertainty propagation [0.0]
We explore two transfer learning approaches that use training data generated from both high- and low-fidelity models.
In the first approach, a neural network mapping the inputs to the outputs of interest is trained on the low-fidelity data.
The high-fidelity data are then used either to adapt the parameters of the upper layer(s) of the low-fidelity network, or to train a simpler neural network that maps the output of the low-fidelity network to that of the high-fidelity model.
arXiv Detail & Related papers (2020-02-11T15:56:11Z)
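A compact sketch of the first approach described in the entry above: pre-train on plentiful low-fidelity data, then freeze the lower layers and adapt only the upper layer on scarce high-fidelity data. The model sizes, step counts, and random data are illustrative.

```python
# Bi-fidelity transfer learning sketch: low-fidelity pre-training, then
# high-fidelity fine-tuning of the top layer only.
import torch
import torch.nn as nn

body = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 1)
model = nn.Sequential(body, head)

def fit(params, x, y, steps=200):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()

x_lo, y_lo = torch.rand(1000, 8), torch.rand(1000, 1)  # cheap low-fidelity data
x_hi, y_hi = torch.rand(20, 8), torch.rand(20, 1)      # scarce high-fidelity data

fit(model.parameters(), x_lo, y_lo)          # 1) train everything on low fidelity
for p in body.parameters():                  # 2) freeze the shared lower layers
    p.requires_grad_(False)
fit(head.parameters(), x_hi, y_hi)           # 3) adapt the top layer only
```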
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.