Deep Neural Network-assisted improvement of quantum compressed sensing tomography
- URL: http://arxiv.org/abs/2405.10052v1
- Date: Thu, 16 May 2024 12:41:25 GMT
- Title: Deep Neural Network-assisted improvement of quantum compressed sensing tomography
- Authors: Adriano Macarone-Palmieri, Leonardo Zambrano, Maciej Lewenstein, Antonio Acin, Donato Farina
- Abstract summary: We propose a Deep Neural Network-based post-processing to improve the initial reconstruction provided by compressed sensing.
The idea is to treat the estimated state as a noisy input for the network and perform a deep-supervised denoising task.
We demonstrate through numerical experiments the improvement obtained by the denoising process and exploit the possibility of looping the inference scheme.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum compressed sensing is the fundamental tool for low-rank density matrix tomographic reconstruction in the informationally incomplete case. We examine situations where the acquired information is not enough to allow one to obtain a precise compressed sensing reconstruction. In this scenario, we propose a Deep Neural Network-based post-processing to improve the initial reconstruction provided by compressed sensing. The idea is to treat the estimated state as a noisy input for the network and perform a deep-supervised denoising task. After the network is applied, a projection onto the space of feasible density matrices is performed to obtain an improved final state estimation. We demonstrate through numerical experiments the improvement obtained by the denoising process and exploit the possibility of looping the inference scheme to obtain further advantages. Finally, we test the resilience of the approach to out-of-distribution data.
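The abstract leaves the projection step unspecified; a standard choice is to Hermitize the matrix and project its eigenvalue spectrum onto the probability simplex, which yields the closest density matrix in Frobenius norm. Below is a minimal NumPy sketch under that assumption (function names are illustrative):

```python
import numpy as np

def project_to_density_matrix(M: np.ndarray) -> np.ndarray:
    """Map an arbitrary square matrix to the nearest density matrix
    (Hermitian, positive semidefinite, unit trace) by Hermitizing and
    projecting the eigenvalues onto the probability simplex."""
    H = (M + M.conj().T) / 2                      # nearest Hermitian matrix
    evals, evecs = np.linalg.eigh(H)
    # Euclidean projection of the spectrum onto {lam >= 0, sum(lam) = 1}
    mu = np.sort(evals)[::-1]
    cssv = np.cumsum(mu) - 1.0
    rho = np.nonzero(mu - cssv / (np.arange(len(mu)) + 1.0) > 0)[0][-1]
    lam = np.maximum(evals - cssv[rho] / (rho + 1.0), 0.0)
    return (evecs * lam) @ evecs.conj().T         # V diag(lam) V^dagger

# Example: clean up a noisy estimate before the next inference loop
noisy = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
rho_hat = project_to_density_matrix(noisy)
assert np.isclose(np.trace(rho_hat).real, 1.0)
```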
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
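For context, here is a minimal NumPy sketch of the classical AMP recursion that such networks unroll, with a soft-threshold denoiser and the Onsager correction term; the threshold scaling `alpha` and the problem sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(y, A, n_iters=30, alpha=1.5):
    """Classical AMP for y = A x + noise. Unrolled networks replace
    soft_threshold with learned layers; the guarantee above says those
    layers converge to the Bayes-AMP denoisers under a product prior."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)                 # noise estimate
        x_new = soft_threshold(x + A.T @ z, alpha * sigma)
        z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)  # Onsager term
        x = x_new
    return x

# Toy usage: recover a 20-sparse signal from 250 Gaussian measurements
rng = np.random.default_rng(1)
m, n, k = 250, 500, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = amp(A @ x0 + 0.01 * rng.standard_normal(m), A)
```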
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Enhanced quantum state preparation via stochastic prediction of neural network [0.8287206589886881]
In this paper, we explore an intriguing avenue for enhancing algorithm effectiveness by exploiting the knowledge blindness of neural networks.
Our approach centers around a machine learning algorithm utilized for preparing arbitrary quantum states in a semiconductor double quantum dot system.
By leveraging predictions generated by the neural network, we are able to guide the optimization process to escape local optima.
arXiv Detail & Related papers (2023-07-27T09:11:53Z)
- One-Bit Compressive Sensing: Can We Go Deep and Blind? [15.231885712212083]
One-bit compressive sensing is concerned with the accurate recovery of an underlying sparse signal of interest from noisy one-bit measurements.
We present a novel data-driven and model-based methodology that achieves blind recovery.
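As a toy illustration of the measurement model (the paper's blind recovery network is not reproduced here): one-bit compressive sensing keeps only the sign of each noisy linear measurement, so all amplitude information is discarded and the signal is recoverable at best up to scale. Dimensions and noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 512, 10                          # signal dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian sensing matrix
y = np.sign(A @ x + 0.05 * rng.standard_normal(m))  # one-bit data in {-1, +1}
```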
arXiv Detail & Related papers (2022-03-13T16:06:56Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
The deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing our network's results against the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
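A hedged sketch of the kind of input SignalNet consumes: a sum of complex sinusoids whose in-phase and quadrature samples are uniformly quantized to b bits. The generator below is an illustrative assumption; the paper's exact signal model and quantizer may differ.

```python
import numpy as np

def uquant(v, b):
    """Uniform b-bit mid-rise quantizer on [-1, 1]."""
    L = 2 ** b
    v = np.clip(v, -1.0, 1.0 - 1e-9)
    return (np.floor((v + 1.0) / 2.0 * L) + 0.5) * 2.0 / L - 1.0

def quantized_iq(K, N=64, b=3, seed=0):
    """N quantized I/Q samples of K complex sinusoids plus noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(N)
    s = sum(a * np.exp(2j * np.pi * f * t)
            for a, f in zip(rng.uniform(0.5, 1.0, K), rng.uniform(-0.5, 0.5, K)))
    s += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    s /= np.abs(s).max()                        # normalize into quantizer range
    return uquant(s.real, b) + 1j * uquant(s.imag, b)

x_three_bit, x_one_bit = quantized_iq(3, b=3), quantized_iq(3, b=1)
```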
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Quantized Proximal Averaging Network for Analysis Sparse Coding [23.080395291046408]
We unfold an iterative algorithm into a trainable network that facilitates learning the sparsity prior before quantization.
We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction.
arXiv Detail & Related papers (2021-05-13T12:05:35Z)
- Benchmarking quantum tomography completeness and fidelity with machine learning [0.0]
We train convolutional neural networks to predict whether a set of measurements is informationally complete, i.e., sufficient to uniquely reconstruct any given quantum state with no prior information.
The networks are trained to predict the fidelity and a reliable measure of informational completeness.
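For known measurement operators, informational completeness is a linear-algebra property: the operators must span the full d²-dimensional operator space. The rank check below (names illustrative) makes concrete the label that the networks learn to predict from data.

```python
import numpy as np
from itertools import product

def informationally_complete(ops, d):
    """True iff the measurement operators span all d x d matrices."""
    S = np.array([op.reshape(-1) for op in ops])
    return np.linalg.matrix_rank(S) == d * d

# Pauli basis for one qubit, tensored up to two qubits
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
two_qubit = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)]

print(informationally_complete(two_qubit, 4))       # True: 16 operators span all
print(informationally_complete(two_qubit[:10], 4))  # False: incomplete subset
```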
arXiv Detail & Related papers (2021-03-02T07:30:32Z)
- Uncertainty Quantification in Deep Residual Neural Networks [0.0]
Uncertainty quantification is an important and challenging problem in deep learning.
Previous methods rely on dropout layers, which are absent from many modern deep architectures, or on batch normalization, which is sensitive to batch size.
We show that training residual networks with stochastic depth can be interpreted as a variational approximation to the posterior over the network weights.
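If that variational reading holds, predictive uncertainty can be sampled by keeping the stochastic-depth dropping active at test time and averaging several forward passes, in the spirit of MC dropout. The PyTorch block below is a hedged sketch with illustrative layer sizes, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Residual block whose transform branch is skipped with probability p.
    Leaving the skipping on at inference makes the network stochastic,
    so repeated forward passes yield a predictive distribution."""
    def __init__(self, dim, p=0.2):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.p = p

    def forward(self, x):
        if torch.rand(()) < self.p:
            return x                       # drop the residual branch
        return x + self.f(x)

net = nn.Sequential(*[StochasticDepthBlock(16) for _ in range(8)], nn.Linear(16, 1))
x = torch.randn(32, 16)
samples = torch.stack([net(x) for _ in range(50)])   # 50 stochastic passes
pred, uncertainty = samples.mean(0), samples.std(0)  # mean prediction + spread
```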
arXiv Detail & Related papers (2020-07-09T16:05:37Z)
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
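A compact PyTorch sketch of that recovery principle: fix a random input, fit the weights of an untrained convolutional generator so that its output matches the random measurements, and take the output as the reconstruction. The architecture, test signal, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, m = 1024, 300                               # signal length, measurements
A = torch.randn(m, n) / m ** 0.5               # random Gaussian measurements
x_true = torch.zeros(n); x_true[::37] = 1.0    # a simple structured test signal
y = A @ x_true

gen = nn.Sequential(                           # small *untrained* generator
    nn.Conv1d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),
    nn.Conv1d(16, 8, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),
    nn.Conv1d(8, 1, 3, padding=1),
)
z = torch.randn(1, 8, n // 4)                  # fixed random input code

opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for _ in range(2000):                          # fit weights to the measurements
    opt.zero_grad()
    loss = ((A @ gen(z).reshape(n) - y) ** 2).sum()
    loss.backward()
    opt.step()
x_hat = gen(z).reshape(n).detach()             # reconstruction
```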
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
We analyze and demonstrate the superiority of the proposed feature map distortion for producing deep neural networks with higher test performance.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.