Neural-network-based parameter estimation for quantum detection
- URL: http://arxiv.org/abs/2012.07677v2
- Date: Thu, 12 Aug 2021 07:52:41 GMT
- Title: Neural-network-based parameter estimation for quantum detection
- Authors: Yue Ban, Javier Echanobe, Yongcheng Ding, Ricardo Puebla, and Jorge
Casanova
- Abstract summary: In the context of quantum detection schemes, neural networks find a natural playground.
We demonstrate that adequately trained neural networks make it possible to characterize a target with minimal knowledge of the underlying physical model.
We exemplify the method with a development for $^{171}$Yb$^{+}$ atomic sensors.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial neural networks map input data to output results by
approximately encoding the function that relates them. This is achieved after
training the network with a collection of known inputs and results, leading to
an adjustment of the neuron connections and biases. In the context of quantum
detection schemes, neural networks find a natural playground. In particular, in
the presence of a target, a quantum sensor delivers a response, i.e., the input
data, which can be subsequently processed by a neural network that outputs the
target features. We demonstrate that adequately trained neural networks make it
possible to characterize a target with minimal knowledge of the underlying physical
model, in regimes where the quantum sensor presents complex responses, and
under a significant shot noise due to a reduced number of measurements. We
exemplify the method with a development for $^{171}$Yb$^{+}$ atomic sensors.
However, our protocol is general, thus applicable to arbitrary quantum
detection scenarios.
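In practice, the protocol amounts to supervised regression: responses of the sensor to known targets serve as training inputs, and the target parameters serve as labels. The following minimal sketch illustrates this idea under assumptions made here for brevity, namely a Rabi-type response $P(t)=\sin^2(\omega t/2)$, binomial shot noise from a finite number of measurements, and a small off-the-shelf multilayer perceptron; it is not the authors' network nor their $^{171}$Yb$^{+}$ sensor model.
```python
# Minimal sketch, not the authors' implementation: a small network learns to map a
# noisy quantum-sensor response onto the parameter that produced it. The response
# model P(t) = sin^2(omega*t/2), the shot-noise model, and all hyperparameters
# below are illustrative assumptions.
import numpy as np
import torch
from torch import nn

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)      # probe times (arbitrary units)
n_shots = 100                      # few measurements per point -> visible shot noise

def sensor_response(omega):
    """Excitation probabilities estimated from n_shots binomial samples per probe time."""
    p = np.sin(0.5 * omega * t) ** 2
    return rng.binomial(n_shots, p) / n_shots

# Training data: noisy responses (inputs) paired with the known parameters (labels).
omegas = rng.uniform(2.0, 20.0, size=5000)
X = np.stack([sensor_response(w) for w in omegas]).astype(np.float32)
y = omegas.astype(np.float32).reshape(-1, 1)

model = nn.Sequential(nn.Linear(t.size, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X_t, y_t = torch.from_numpy(X), torch.from_numpy(y)
for epoch in range(500):           # simple full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    optimizer.step()

# Inference: estimate the parameter behind a previously unseen, noisy response.
omega_true = 7.3
x_new = torch.from_numpy(sensor_response(omega_true).astype(np.float32))
print(f"true omega = {omega_true:.2f}, network estimate = {model(x_new).item():.2f}")
```
Because every training input is a noisy response generated from a known parameter, the network in this sketch implicitly learns to average out shot noise, which is the regime targeted in the abstract.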
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z) - Echo-evolution data generation for quantum error mitigation via neural
networks [0.0]
We propose a physics-motivated method to generate training data for quantum error mitigation via neural networks.
Under this method, the initial state evolves forward and backward in time, returning to the initial state at the end of evolution.
We demonstrate that a feed-forward fully connected neural network trained on echo-evolution-generated data can correct results of forward-in-time evolution.
arXiv Detail & Related papers (2023-11-01T12:40:10Z) - Quantum Process Learning Through Neural Emulation [3.7228085662092845]
We introduce a neural network that emulates the unknown process by constructing an internal representation of the input ensemble.
We show that our model exhibits high accuracy in applications to quantum computing, quantum photonics, and quantum many-body physics.
arXiv Detail & Related papers (2023-08-17T06:53:58Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Quantum activation functions for quantum neural networks [0.0]
We show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information.
Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
arXiv Detail & Related papers (2022-01-10T23:55:49Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Performance Bounds for Neural Network Estimators: Applications in Fault
Detection [2.388501293246858]
We exploit recent results in quantifying the robustness of neural networks to construct and tune a model-based anomaly detector.
In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation.
arXiv Detail & Related papers (2021-03-22T19:23:08Z) - The Hintons in your Neural Network: a Quantum Field Theory View of Deep
Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z) - Variational learning for quantum artificial neural networks [0.0]
We first review a series of recent works describing the implementation of artificial neurons and feed-forward neural networks on quantum processors.
We then present an original realization of efficient individual quantum nodes based on variational unsampling protocols.
While keeping full compatibility with the overall memory-efficient feed-forward architecture, our constructions effectively reduce the quantum circuit depth required to determine the activation probability of single neurons.
arXiv Detail & Related papers (2021-03-03T16:10:15Z) - Decentralizing Feature Extraction with Quantum Convolutional Neural
Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
An input speech is first up-streamed to a quantum computing server to extract Mel-spectrogram.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.