Learning a quantum computer's capability using convolutional neural
networks
- URL: http://arxiv.org/abs/2304.10650v1
- Date: Thu, 20 Apr 2023 21:25:33 GMT
- Title: Learning a quantum computer's capability using convolutional neural
networks
- Authors: Daniel Hothem, Kevin Young, Tommie Catanach, and Timothy Proctor
- Abstract summary: We investigate using artificial neural networks to learn an approximation to a processor's capability function.
We show that convolutional neural networks can accurately model a processor's capability when that processor experiences gate-dependent, time-dependent, and context-dependent errors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The computational power of contemporary quantum processors is limited by
hardware errors that cause computations to fail. In principle, each quantum
processor's computational capabilities can be described with a capability
function that quantifies how well a processor can run each possible quantum
circuit (i.e., program), as a map from circuits to the processor's success
rates on those circuits. However, capability functions are typically unknown
and challenging to model, as the particular errors afflicting a specific
quantum processor are a priori unknown and difficult to completely
characterize. In this work, we investigate using artificial neural networks to
learn an approximation to a processor's capability function. We explore how to
define the capability function, and we explain how data for training neural
networks can be efficiently obtained for a capability function defined using
process fidelity. We then investigate using convolutional neural networks to
model a quantum computer's capability. Using simulations, we show that
convolutional neural networks can accurately model a processor's capability
when that processor experiences gate-dependent, time-dependent, and
context-dependent stochastic errors. We then discuss some challenges to
creating useful neural network capability models for experimental processors,
such as generalizing beyond training distributions and modelling the effects of
coherent errors. Lastly, we apply our neural networks to model the capabilities
of cloud-access quantum computing systems, obtaining moderate prediction
accuracy (average absolute error around 2-5%).
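The capability-function idea above can be sketched as code: encode each circuit as a gate grid (qubits x layers), run it through a small convolutional network, and squash the output to a success rate in (0, 1). This is a minimal illustrative sketch in plain NumPy with randomly initialized, untrained weights; the one-hot encoding, kernel sizes, and pooling choice are assumptions for illustration, not the architecture used in the paper.

```python
import numpy as np

np.random.seed(0)

# Hypothetical encoding: a circuit is a (NUM_QUBITS x DEPTH) grid where
# entry (q, t) is an integer label of the gate on qubit q at layer t.
NUM_QUBITS, DEPTH, NUM_GATES = 4, 8, 5

def encode_circuit(circuit):
    """One-hot encode a gate grid into a (NUM_GATES, NUM_QUBITS, DEPTH) tensor."""
    tensor = np.zeros((NUM_GATES, NUM_QUBITS, DEPTH))
    for q in range(NUM_QUBITS):
        for t in range(DEPTH):
            tensor[circuit[q, t], q, t] = 1.0
    return tensor

def conv2d_valid(x, kernels):
    """Minimal multi-channel 2D convolution with 'valid' padding."""
    out_c, in_c, kh, kw = kernels.shape
    _, h, w = x.shape
    out = np.zeros((out_c, h - kh + 1, w - kw + 1))
    for o in range(out_c):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernels[o])
    return out

def predict_success_rate(circuit, kernels, weights, bias):
    """Forward pass: conv -> ReLU -> global average pool -> sigmoid."""
    features = np.maximum(conv2d_valid(encode_circuit(circuit), kernels), 0.0)
    pooled = features.mean(axis=(1, 2))      # one scalar per output channel
    logit = pooled @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))      # predicted success rate in (0, 1)

# Untrained parameters, only to show the data flow circuit -> success rate.
kernels = np.random.randn(3, NUM_GATES, 2, 2) * 0.1
weights = np.random.randn(3)
bias = 0.0

circuit = np.random.randint(0, NUM_GATES, size=(NUM_QUBITS, DEPTH))
rate = predict_success_rate(circuit, kernels, weights, bias)
print(f"predicted success rate: {rate:.3f}")
```

In practice the network would be trained on (circuit, process-fidelity) pairs, as the abstract describes, rather than evaluated with random weights.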
Related papers
- Quantum Neural Network for Quantum Neural Computing [0.0]
We propose a new quantum neural network model for quantum neural computing.
Our model circumvents the problem that the state-space size grows exponentially with the number of neurons.
We benchmark our model for handwritten digit recognition and other nonlinear classification tasks.
arXiv Detail & Related papers (2023-05-15T11:16:47Z) - Realization of a quantum neural network using repeat-until-success
circuits in a superconducting quantum processor [0.0]
In this paper, we use repeat-until-success circuits enabled by real-time control-flow feedback to realize quantum neurons with non-linear activation functions.
As an example, we construct a minimal feedforward quantum neural network capable of learning all 2-to-1-bit Boolean functions.
This model is shown to perform non-linear classification and effectively learns from multiple copies of a single training state consisting of the maximal superposition of all inputs.
arXiv Detail & Related papers (2022-12-21T03:26:32Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Quantum activation functions for quantum neural networks [0.0]
We show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information.
Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
arXiv Detail & Related papers (2022-01-10T23:55:49Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantum-tailored machine-learning characterization of a superconducting
qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Quantum neural networks with deep residual learning [29.929891641757273]
In this paper, a novel quantum neural network with deep residual learning (ResQNN) is proposed.
Our ResQNN is able to learn an unknown unitary and get remarkable performance.
arXiv Detail & Related papers (2020-12-14T18:11:07Z) - Quantum Deformed Neural Networks [83.71196337378022]
We develop a new quantum neural network layer designed to run efficiently on a quantum computer.
It can be simulated on a classical computer when restricted in the way it entangles input states.
arXiv Detail & Related papers (2020-10-21T09:46:12Z) - Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z) - Machine learning transfer efficiencies for noisy quantum walks [62.997667081978825]
We show that the process of finding requirements on both a graph type and a quantum system coherence can be automated.
The automation is done by using a convolutional neural network of a particular type that learns to understand with which network and under which coherence requirements quantum advantage is possible.
Our results are of importance for demonstration of advantage in quantum experiments and pave the way towards automating scientific research and discoveries.
arXiv Detail & Related papers (2020-01-15T18:36:53Z) - Quantum implementation of an artificial feed-forward neural network [0.0]
We show an experimental realization of an artificial feed-forward neural network implemented on a state-of-art superconducting quantum processor.
The network is made of quantum artificial neurons, which individually display a potential advantage in storage capacity.
We demonstrate that this network can be equivalently operated either via classical control or in a completely coherent fashion.
arXiv Detail & Related papers (2019-12-28T16:49:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.