Implementing arbitrary quantum operations via quantum walks on a cycle
graph
- URL: http://arxiv.org/abs/2304.05672v2
- Date: Thu, 13 Apr 2023 12:13:23 GMT
- Title: Implementing arbitrary quantum operations via quantum walks on a cycle
graph
- Authors: Jia-Yi Lin, Xin-Yu Li, Yu-Hao Shao, Wei Wang and Shengjun Wu
- Abstract summary: We use a simple discrete-time quantum walk (DTQW) on a cycle graph to model an arbitrary unitary operation.
Our model is essentially a quantum neural network based on DTQW.
Our work shows the capability of the DTQW-based neural network in quantum computation and its potential in laboratory implementations.
- Score: 9.463363607207679
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The quantum circuit model is the most commonly used model for implementing
quantum computers and quantum neural networks whose essential tasks are to
realize certain unitary operations. The circuit model usually implements a
desired unitary operation by a sequence of single-qubit and two-qubit unitary
gates from a universal set. Although this facilitates experimental work, since
only a few kinds of universal gates need to be prepared, the number of gates
required to implement an arbitrary desired unitary operation is usually large.
Hence the efficiency in terms of the
circuit depth or running time is not guaranteed. Here we propose an alternative
approach; we use a simple discrete-time quantum walk (DTQW) on a cycle graph to
model an arbitrary unitary operation without the need to decompose it into a
sequence of gates of smaller sizes. Our model is essentially a quantum neural
network based on DTQW. Firstly, it is universal as we show that any unitary
operation can be realized via an appropriate choice of coin operators.
Secondly, our DTQW-based neural network can be updated efficiently via a
learning algorithm, i.e., a modified stochastic gradient descent algorithm
adapted to our network. By training this network, one can find good
approximations to arbitrary desired unitary operations. With an additional
measurement on the output, the DTQW-based neural network can also implement
general measurements described by positive-operator-valued measures (POVMs). We
show its capacity to implement arbitrary 2-outcome POVM measurements via
numerical simulation. We further demonstrate that the network can be simplified
and can tolerate device noise during training, making it more amenable to
laboratory implementation. Our work shows the capability of the
DTQW-based neural network in quantum computation and its potential in
laboratory implementations.
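The construction described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's actual code: it builds a discrete-time quantum walk on an N-node cycle with a 2-dimensional coin, where each step applies position-dependent coin operators followed by a coin-conditioned shift. The names, the choice of random coins, and the specific step count are our own assumptions; the paper's point is that with an appropriate (trained) choice of coins, the composed walk can realize any target unitary.

```python
import numpy as np

N = 4          # number of cycle nodes (position-space dimension); illustrative
STEPS = 3      # number of walk steps (layers of the "network"); illustrative

def random_u2(rng):
    """Draw a random 2x2 unitary coin (stand-in for a trained coin operator)."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the draw is unitary

def step_unitary(coins):
    """Build one walk step on coin (x) position space.

    coins: list of N 2x2 unitaries, one per cycle node.
    Returns the (2N x 2N) unitary S @ C for basis ordering |i, x> -> i*N + x.
    """
    n = len(coins)
    # Coin layer: diagonal in position, node x gets its own coin C_x.
    C = np.zeros((2 * n, 2 * n), dtype=complex)
    for x, c in enumerate(coins):
        for i in range(2):
            for j in range(2):
                C[i * n + x, j * n + x] = c[i, j]
    # Shift layer: coin |0> moves the walker clockwise, |1> counter-clockwise.
    S = np.zeros((2 * n, 2 * n), dtype=complex)
    for x in range(n):
        S[0 * n + (x + 1) % n, 0 * n + x] = 1.0
        S[1 * n + (x - 1) % n, 1 * n + x] = 1.0
    return S @ C

rng = np.random.default_rng(0)
U = np.eye(2 * N, dtype=complex)
for _ in range(STEPS):
    U = step_unitary([random_u2(rng) for _ in range(N)]) @ U

# The composed walk is unitary by construction (product of unitaries).
print(np.allclose(U.conj().T @ U, np.eye(2 * N)))  # True
```

Training in the paper's sense would replace the random coins with parameterized ones and update them (via the modified stochastic gradient descent the abstract mentions) to minimize a distance between the composed walk and the target unitary.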
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Challenges and opportunities in the supervised learning of quantum
circuit outputs [0.0]
Deep neural networks have proven capable of predicting some output properties of relevant random quantum circuits.
We investigate if and to what extent neural networks can learn to predict the output expectation values of circuits often employed in variational quantum algorithms.
arXiv Detail & Related papers (2024-02-07T16:10:13Z) - A Quantum Optical Recurrent Neural Network for Online Processing of
Quantum Time Series [0.7087237546722617]
We show that a quantum optical recurrent neural network (QORNN) can enhance the transmission rate of quantum channels.
We also show that our model can counteract similar memory effects if they are unwanted.
We run a small-scale version of this last task on the photonic processor Borealis.
arXiv Detail & Related papers (2023-05-31T19:19:25Z) - Implementing arbitrary quantum operations via quantum walks on a cycle
graph [8.820803742534677]
We use a simple discrete-time quantum walk (DTQW) on a cycle graph to model an arbitrary unitary operation $U(N)$.
Our model is essentially a quantum neural network based on DTQW.
arXiv Detail & Related papers (2022-10-26T03:51:46Z) - Quantum neural networks [0.0]
This thesis combines two of the most exciting research areas of the last decades: quantum computing and machine learning.
We introduce dissipative quantum neural networks (DQNNs), which are capable of universal quantum computation and have low memory requirements while training.
arXiv Detail & Related papers (2022-05-17T07:47:00Z) - Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the challenging credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
arXiv Detail & Related papers (2022-04-18T21:45:13Z) - Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling
and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z) - Cluster-Promoting Quantization with Bit-Drop for Minimizing Network
Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using -1, +1 to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - 2D Qubit Placement of Quantum Circuits using LONGPATH [1.6631602844999722]
Two algorithms are proposed to optimize the number of SWAP gates in any arbitrary quantum circuit.
Our approach achieves a significant reduction in the number of SWAP gates in 1D and 2D NTC architectures.
arXiv Detail & Related papers (2020-07-14T04:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.