Quantifying scrambling in quantum neural networks
- URL: http://arxiv.org/abs/2112.01440v2
- Date: Fri, 7 Jan 2022 20:44:30 GMT
- Title: Quantifying scrambling in quantum neural networks
- Authors: Roy J. Garcia, Kaifeng Bu, Arthur Jaffe
- Abstract summary: We characterize a quantum neural network's error in terms of the network's scrambling properties via the out-of-time-ordered correlator.
Our results pave the way for the exploration of quantum chaos in quantum neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We characterize a quantum neural network's error in terms of the network's
scrambling properties via the out-of-time-ordered correlator. A network can be
trained by optimizing either a loss function or a cost function. We show that,
with some probability, both functions can be bounded by out-of-time-ordered
correlators. The gradients of these functions can be bounded by the gradient of
the out-of-time-ordered correlator, demonstrating that the network's scrambling
ability governs its trainability. Our results pave the way for the exploration
of quantum chaos in quantum neural networks.
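To make the central quantity concrete: one standard form of the out-of-time-ordered correlator (OTOC) for operators W, V and a network unitary U is F = (1/d) Tr[W(U)† V† W(U) V], with W(U) = U† W U. The sketch below is not the authors' code; it is a minimal numpy/scipy illustration in which a Haar-random unitary stands in for the trained network and W, V are single-qubit Paulis chosen for concreteness.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# estimate the OTOC F = (1/d) Tr[ W(U)^dagger V^dagger W(U) V ], W(U) = U^dagger W U,
# for a Haar-random "network" unitary U on n qubits.
import numpy as np
from scipy.stats import unitary_group

n = 4                       # number of qubits
d = 2 ** n                  # Hilbert-space dimension

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_on(op, site, n):
    """Embed a single-qubit operator on `site` of an n-qubit register."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

W = pauli_on(Z, 0, n)           # probe operator on qubit 0
V = pauli_on(X, n - 1, n)       # perturbation on the last qubit

U = unitary_group.rvs(d)        # Haar-random stand-in for the quantum network
W_U = U.conj().T @ W @ U        # Heisenberg-evolved operator W(U)

otoc = np.trace(W_U.conj().T @ V.conj().T @ W_U @ V).real / d
print(f"OTOC F = {otoc:.4f}")
```

Interpreted as in the scrambling literature, F stays near 1 when U barely spreads W and decays toward zero for strongly scrambling circuits; this is the quantity the paper's bounds relate to the loss, the cost, and their gradients.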
Related papers
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
The Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet) are developed.
LQNet and CTRQNet achieve accuracy increases of up to 40% on CIFAR-10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- On the explainability of quantum neural networks based on variational quantum circuits [0.0]
Ridge functions are used to describe and study lower bounds on the approximation achieved by these networks.
We show that quantum neural networks based on variational quantum circuits can be written as a linear combination of ridge functions; a toy illustration of such a decomposition is sketched after this list.
arXiv Detail & Related papers (2023-01-12T18:46:28Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn the local message passing among nodes with a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Rapid training of quantum recurrent neural network [26.087244189340858]
We propose a Quantum Recurrent Neural Network (QRNN) to address these obstacles.
The design of the network is based on the continuous-variable quantum computing paradigm.
Our numerical simulations show that the QRNN converges to optimal weights in fewer epochs than the classical network.
arXiv Detail & Related papers (2022-07-01T12:29:33Z)
- Analytic theory for the dynamics of wide quantum neural networks [7.636414695095235]
We study the dynamics of gradient descent for the training error of a class of variational quantum machine learning models.
For random quantum circuits, we predict and characterize an exponential decay of the residual training error as a function of the parameters of the system.
arXiv Detail & Related papers (2022-03-30T23:24:06Z)
- Quantum activation functions for quantum neural networks [0.0]
We show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information.
Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
arXiv Detail & Related papers (2022-01-10T23:55:49Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Quantum neural networks with multi-qubit potentials [0.0]
We show that the presence of multi-qubit potentials in the quantum perceptrons enables more efficient information processing tasks.
This simplification in the network architecture paves the way to address the connectivity challenge to scale up a quantum neural network.
arXiv Detail & Related papers (2021-05-06T15:30:06Z)
- The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z)
- Machine learning transfer efficiencies for noisy quantum walks [62.997667081978825]
We show that the process of finding the requirements on both the graph type and the quantum-system coherence can be automated.
The automation uses a convolutional neural network that learns for which graphs and under which coherence requirements a quantum advantage is possible.
Our results are of importance for demonstration of advantage in quantum experiments and pave the way towards automating scientific research and discoveries.
arXiv Detail & Related papers (2020-01-15T18:36:53Z)
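As referenced in the explainability entry above: a ridge function is a map x ↦ g(w·x), and the cited result states that the output of a variational-circuit QNN can be expressed as a linear combination of such functions. The sketch below is only a toy illustration of what such a decomposition looks like; the trigonometric profiles, directions w_k, coefficients c_k, and phases are all hypothetical and not taken from the paper.

```python
# Toy sketch (hypothetical parameters): a scalar model written as a linear
# combination of ridge functions f(x) = sum_k c_k * g_k(w_k . x).
# Trigonometric profiles are chosen because angle-encoded circuits naturally
# produce sums of cosines of linear functions of the inputs.
import numpy as np

rng = np.random.default_rng(0)
dim, K = 3, 5                         # input dimension, number of ridge terms
Wd = rng.normal(size=(K, dim))        # ridge directions w_k (hypothetical)
c = rng.normal(size=K)                # mixing coefficients c_k (hypothetical)
phi = rng.uniform(0, 2 * np.pi, K)    # phases (hypothetical)

def qnn_like_output(x):
    """f(x) = sum_k c_k * cos(w_k . x + phi_k), a linear combination of ridge functions."""
    return float(np.sum(c * np.cos(Wd @ x + phi)))

print(qnn_like_output(np.array([0.1, 0.2, 0.3])))
```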
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.