The exact evaluation of hexagonal spin-networks and topological quantum neural networks
- URL: http://arxiv.org/abs/2310.03632v2
- Date: Fri, 13 Oct 2023 03:14:32 GMT
- Title: The exact evaluation of hexagonal spin-networks and topological quantum neural networks
- Authors: Matteo Lulli, Antonino Marciano and Emanuele Zappala
- Abstract summary: We introduce an algorithm for the evaluation of the physical scalar product between spin-networks.
We investigate the behavior of the evaluations on certain classes of spin-networks under both classical and quantum recoupling.
- Score: 0.5919433278490629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The physical scalar product between spin-networks has been shown to be a
fundamental tool in the theory of topological quantum neural networks (TQNN),
which are quantum neural networks previously introduced by the authors in the
context of quantum machine learning. However, the effective evaluation of the
scalar product remains a bottleneck for the applicability of the theory. We
introduce an algorithm for the evaluation of the physical scalar product
defined by Noui and Perez between spin-networks of hexagonal shape. By means
of recoupling theory and the properties of Haar integration we obtain an
efficient algorithm, and provide several proofs regarding the main steps. We
investigate the behavior of the TQNN evaluations on certain classes of
spin-networks under both classical and quantum recoupling. All results can be
independently reproduced through the "idea.deploy" framework:
https://github.com/lullimat/idea.deploy
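To make the recoupling ingredients concrete, here is a minimal sketch, in the classical (q = 1) Kauffman-Lins conventions, of two elementary evaluations that recoupling-based algorithms compose: the loop value $\Delta_n$ and the closed-form theta network. This illustrates the underlying calculus only; it is not the paper's hexagonal algorithm, whose steps are substantially more involved.

```python
# Classical (q = 1) recoupling building blocks, Kauffman-Lins conventions.
from math import factorial

def admissible(a: int, b: int, c: int) -> bool:
    """Colors (a, b, c) can meet at a trivalent vertex: parity + triangle."""
    return (a + b + c) % 2 == 0 and abs(a - b) <= c <= a + b

def loop_value(n: int) -> int:
    """Value of a single loop colored n (the quantum dimension at q = 1)."""
    return (-1) ** n * (n + 1)

def theta(a: int, b: int, c: int) -> float:
    """Closed-form evaluation of the theta network at q = 1."""
    if not admissible(a, b, c):
        return 0.0
    m, n, p = (a + b - c) // 2, (b + c - a) // 2, (c + a - b) // 2
    num = factorial(m + n + p + 1) * factorial(m) * factorial(n) * factorial(p)
    den = factorial(m + n) * factorial(n + p) * factorial(p + m)
    return (-1) ** (m + n + p) * num / den

if __name__ == "__main__":
    print(loop_value(1))   # -2
    print(theta(1, 1, 0))  # -2, since theta(a, a, 0) equals the loop value
    print(theta(1, 1, 2))  # 3
```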
Related papers
- Neural network representation of quantum systems [0.0]
We provide a novel map with which a wide class of quantum mechanical systems can be cast into the form of a neural network.
Our findings bring machine learning closer to the quantum world.
arXiv Detail & Related papers (2024-03-18T02:20:22Z)
- Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels [20.53302002578558]
A quantum neural network (QNN) is a parameterized mapping efficiently implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers.
Despite the existing empirical and theoretical investigations, the convergence of QNN training is not fully understood.
arXiv Detail & Related papers (2023-03-26T22:58:06Z)
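As a concrete picture of "a parameterized mapping", the following numpy sketch builds a single-qubit circuit from RY/RZ rotations and reads out a Pauli-Z expectation. The gate layout and parameter packing are illustrative assumptions, not the ansatz analyzed in the paper.

```python
# A QNN as a parameterized mapping: U(thetas)|0>, read out as <Z>.
import numpy as np

def ry(t):  # rotation about the y-axis
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):  # rotation about the z-axis
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def qnn(thetas: np.ndarray) -> float:
    """Map parameters to <Z> for the state U(thetas)|0>."""
    state = np.array([1.0, 0.0], dtype=complex)
    for a, b in thetas.reshape(-1, 2):   # alternate RY and RZ rotations
        state = rz(b) @ ry(a) @ state
    z = np.array([[1, 0], [0, -1]], dtype=complex)
    return float(np.real(state.conj() @ z @ state))

print(qnn(np.array([0.3, 0.7, 1.1, -0.4])))
```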
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
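The power-series viewpoint can be checked numerically on a toy model: below, a one-hidden-layer tanh network (an illustrative stand-in) is compared with its first-order expansion at the initial weights, and the truncation error decays quadratically in the perturbation size. The paper constructs an exact series, which this sketch does not reproduce.

```python
# First-order Taylor behavior of a tiny network around its initial weights.
import numpy as np

rng = np.random.default_rng(0)
W0 = rng.normal(size=(8, 3))          # initial hidden weights
v = rng.normal(size=8)                # output weights
x = rng.normal(size=3)                # a fixed input

def f(W):
    return v @ np.tanh(W @ x)

def grad_f(W):                        # gradient of f with respect to W
    return np.outer(v * (1 - np.tanh(W @ x) ** 2), x)

for eps in (1e-1, 1e-2, 1e-3):
    dW = eps * rng.normal(size=W0.shape)
    linear = f(W0) + np.sum(grad_f(W0) * dW)
    print(eps, abs(f(W0 + dW) - linear))   # error shrinks like eps**2
```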
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn local message passing among nodes through a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparse constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
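The sparse-constraint idea can be illustrated classically: keep only the k strongest connections per node before a message-passing step. The numpy sketch below is a classical analogue for illustration only; QuanGCN itself operates with crossing-gate quantum operations.

```python
# Top-k sparsification of a weighted adjacency, then one aggregation step.
import numpy as np

def sparsify_topk(adj: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k largest-weight edges of each node."""
    out = np.zeros_like(adj)
    for i, row in enumerate(adj):
        keep = np.argsort(row)[-k:]
        out[i, keep] = row[keep]
    return out

def propagate(adj, features):
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    return (adj / deg) @ features     # mean aggregation over kept neighbors

rng = np.random.default_rng(1)
A = rng.random((5, 5)); X = rng.random((5, 2))
print(propagate(sparsify_topk(A, k=2), X))
```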
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that it can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
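The box over-approximation is easiest to see for a single explicit ReLU layer: split the weights into positive and negative parts and propagate lower/upper bounds. This is standard interval bound propagation, shown here as a simplified stand-in for the paper's embedded-network construction for implicit models.

```python
# l-infinity box bounds through one affine + ReLU layer.
import numpy as np

def relu_layer_box(W, b, lo, hi):
    """Bound relu(W @ x + b) for all x with lo <= x <= hi."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = Wp @ lo + Wn @ hi + b    # smallest possible pre-activation
    out_hi = Wp @ hi + Wn @ lo + b    # largest possible pre-activation
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

W = np.array([[1.0, -2.0], [0.5, 3.0]]); b = np.array([0.1, -0.2])
lo, hi = np.array([-0.1, 0.4]), np.array([0.1, 0.6])
print(relu_layer_box(W, b, lo, hi))
```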
- Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks [33.08636537654596]
We take the binary weights in a neural network as random variables under rounding, and study the distribution propagation over different layers in the neural network.
We propose a quasi neural network, a network with continuous parameters and a smooth activation function, to approximate the distribution propagation.
arXiv Detail & Related papers (2022-06-13T06:11:21Z)
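The "binary weights as random variables under rounding" step can be sketched with unbiased stochastic rounding: a continuous weight $w \in [-1, 1]$ rounds to $+1$ with probability $(1+w)/2$, so a network using the continuous weights matches the binary one in expectation. The paper propagates full distributions layer by layer; the Monte-Carlo check below only illustrates the first moment.

```python
# Unbiased stochastic rounding of weights to {-1, +1}.
import numpy as np

rng = np.random.default_rng(2)
w = rng.uniform(-1, 1, size=16)       # continuous master weights
x = rng.normal(size=16)               # a fixed input

def sample_binary(w):
    """Round each weight to +1 with probability (1 + w) / 2, else -1."""
    return np.where(rng.random(w.shape) < (1 + w) / 2, 1.0, -1.0)

mc = np.mean([sample_binary(w) @ x for _ in range(20000)])
print(mc, w @ x)                      # Monte-Carlo mean ~ quasi-network value
```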
- Power and limitations of single-qubit native quantum neural networks [5.526775342940154]
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
arXiv Detail & Related papers (2022-05-16T17:58:27Z)
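A data re-uploading model, in its simplest form, re-encodes the input in every layer, interleaved with trainable rotations. The single-qubit numpy sketch below uses RY for data encoding and RZ for the trainable parameters; these gate choices are illustrative assumptions, not the paper's construction.

```python
# Single-qubit data re-uploading: (RZ(theta_l) RY(x))^L acting on |0>.
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def reupload_model(x: float, thetas: np.ndarray) -> float:
    """Re-encode x before every trainable rotation; output <Z>."""
    state = np.array([1.0, 0.0], dtype=complex)
    for t in thetas:
        state = rz(t) @ ry(x) @ state   # encode x, then trainable rotation
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

print(reupload_model(0.5, np.array([0.2, -1.3, 0.8])))
```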
- On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z)
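An empirical version of the pruned-network NTK is simple to compute: the kernel entry for two inputs is the inner product of parameter gradients, with pruning implemented as a fixed random mask on those gradients. The finite-width sketch below only hints at the equivalence, which the paper establishes in an appropriate width limit.

```python
# Empirical NTK of a tiny network, with and without a random pruning mask.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(256, 2)) / np.sqrt(2)
v = rng.normal(size=256) / np.sqrt(256)
mask = (rng.random(W.shape) < 0.5).astype(float)  # keep each weight w.p. 1/2

def grad_W(W, x):                    # gradient of v @ tanh(W x) w.r.t. W
    return np.outer(v * (1 - np.tanh(W @ x) ** 2), x)

def ntk(W, x, xp, m=None):
    g1, g2 = grad_W(W, x), grad_W(W, xp)
    if m is not None:                # pruning zeroes the masked gradients
        g1, g2 = g1 * m, g2 * m
    return np.sum(g1 * g2)

x, xp = np.array([1.0, 0.5]), np.array([0.2, -0.7])
print(ntk(W, x, xp), ntk(W, x, xp, mask) / 0.5)  # rescaled pruned ~ full
```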
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
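The linear-systems connection can be made concrete: in the lazy/NTK regime, training reduces to solving $(K + \lambda I)a = y$ for a kernel matrix $K$, which is precisely the type of system a quantum linear-system solver targets, with efficiency governed by the conditioning of $K$. The RBF kernel below is an illustrative stand-in for a network's NTK.

```python
# Kernel-regime training as a regularized linear system (K + lam*I) a = y.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 3)); y = rng.normal(size=20)

def rbf(X):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2)

K = rbf(X)
lam = 1e-3
a = np.linalg.solve(K + lam * np.eye(len(y)), y)   # classical solve
print(np.allclose(K @ a + lam * a, y))             # True: system satisfied
```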
- Generalization Study of Quantum Neural Network [11.502747203515954]
Generalization is an important feature of neural networks, and it has been studied extensively.
We study a class of quantum neural networks constructed from quantum gates.
Our model generalizes better than a classical neural network with the same structure.
arXiv Detail & Related papers (2020-06-02T06:10:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.