Deep Neural Networks as the Semi-classical Limit of Quantum Neural
Networks
- URL: http://arxiv.org/abs/2007.00142v2
- Date: Thu, 12 Aug 2021 16:42:47 GMT
- Title: Deep Neural Networks as the Semi-classical Limit of Quantum Neural
Networks
- Authors: Antonino Marciano, Deen Chen, Filippo Fabrocini*, Chris Fields, Enrico
Greco*, Niels Gresnigt, Krid Jinklub, Matteo Lulli, Kostas Terzidis, and
Emanuele Zappala
- Abstract summary: Quantum Neural Networks (QNN) can be mapped onto spin networks.
Deep Neural Networks (DNN) are a subcase of QNN.
A number of Machine Learning (ML) key concepts can be rephrased using the terminology of Topological Quantum Field Theories (TQFT).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our work intends to show that: (1) Quantum Neural Networks (QNN) can be
mapped onto spin networks, with the consequence that the analysis of their
operation can be carried out on the side of Topological Quantum Field Theories
(TQFT); (2) Deep Neural Networks (DNN) are a subcase of QNN, in the sense that
they emerge as the semi-classical limit of QNN; (3) a number of Machine Learning
(ML) key concepts can be rephrased using the terminology of TQFT. Our framework
also provides a working hypothesis for understanding the generalization
behavior of DNN, relating it to the topological features of the graph
structures involved.
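Claim (2) is, in spirit, a stationary-phase statement. The schematic below is our reading of the abstract, not the authors' construction: a QNN amplitude, written as a sum over spin-network configurations Γ, reduces as ℏ → 0 to its stationary (classical) configurations, which play the role of the deterministic maps a DNN computes.

```latex
% Schematic only: a QNN amplitude as a sum over spin-network graphs \Gamma.
% As \hbar \to 0, stationary phase keeps only configurations with
% \delta S = 0, i.e. a deterministic (classical) input-output map.
\begin{align}
  Z_{\mathrm{QNN}}
    &= \sum_{\Gamma} \int \mathcal{D}\phi \;
       e^{\frac{i}{\hbar} S[\phi, \Gamma]} \\
  \lim_{\hbar \to 0} Z_{\mathrm{QNN}}
    &\approx \sum_{\phi_{\mathrm{cl}}\,:\;\delta S[\phi_{\mathrm{cl}}] = 0}
       A[\phi_{\mathrm{cl}}]\,
       e^{\frac{i}{\hbar} S[\phi_{\mathrm{cl}}]}
\end{align}
```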
Related papers
- From Graphs to Qubits: A Critical Review of Quantum Graph Neural Networks [56.51893966016221]
Quantum Graph Neural Networks (QGNNs) represent a novel fusion of quantum computing and Graph Neural Networks (GNNs).
This paper critically reviews the state-of-the-art in QGNNs, exploring various architectures.
We discuss their applications across diverse fields such as high-energy physics, molecular chemistry, finance and earth sciences, highlighting the potential for quantum advantage.
arXiv Detail & Related papers (2024-08-12T22:53:14Z)
- Deep Neural Networks via Complex Network Theory: a Perspective [3.1023851130450684]
Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures.
In this work, we extend the existing CNT metrics with measures that sample from the DNNs' training distribution, shifting from a purely topological analysis to one that connects with the interpretability of deep learning.
arXiv Detail & Related papers (2024-04-17T08:42:42Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn the local message passing among nodes with a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
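As an illustration of the message-passing-plus-sparsity idea in the QuanGCN entry above, here is a purely classical stand-in sketch: the crossing-gate quantum operations are replaced by an ordinary adjacency-based aggregation, and the sparsity constraint by an L1 penalty. All names and the two-layer shape are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def quangcn_like_forward(A, X, W1, W2):
    """Classical stand-in for QuanGCN-style message passing: each layer
    aggregates neighbour features through the (possibly sparsified)
    adjacency A, then applies a learned linear map."""
    H = relu(A @ X @ W1)   # first round of message passing
    return A @ H @ W2      # second round, linear read-out

def sparsity_penalty(A, weight=1e-2):
    """L1 surrogate for the sparsity constraint: pushing entries of A
    toward zero prunes noisy node connections."""
    return weight * np.abs(A).sum()

# Toy usage: 5 nodes, 4 input features, 3 output classes.
rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = (A + A.T) / 2        # symmetric "learned" adjacency
X = rng.normal(size=(5, 4))                      # node features
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
logits = quangcn_like_forward(A, X, W1, W2)
loss = (logits ** 2).mean() + sparsity_penalty(A)  # placeholder loss + sparsity
print(logits.shape, round(loss, 3))
```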
- Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation [0.3871780652193725]
We propose using this framework to understand the problem of generalisation in Deep Neural Networks.
A framework of this kind explains the overfitting behavior of Deep Neural Networks during the training step and the corresponding generalisation capabilities.
We apply a novel algorithm we developed, showing that it obtains similar results to standard neural networks, but without the need for training.
arXiv Detail & Related papers (2022-10-25T03:14:59Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the neural tangent kernel (NTK) has been devoted to typical neural network architectures, but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
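To make the finite-width NTK statement in the entry above concrete, here is a minimal sketch for a toy Hadamard-product (polynomial) network, with hand-derived parameter gradients. The architecture f(x) = w3 . ((W1 x) * (W2 x)) and all names are our assumptions, not the paper's setup.

```python
import numpy as np

def poly_net(x, W1, W2, w3):
    """Toy polynomial (Hadamard-product) network:
    f(x) = w3 . ((W1 x) * (W2 x)), with '*' elementwise."""
    return w3 @ ((W1 @ x) * (W2 @ x))

def param_grad(x, W1, W2, w3):
    """Gradient of f w.r.t. all parameters, flattened:
      df/dw3_k    = (W1 x)_k (W2 x)_k
      df/dW1_{kj} = w3_k (W2 x)_k x_j
      df/dW2_{kj} = w3_k (W1 x)_k x_j"""
    u, v = W1 @ x, W2 @ x
    g_w3 = u * v
    g_W1 = np.outer(w3 * v, x)
    g_W2 = np.outer(w3 * u, x)
    return np.concatenate([g_W1.ravel(), g_W2.ravel(), g_w3])

def empirical_ntk(X, W1, W2, w3):
    """K(x, x') = <grad_theta f(x), grad_theta f(x')> at the given
    finite-width parameters."""
    G = np.stack([param_grad(x, W1, W2, w3) for x in X])
    return G @ G.T

rng = np.random.default_rng(1)
d, width, n = 3, 16, 4
W1 = rng.normal(size=(width, d)) / np.sqrt(d)
W2 = rng.normal(size=(width, d)) / np.sqrt(d)
w3 = rng.normal(size=width) / np.sqrt(width)
X = rng.normal(size=(n, d))
print(empirical_ntk(X, W1, W2, w3))   # n x n kernel matrix
```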
- Quantum-inspired Complex Convolutional Neural Networks [17.65730040410185]
We improve quantum-inspired neurons by exploiting complex-valued weights, which have richer representational capacity and better non-linearity.
We develop models of quantum-inspired convolutional neural networks (QICNNs) capable of processing high-dimensional data.
The classification accuracy of the five QICNNs is tested on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2021-10-31T03:10:48Z)
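A generic example of the complex-valued-weights idea from the QICNN entry above: one dense layer with complex weights and a modulus non-linearity. The paper studies five convolutional variants; this minimal sketch only illustrates the common ingredient, and every name here is ours.

```python
import numpy as np

def complex_dense(z, W, b):
    """One quantum-inspired layer: complex weights act on a complex
    input; the modulus non-linearity returns a real activation,
    mimicking a measurement-like magnitude."""
    return np.abs(W @ z + b)

rng = np.random.default_rng(2)
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in)) + 1j * rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out) + 1j * rng.normal(size=d_out)
z = rng.normal(size=d_in) + 1j * rng.normal(size=d_in)
print(complex_dense(z, W, b))   # real-valued output, shape (4,)
```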
- Quantum-enhanced neural networks in the neural tangent kernel framework [0.4394730767364254]
We study a class of quantum-classical hybrid neural networks (qcNN) composed of a quantum data-encoder followed by a classical neural network (cNN).
In the NTK regime, where the number of nodes of the cNN becomes infinitely large, the output of the entire qcNN becomes a nonlinear function of the so-called projected quantum kernel.
arXiv Detail & Related papers (2021-09-08T17:16:23Z)
- The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve quantum speed-ups.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
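The vanishing-gradient bottleneck mentioned above (the barren-plateau effect) is driven by concentration of measure. The toy demo below shows the variance of a single-qubit observable on random states shrinking roughly like 2^-n; it does not implement the paper's tree-tensor or step-controlled circuits.

```python
import numpy as np

def random_state(n_qubits, rng):
    """Sample an (approximately Haar-)random pure state on n qubits by
    normalizing a complex Gaussian vector."""
    dim = 2 ** n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def z_expectation_first_qubit(psi):
    """<Z> on the first qubit, taking it as the most significant bit:
    +1 weight on basis states whose leading bit is 0, -1 otherwise."""
    probs = np.abs(psi) ** 2
    half = len(psi) // 2
    return probs[:half].sum() - probs[half:].sum()

rng = np.random.default_rng(3)
for n in range(2, 11, 2):
    vals = [z_expectation_first_qubit(random_state(n, rng))
            for _ in range(200)]
    print(n, np.var(vals))   # variance shrinks roughly like 2^-n
```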
- Statistical Tests and Confidential Intervals as Thresholds for Quantum Neural Networks [0.0]
We analyze and construct the least square quantum neural network (LS-QNN), the polynomial interpolation quantum neural network (PI-QNN), the polynomial regression quantum neural network (PR-QNN), and the chi-squared quantum neural network ($\chi^2$-QNN).
We use the solutions or tests as thresholds for the corresponding training rules.
arXiv Detail & Related papers (2020-01-30T05:41:04Z)
- Entanglement Classification via Neural Network Quantum States [58.720142291102135]
In this paper we combine machine-learning tools and the theory of quantum entanglement to perform entanglement classification for multipartite qubit systems in pure states.
We use a parameterisation of quantum systems via artificial neural networks in a restricted Boltzmann machine (RBM) architecture, known as Neural Network Quantum States (NNS).
arXiv Detail & Related papers (2019-12-31T07:40:23Z)
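For reference, the NNS parameterisation named in the entry above is conventionally an RBM-style wavefunction ansatz in which the hidden units are traced out analytically. A minimal sketch of the amplitude computation, with toy dimensions of our choosing:

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM wavefunction amplitude for a spin configuration
    s in {-1, +1}^n, with visible bias a, hidden bias b, and coupling
    matrix W (hidden units summed out analytically):
        psi(s) = exp(a . s) * prod_j 2 cosh(b_j + (W s)_j)"""
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

rng = np.random.default_rng(4)
n_visible, n_hidden = 4, 8
a = rng.normal(scale=0.1, size=n_visible) + 1j * rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden) + 1j * rng.normal(scale=0.1, size=n_hidden)
W = (rng.normal(scale=0.1, size=(n_hidden, n_visible))
     + 1j * rng.normal(scale=0.1, size=(n_hidden, n_visible)))
s = rng.choice([-1, 1], size=n_visible)   # one 4-spin configuration
print(rbm_amplitude(s, a, b, W))          # complex amplitude
```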
This list is automatically generated from the titles and abstracts of the papers in this site.