Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation
- URL: http://arxiv.org/abs/2210.13741v2
- Date: Fri, 11 Oct 2024 05:08:39 GMT
- Title: Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation
- Authors: Antonino Marciano, Emanuele Zappala, Tommaso Torda, Matteo Lulli, Stefano Giagu, Chris Fields, Deen Chen, Filippo Fabrocini
- Abstract summary: We propose using this framework to understand the problem of generalisation in Deep Neural Networks.
A framework of this kind explains the overfitting behaviour of Deep Neural Networks during the training step and the corresponding generalisation capabilities.
We apply a novel algorithm we developed, showing that it achieves results comparable to those of standard neural networks, but without the need for training.
- Score: 0.3871780652193725
- Abstract: Deep Neural Networks lack a principled model of their operation. A novel framework for supervised learning based on Topological Quantum Field Theory, which looks particularly well suited for implementation on quantum processors, has recently been explored. We propose using this framework to understand the problem of generalisation in Deep Neural Networks. More specifically, in this approach Deep Neural Networks are viewed as the semi-classical limit of Topological Quantum Neural Networks. A framework of this kind explains the overfitting behaviour of Deep Neural Networks during the training step and the corresponding generalisation capabilities. We explore the paradigmatic case of the perceptron, which we implement as the semi-classical limit of Topological Quantum Neural Networks. We apply a novel algorithm we developed, showing that it achieves results comparable to those of standard neural networks, but without the need for training (optimisation).
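The claim that classification can proceed without optimisation invites a concrete, if loose, analogy. The sketch below is entirely our construction, not the authors' TQNN algorithm: each class is summarised by a fixed unit-norm "class state" (here simply the mean of its exemplars), and a test input is assigned to the class of maximal overlap, with no training loop anywhere.

```python
import numpy as np

# Illustrative stand-in, not the authors' TQNN algorithm: summarise each
# class by a unit-norm "class state" (the mean of its exemplars) and
# classify a new input by maximal overlap. No gradients, no optimisation.

def class_states(X, y):
    """One unit-norm state per label: the normalised class mean."""
    states = {}
    for label in np.unique(y):
        mean = X[y == label].mean(axis=0)
        states[label] = mean / np.linalg.norm(mean)
    return states

def classify(x, states):
    """Assign x to the class whose state has the largest overlap."""
    x = x / np.linalg.norm(x)
    return max(states, key=lambda label: float(x @ states[label]))

rng = np.random.default_rng(0)
X0 = rng.normal(loc=-1.0, size=(50, 4))        # class 0 cluster
X1 = rng.normal(loc=+1.0, size=(50, 4))        # class 1 cluster
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

states = class_states(X, y)                    # built, not trained
print(classify(rng.normal(loc=+1.0, size=4), states))  # expected: 1
```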
Related papers
- A General Approach to Dropout in Quantum Neural Networks [1.5771347525430772]
"Overfitting" is the phenomenon occurring when a given model learns the training data excessively well.
With the advent of Quantum Neural Networks as learning models, overfitting might soon become an issue.
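To make "dropout in a quantum circuit" concrete, here is a classically simulated toy. It is our construction, under the assumption that dropout is realised by randomly skipping parameterised gates; the paper's general approach is richer than this.

```python
import numpy as np

# Toy classical simulation (ours): "gate dropout" in a one-qubit
# variational circuit, where each parameterised RY rotation is skipped
# with probability p_drop, analogous to dropping units in a classical net.

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def run_circuit(thetas, p_drop, rng):
    state = np.array([1.0, 0.0])               # start in |0>
    for theta in thetas:
        if rng.random() >= p_drop:             # keep this gate
            state = ry(theta) @ state
    return abs(state[1]) ** 2                  # P(measure |1>)

rng = np.random.default_rng(1)
thetas = rng.uniform(0.0, np.pi, size=6)
print(run_circuit(thetas, p_drop=0.3, rng=rng))
```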
arXiv Detail & Related papers (2023-10-06T09:39:30Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
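The finite-dimensional shadow of this statement is easy to exhibit: for a linear model, every gradient-descent step updates the predictor by a correction lying in the span of the data, so the training sequence is a sequential update in a (here trivial, linear-kernel) function space. The snippet below is our simplification, not the paper's RKBS construction.

```python
import numpy as np

# Our simplification of the idea, for a linear model: each step
#   w <- w - eta * X^T (X w - y)
# moves the predictor f(x) = <w, x> by a correction in the span of the
# rows of X, a finite-dimensional analogue of sequential learning in a
# function space. The paper's RKBS result covers genuine networks.

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w, eta = np.zeros(3), 0.01
for _ in range(2000):
    w -= eta * X.T @ (X @ w - y)               # gradient-descent step

print(np.round(w, 3))                          # approaches [ 1., -2., 0.5]
```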
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the Neural Tangent Kernel (NTK) has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
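For a concrete feel for the object being computed, here is the empirical NTK of a minimal Hadamard-product (degree-2 polynomial) model. The model and all names are our toy, not the class treated in the paper.

```python
import numpy as np

# Our toy: a tiny Hadamard-product model f(x) = (w1.x) * (w2.x), a
# degree-2 polynomial net. The empirical NTK entry is the inner product
# of the parameter gradients at two inputs:
#   K(x, x') = <grad_theta f(x), grad_theta f(x')>.

def ntk_entry(x, xp, w1, w2):
    g = np.concatenate([(w2 @ x) * x, (w1 @ x) * x])       # grad at x
    gp = np.concatenate([(w2 @ xp) * xp, (w1 @ xp) * xp])  # grad at x'
    return float(g @ gp)

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=3), rng.normal(size=3)
X = rng.normal(size=(4, 3))

K = np.array([[ntk_entry(x, xp, w1, w2) for xp in X] for x in X])
print(np.round(K, 2))    # symmetric positive semi-definite Gram matrix
```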
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
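The phenomenon is easy to probe numerically. The toy below is our probe, not the paper's experiment: it prints the numerical rank of a feature batch after each random ReLU layer so the depth-wise trend can be inspected.

```python
import numpy as np

# Our probe: push a feature batch through random ReLU layers and print
# its numerical rank at each depth. Rank cannot grow through the linear
# map; the paper analyses why and how fast it shrinks with depth.

def numerical_rank(H, tol=1e-6):
    s = np.linalg.svd(H, compute_uv=False)
    return int((s > tol * s[0]).sum())

rng = np.random.default_rng(0)
H = rng.normal(size=(64, 32))                  # 64 inputs, width 32
for layer in range(8):
    W = rng.normal(size=(32, 32)) / np.sqrt(32)
    H = np.maximum(H @ W, 0.0)                 # linear map + ReLU
    print(layer, numerical_rank(H))
```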
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- QDCNN: Quantum Dilated Convolutional Neural Network [1.52292571922932]
We propose a novel hybrid quantum-classical algorithm called quantum dilated convolutional neural networks (QDCNNs).
Our method extends the concept of dilated convolution, which has been widely applied in modern deep learning algorithms, to the context of hybrid neural networks.
The proposed QDCNNs are able to capture larger context during the quantum convolution process while reducing the computational cost.
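For reference, the classical operation being lifted to the quantum setting looks like this (our minimal 1-D version; the paper's QDCNN applies the idea inside a quantum convolution):

```python
import numpy as np

# Classical 1-D dilated convolution (our reference example): the kernel
# samples inputs with gaps of size `dilation`, enlarging the receptive
# field without adding parameters.

def dilated_conv1d(x, kernel, dilation=2):
    k = len(kernel)
    span = (k - 1) * dilation + 1              # receptive field size
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
print(dilated_conv1d(x, np.array([1.0, -1.0]), dilation=3))
```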
arXiv Detail & Related papers (2021-10-29T10:24:34Z)
- The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
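A hedged toy of the "layers as unitary gates" picture, entirely our construction rather than the paper's representation: build a unitary from the Hermitian part of a layer matrix and check that it preserves the norm of the activation "state".

```python
import numpy as np

# Our toy, not the paper's construction: associate a layer matrix W with
# a unitary gate U = exp(i H) generated by its Hermitian part H, so the
# "activation state" keeps unit norm, as in the quantum picture.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
H = (W + W.T) / 2                              # Hermitian generator

vals, vecs = np.linalg.eigh(H)                 # H = V diag(vals) V^T
U = vecs @ np.diag(np.exp(1j * vals)) @ vecs.T # unitary exp(i H)

psi = rng.normal(size=4).astype(complex)
psi /= np.linalg.norm(psi)                     # normalised state
print(np.linalg.norm(U @ psi))                 # 1.0 up to rounding
```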
arXiv Detail & Related papers (2021-03-08T17:24:29Z)
- Mathematical Models of Overparameterized Neural Networks [25.329225766892126]
We will focus on the analysis of two-layer neural networks, and explain the key mathematical models.
We will then discuss challenges in understanding deep neural networks and some current research directions.
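The canonical object in these analyses is the width-m two-layer network; a minimal version (ours, for orientation) follows.

```python
import numpy as np

# A minimal two-layer network f(x) = sum_k a_k * relu(<w_k, x>), with
# width m possibly much larger than the number of training points
# (the overparameterised regime the survey discusses).

def two_layer(x, W, a):
    return a @ np.maximum(W @ x, 0.0)          # a: (m,), W: (m, d)

rng = np.random.default_rng(0)
d, m = 5, 1000                                 # width m >> input dim d
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.normal(size=m) / np.sqrt(m)
print(two_layer(rng.normal(size=d), W, a))
```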
arXiv Detail & Related papers (2020-12-27T17:48:31Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that extend also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
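One heavily simplified reading (ours, with an invented toy constraint): data arrive in time order, and each sample's constraint residual is driven toward zero by a local descent step rather than by a global batch objective.

```python
import numpy as np

# Loose sketch (our reading, heavily simplified): a unit defined by a
# toy constraint g(w, x, y) = w.x - y = 0 on the data interaction,
# enforced over time by descending a penalty on each ordered sample.

def constraint(w, x, y):
    return w @ x - y                           # toy constraint residual

rng = np.random.default_rng(0)
w, eta = np.zeros(3), 0.1
for t in range(500):                           # data presented in time order
    x = rng.normal(size=3)
    y = x @ np.array([2.0, 0.0, -1.0])
    r = constraint(w, x, y)
    w -= eta * r * x                           # gradient of 0.5 * r**2
print(np.round(w, 2))                          # drifts toward [ 2.,  0., -1.]
```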
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Deep Neural Networks as the Semi-classical Limit of Quantum Neural Networks [0.0]
Quantum Neural Networks (QNN) can be mapped onto spin networks.
Deep Neural Networks (DNN) are a subcase of QNN.
A number of Machine Learning (ML) key concepts can be rephrased by using the terminology of Topological Quantum Field Theories (TQFT).
arXiv Detail & Related papers (2020-06-30T22:47:26Z)
- Entanglement Classification via Neural Network Quantum States [58.720142291102135]
In this paper we combine machine-learning tools and the theory of quantum entanglement to perform entanglement classification for multipartite qubit systems in pure states.
We parameterise quantum systems with artificial neural networks in a restricted Boltzmann machine (RBM) architecture, known as Neural Network Quantum States (NNS).
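The ansatz itself is compact enough to write down: the unnormalised amplitude of a spin configuration takes the standard RBM form below (parameters random here, not trained; variable names are ours).

```python
import numpy as np

# Standard RBM wavefunction (NNQS) form: the unnormalised amplitude of a
# spin configuration s in {-1, +1}^n is
#   psi(s) = exp(a . s) * prod_j 2 cosh(b_j + (W s)_j).
# Parameters are random here, not optimised for any Hamiltonian.

def rbm_amplitude(s, a, b, W):
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 8
a = rng.normal(scale=0.1, size=n_visible)      # visible biases
b = rng.normal(scale=0.1, size=n_hidden)       # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))

s = np.array([1, -1, 1, 1])                    # one spin configuration
print(rbm_amplitude(s, a, b, W))
```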
arXiv Detail & Related papers (2019-12-31T07:40:23Z)