Classical Artificial Neural Network Training Using Quantum Walks as a
Search Procedure
- URL: http://arxiv.org/abs/2108.12448v2
- Date: Thu, 2 Sep 2021 23:30:01 GMT
- Title: Classical Artificial Neural Network Training Using Quantum Walks as a
Search Procedure
- Authors: Luciano S. de Souza, Jonathan H. A. de Carvalho, Tiago A. E. Ferreira
- Abstract summary: The goal of the procedure is to apply a quantum walk as a search algorithm in a complete graph to find all synaptic weights of a classical artificial neural network.
Each vertex of this complete graph represents a possible synaptic weight set in the $w$-dimensional search space, where $w$ is the number of weights of the neural network.
The proposed method was applied to the $XOR$ problem as a proof of concept.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a computational procedure that applies a quantum
algorithm to train classical artificial neural networks. The goal of the
procedure is to apply a quantum walk as a search algorithm in a complete graph to
find all synaptic weights of a classical artificial neural network. Each vertex
of this complete graph represents a possible synaptic weight set in the
$w$-dimensional search space, where $w$ is the number of weights of the neural
network. One of the main advantages of the procedure is that the number of
iterations required to obtain the solutions is known \textit{a priori}. Another
advantage is that the proposed method does not stagnate in local minima. Thus, it is
possible to use the quantum walk search procedure as an alternative to the
backpropagation algorithm. The proposed method was applied to the $XOR$ problem
as a proof of concept. To solve this problem, the proposed method
trained a classical artificial neural network with nine weights. However, the
procedure can find solutions for any number of dimensions. The results achieved
demonstrate the viability of the proposal, contributing to machine learning and
quantum computing research.
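On a complete graph, the quantum walk search behaves like Grover amplitude amplification, which is what makes the iteration count predictable. The sketch below is not the authors' implementation: it assumes an illustrative discretization of the nine weights onto the grid {-1, 0, 1}, uses a step-activation 2-2-1 network, and models the walk by the equivalent Grover rotation; names such as `xor_error` are ours.

```python
import itertools
import math

import numpy as np

X_XOR = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y_XOR = np.array([0, 1, 1, 0])

def xor_error(w):
    """Sum of squared errors of a 2-2-1 step-activation network (nine weights)."""
    W1 = np.array([[w[0], w[1]], [w[2], w[3]]])      # hidden-layer weights
    b1 = np.array([w[4], w[5]])                      # hidden-layer biases
    W2 = np.array([w[6], w[7]])                      # output weights
    b2 = w[8]                                        # output bias
    h = (X_XOR @ W1.T + b1 > 0).astype(float)
    out = (h @ W2 + b2 > 0).astype(float)
    return float(np.sum((out - Y_XOR) ** 2))

# Each vertex of the complete graph is one candidate weight vector.
candidates = list(itertools.product([-1.0, 0.0, 1.0], repeat=9))  # N = 3^9
N = len(candidates)
m = sum(1 for w in candidates if xor_error(w) == 0.0)             # marked vertices

# Grover-style amplitude amplification: success probability after t steps is
# sin^2((2t + 1) * theta), so the iteration count is fixed before running.
theta = math.asin(math.sqrt(m / N))
t = math.floor(math.pi / (4 * theta))
p = math.sin((2 * t + 1) * theta) ** 2
print(f"N = {N}, solutions = {m}, iterations = {t}, P(success) = {p:.4f}")
```

Because $m$ and $N$ determine $\theta$, the iteration count $t = \lfloor \pi / 4\theta \rfloor$ is fixed before the search starts, which is the \textit{a priori} property the abstract claims.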
Related papers
- Neural Algorithmic Reasoning with Multiple Correct Solutions [16.045068056647676]
In some applications, it is desirable to recover more than one correct solution.
We demonstrate our method on two classical algorithms: Bellman-Ford (BF) and Depth-First Search (DFS).
This method involves generating appropriate training data as well as sampling and validating solutions from model output.
arXiv Detail & Related papers (2024-09-11T02:29:53Z) - Quantum Neuron Selection: Finding High Performing Subnetworks With
- Quantum Neuron Selection: Finding High Performing Subnetworks With Quantum Algorithms [0.0]
Recently, it has been shown that large, randomly initialized neural networks contain subnetworks that perform as well as fully trained models.
This insight offers a promising avenue for training future neural networks by simply pruning weights from large, random models.
In this paper, we explore how quantum algorithms could be formulated and applied to this neuron selection problem.
arXiv Detail & Related papers (2023-02-12T19:19:48Z) - Accelerating the training of single-layer binary neural networks using
- Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z) - Robust Training and Verification of Implicit Neural Networks: A
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Natural evolutionary strategies applied to quantum-classical hybrid
- Natural evolutionary strategies applied to quantum-classical hybrid neural networks [0.0]
We study an alternative method, Natural Evolutionary Strategies (NES), a family of black-box optimization algorithms.
We apply the NES method to the binary classification task, showing that this method is a viable alternative for training quantum neural networks.
arXiv Detail & Related papers (2022-05-17T02:14:44Z) - Quantum Walk to Train a Classical Artificial Neural Network [0.0]
- Quantum Walk to Train a Classical Artificial Neural Network [0.0]
This work proposes a procedure that uses a quantum walk in a complete graph to train classical artificial neural networks.
The methodology employed to train the neural network adjusts the synaptic weights of the output layer without altering the weights of the hidden layer.
In addition to the computational gain, another advantage of the proposed procedure is that the number of iterations required to obtain the solutions can be known \textit{a priori}.
arXiv Detail & Related papers (2021-09-01T00:36:52Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - On Applying the Lackadaisical Quantum Walk Algorithm to Search for
- On Applying the Lackadaisical Quantum Walk Algorithm to Search for Multiple Solutions on Grids [63.75363908696257]
The lackadaisical quantum walk is an algorithm developed to search graph structures whose vertices have a self-loop of weight $l$.
This paper addresses several issues related to applying the lackadaisical quantum walk to search for multiple solutions on grids successfully.
arXiv Detail & Related papers (2021-06-11T09:43:09Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Neural Thompson Sampling [94.82847209157494]
- Neural Thompson Sampling [94.82847209157494]
We propose a new algorithm, called Neural Thompson Sampling, which adapts deep neural networks for both exploration and exploitation.
At the core of our algorithm is a novel posterior distribution of the reward, where its mean is the neural network approximator, and its variance is built upon the neural tangent features of the corresponding neural network.
arXiv Detail & Related papers (2020-10-02T07:44:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.