Training via quantum superposition circumventing local minima and vanishing gradient of sinusoidal neural network
- URL: http://arxiv.org/abs/2410.22016v1
- Date: Tue, 29 Oct 2024 13:06:46 GMT
- Title: Training via quantum superposition circumventing local minima and vanishing gradient of sinusoidal neural network
- Authors: Zujin Wen, Jin-Long Huang, Oscar Dahlsten
- Abstract summary: We present an algorithm for quantum training of discrete sinusoidal neural networks (DSinNNs).
The quantum training evolves an initially uniform superposition over weight values to one that is guaranteed to peak on the best weights.
We demonstrate the algorithm on toy examples and show that it indeed outperforms gradient descent in optimizing the loss function and outperforms brute force search in the time required.
- Score: 0.6021787236982659
- Abstract: Deep neural networks have been very successful in applications ranging from computer vision and natural language processing to strategy optimization in games. Recently neural networks with sinusoidal activation functions (SinNN) were found to be ideally suited for representing complex natural signals and their fine spatial and temporal details, which makes them effective representations of images, sound, and video, and good solvers of differential equations. However, training SinNN via gradient descent often results in bad local minima, posing a significant challenge when optimizing their weights. Furthermore, when the weights are discretized for better memory and inference efficiency on small devices, we find that a vanishing gradient problem appears on the resulting discrete SinNN (DSinNN). Brute force search provides an alternative way to find the best weights for DSinNN but is intractable for a large number of parameters. We here provide a qualitatively different training method: an algorithm for quantum training of DSinNNs. The quantum training evolves an initially uniform superposition over weight values to one that is guaranteed to peak on the best weights. We demonstrate the algorithm on toy examples and show that it indeed outperforms gradient descent in optimizing the loss function and outperforms brute force search in the time required.
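For intuition only, the following is a minimal sketch of the classical setting described in the abstract: a tiny sinusoidal network whose weights are restricted to a discrete set, evaluated by brute-force enumeration of all weight configurations. The 1-2-1 architecture, the weight set {-1, +1}, and the toy target are illustrative assumptions and are not taken from the paper; the quantum training itself (evolving a uniform superposition over weight values toward the best weights) is not implemented here.

```python
import itertools
import numpy as np

# Illustrative assumptions (not from the paper): a 1-2-1 sinusoidal network,
# weights restricted to the discrete set {-1, +1}, and a toy regression target.
WEIGHT_VALUES = (-1.0, 1.0)
N_WEIGHTS = 4  # two input->hidden weights, two hidden->output weights


def dsinn_forward(x, w):
    """Tiny discrete SinNN: one hidden layer with sin activation, linear output."""
    w1 = np.array(w[:2])               # input -> hidden weights
    w2 = np.array(w[2:])               # hidden -> output weights
    hidden = np.sin(np.outer(x, w1))   # sinusoidal activation
    return hidden @ w2


def loss(w, x, y):
    """Mean squared error of the discrete SinNN on the toy data."""
    return float(np.mean((dsinn_forward(x, w) - y) ** 2))


if __name__ == "__main__":
    x = np.linspace(-np.pi, np.pi, 32)
    y = 2.0 * np.sin(x)  # a target the tiny network can represent exactly

    # Classical baseline: exhaustive search over all |WEIGHT_VALUES|**N_WEIGHTS
    # weight configurations. The cost grows exponentially in the number of
    # weights, which is the regime the quantum superposition training targets.
    best_w, best_loss = None, np.inf
    for w in itertools.product(WEIGHT_VALUES, repeat=N_WEIGHTS):
        current = loss(w, x, y)
        if current < best_loss:
            best_w, best_loss = w, current
    print("best discrete weights:", best_w, "loss:", best_loss)
```

With only four binary weights the search space has 16 configurations, but it doubles with every additional weight, which is why the abstract describes brute force as intractable for larger networks and replaces it with an evolution of a quantum superposition over all weight values.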
Related papers
- Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms [80.37846867546517]
We show how to train eight different neural networks with custom objectives.
We exploit their second-order information via their empirical Fisher and Hessian matrices.
We apply Newton Losses to achieve significant improvements for less-optimized differentiable algorithms.
arXiv Detail & Related papers (2024-10-24T18:02:11Z)
- Training Artificial Neural Networks by Coordinate Search Algorithm [0.20971479389679332]
We propose an efficient version of the gradient-free Coordinate Search (CS) algorithm for training neural networks.
The proposed algorithm can be used with non-differentiable activation functions and tailored to multi-objective/multi-loss problems.
Finding the optimal values for weights of ANNs is a large-scale optimization problem; a generic coordinate-search sketch is given after this list.
arXiv Detail & Related papers (2024-02-20T01:47:25Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Navigating Local Minima in Quantized Spiking Neural Networks [3.1351527202068445]
Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.
These networks face challenges when trained using error backpropagation, due to the absence of gradient signals when applying hard thresholds.
This paper presents a systematic evaluation of a cosine-annealed LR schedule coupled with weight-independent adaptive moment estimation.
arXiv Detail & Related papers (2022-02-15T06:42:25Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to compute the response of systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- GradFreeBits: Gradient Free Bit Allocation for Dynamic Low Precision Neural Networks [4.511923587827301]
Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low resource edge devices.
We propose GradFreeBits: a novel joint optimization scheme for training dynamic QNNs.
Our method achieves performance better than or on par with current state-of-the-art low-precision neural networks on CIFAR-10/100 and ImageNet classification.
arXiv Detail & Related papers (2021-02-18T12:18:09Z)
- SiMaN: Sign-to-Magnitude Network Binarization [165.5630656849309]
We show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s, and 0s otherwise.
We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization.
Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2021-02-16T07:03:51Z)
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
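As referenced in the Coordinate Search entry above, the sketch below illustrates a generic gradient-free coordinate search for a small network: each weight is perturbed in turn and a move is kept only if it lowers the loss. The 1-4-1 sinusoidal architecture, the step schedule, and the toy data are assumptions for illustration and do not reproduce the algorithm from that paper.

```python
import numpy as np


def mse(w, x, y):
    """Loss of a tiny 1-4-1 network with a sinusoidal hidden layer (illustrative)."""
    w1, b1, w2 = w[:4], w[4:8], w[8:12]
    hidden = np.sin(np.outer(x, w1) + b1)
    return float(np.mean((hidden @ w2 - y) ** 2))


def coordinate_search(loss_fn, w0, x, y, step=0.5, n_sweeps=200):
    """Gradient-free coordinate search: try +/- step on one coordinate at a time,
    keep any move that lowers the loss, and halve the step after a sweep with
    no improvement."""
    w, best = w0.copy(), loss_fn(w0, x, y)
    for _ in range(n_sweeps):
        improved = False
        for i in range(w.size):
            for delta in (step, -step):
                candidate = w.copy()
                candidate[i] += delta
                value = loss_fn(candidate, x, y)
                if value < best:
                    w, best, improved = candidate, value, True
        if not improved:
            step *= 0.5
    return w, best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 64)
    y = 2.0 * np.sin(x)
    w0 = rng.normal(scale=0.5, size=12)
    trained_w, final_loss = coordinate_search(mse, w0, x, y)
    print("final loss:", final_loss)
```

Because only loss evaluations are needed, the same loop applies to non-differentiable activations or discretized weights, which is the setting shared by several of the papers listed above.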
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.