Quantum Sinusoidal Neural Networks
- URL: http://arxiv.org/abs/2410.22016v2
- Date: Sun, 29 Jun 2025 10:03:44 GMT
- Title: Quantum Sinusoidal Neural Networks
- Authors: Zujin Wen, Jin-Long Huang, Oscar Dahlsten
- Abstract summary: We design a quantum version of neural networks with sinusoidal activation functions. We compare its performance to the classical case. We build a quantum optimization algorithm around the quantum sine circuit.
- Score: 0.6021787236982659
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We design a quantum version of neural networks with sinusoidal activation functions and compare its performance to the classical case. We create a general quantum sine circuit implementing a discretised sinusoidal activation function. Along the way, we define a classical discrete sinusoidal neural network. We build a quantum optimization algorithm around the quantum sine circuit, combining quantum search and phase estimation. This algorithm is guaranteed to find the weights with global minimum loss on the training data. We give a computational complexity analysis and demonstrate the algorithm in an example. We compare the performance with that of the standard gradient descent training method for classical sinusoidal neural networks. We show that (i) the standard classical training method typically leads to bad local minima in terms of mean squared error on test data and (ii) the weights that perform best on the training data generalise well to the test data. Points (i) and (ii) motivate using the quantum training algorithm, which is guaranteed to find the best weights on the training data.
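As a classical point of reference for the abstract above, here is a minimal sketch of a discrete sinusoidal model trained by exhaustive search over a weight grid, mirroring (by brute force) the global-minimum guarantee that the paper's quantum search-plus-phase-estimation algorithm provides. The model, grid resolution, and data are our own illustrative assumptions, not the paper's construction.

```python
import itertools
import numpy as np

# Hypothetical toy setup: a single-neuron discrete sinusoidal model
#   f(x; w, b) = sin(w * x + b)
# with w, b restricted to a coarse grid, trained by exhaustive search.
# This mirrors classically, by brute force, the global-minimum guarantee
# of the paper's quantum training algorithm; it is NOT the quantum circuit.

def predict(x, w, b):
    return np.sin(w * x + b)

def mse(x, y, w, b):
    return np.mean((predict(x, w, b) - y) ** 2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-np.pi, np.pi, size=32)
y_train = np.sin(1.5 * x_train + 0.25)   # target generated by a sinusoid

# Discretised weight grid (assumed resolution; the paper's discretisation differs).
grid = np.linspace(-2.0, 2.0, 33)

best = min(
    ((w, b) for w, b in itertools.product(grid, grid)),
    key=lambda wb: mse(x_train, y_train, *wb),
)
print("global-minimum grid weights:", best,
      "train MSE:", mse(x_train, y_train, *best))
```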
Related papers
- Quantum-Enhanced Weight Optimization for Neural Networks Using Grover's Algorithm [0.0]
We propose to use quantum computing to optimize the weights of a classical NN.
We design an instance of Grover's quantum search algorithm to accelerate the search for the optimal parameters of an NN.
Our method requires a much smaller number of qubits compared to other QNN approaches.
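For rough intuition about the speedup claimed above: Grover search over all discretised weight configurations needs on the order of the square root of the search-space size in oracle calls. A back-of-the-envelope sketch, with parameter names of our own choosing:

```python
import math

# Rough Grover query-count estimate for searching discretised NN weights.
# Assumptions (ours, for illustration): each of `num_weights` weights is
# encoded with `bits` bits, and exactly `num_marked` configurations meet
# the loss threshold the oracle tests for.
def grover_iterations(num_weights: int, bits: int, num_marked: int = 1) -> int:
    search_space = 2 ** (bits * num_weights)
    return math.floor((math.pi / 4) * math.sqrt(search_space / num_marked))

print(grover_iterations(num_weights=4, bits=4))   # ~ (pi/4) * 2**8 = 201
```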
arXiv Detail & Related papers (2025-04-20T10:59:04Z) - Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms [80.37846867546517]
We show how to train eight different neural networks with custom objectives.
We exploit their second-order information via their empirical Fisher and Hessian matrices.
We apply Newton Losses to achieve significant improvements for less-differentiable algorithms.
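A minimal sketch of the idea as we read it: take a Newton step on the loss with respect to the network outputs, then use the result as a regression target for the network. The toy loss below is our own choice:

```python
import numpy as np

# Newton-Losses-style surrogate (our reading): compute a Newton step on
# the loss with respect to the network *outputs* z, and use the result
# z_star as a regression target. The per-sample loss here is a toy choice.

def loss_grad_hess(z, y):
    # toy per-sample loss: l(z) = (z - y)**4 ; gradient and Hessian wrt z
    g = 4 * (z - y) ** 3
    h = 12 * (z - y) ** 2 + 1e-3          # small damping keeps h invertible
    return g, h

z = np.array([0.9, -0.3, 2.0])            # current network outputs (assumed)
y = np.array([1.0, 0.0, 1.5])             # targets

g, h = loss_grad_hess(z, y)
z_star = z - g / h                         # elementwise Newton step
mse_surrogate = 0.5 * np.mean((z - z_star) ** 2)
print("Newton targets:", z_star, "surrogate MSE:", mse_surrogate)
```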
arXiv Detail & Related papers (2024-10-24T18:02:11Z) - Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
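To make the kernel-based approach concrete, here is an illustrative stand-in: expectation values of circuits with tunable RZ gates are trigonometric in the angles, so we fit kernel ridge regression to a fake trigonometric "property". Kernel, bandwidth, and data sizes are assumptions of ours, not the paper's model:

```python
import numpy as np

# Kernel ridge regression on circuit angles. The "circuit" is a stand-in:
# we fake a trigonometric expectation value in d angles and learn it.
rng = np.random.default_rng(1)
d, n_train = 3, 200

def fake_expectation(theta):               # trigonometric stand-in "property"
    return np.cos(theta).prod(axis=-1)

X = rng.uniform(0, 2 * np.pi, size=(n_train, d))
y = fake_expectation(X)

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

alpha = np.linalg.solve(rbf(X, X) + 1e-6 * np.eye(n_train), y)
X_test = rng.uniform(0, 2 * np.pi, size=(50, d))
pred = rbf(X_test, X) @ alpha
print("mean abs error:", np.abs(pred - fake_expectation(X_test)).mean())
```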
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Training Artificial Neural Networks by Coordinate Search Algorithm [0.20971479389679332]
We propose an efficient version of the gradient-free Coordinate Search (CS) algorithm for training neural networks.
The proposed algorithm can be used with non-differentiable activation functions and tailored to multi-objective/multi-loss problems.
Finding the optimal values for weights of ANNs is a large-scale optimization problem.
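A minimal sketch of gradient-free coordinate search in this spirit, simplified from the paper's scheme: perturb one weight at a time, keep improvements, and shrink the step when a sweep stalls. Only loss values are used, so non-differentiable activations are fine:

```python
import numpy as np

# Gradient-free coordinate search: try +/- step on each coordinate, keep
# improvements, shrink the step when a full sweep finds none. Our
# simplification of the CS scheme, for illustration.
def coordinate_search(loss, w, step=0.5, shrink=0.5, sweeps=50, tol=1e-8):
    best = loss(w)
    for _ in range(sweeps):
        improved = False
        for i in range(w.size):
            for delta in (+step, -step):
                trial = w.copy()
                trial[i] += delta
                val = loss(trial)
                if val < best - tol:
                    w, best, improved = trial, val, True
                    break
        if not improved:
            step *= shrink
    return w, best

# toy non-differentiable objective: sign-activated single neuron
x = np.linspace(-1, 1, 20)
y = np.sign(0.7 * x - 0.2)
loss = lambda w: np.mean((np.sign(w[0] * x + w[1]) - y) ** 2)
print(coordinate_search(loss, np.zeros(2)))
```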
arXiv Detail & Related papers (2024-02-20T01:47:25Z) - Challenges and opportunities in the supervised learning of quantum circuit outputs [0.0]
Deep neural networks have proven capable of predicting some output properties of relevant random quantum circuits.
We investigate if and to what extent neural networks can learn to predict the output expectation values of circuits often employed in variational quantum algorithms.
arXiv Detail & Related papers (2024-02-07T16:10:13Z) - Learning To Optimize Quantum Neural Network Without Gradients [3.9848482919377006]
We introduce a novel meta-optimization algorithm that trains a meta-optimizer network to output parameters for the quantum circuit.
We show that we achieve better-quality minima in fewer circuit evaluations than existing gradient-based algorithms on different datasets.
arXiv Detail & Related papers (2023-04-15T01:09:12Z) - Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
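For intuition, the implicit update solves w_new = w - lr * grad(w_new) rather than evaluating the gradient at the current point. On a stiff quadratic this has a closed form and is unconditionally stable; the toy problem below is ours, not the paper's PINN setup:

```python
import numpy as np

# Implicit gradient step: instead of the explicit update
#     w_new = w - lr * grad(w),
# solve the implicit equation  w_new = w - lr * grad(w_new).
# For the stiff quadratic f(w) = 0.5 * w^T A w this means
# (I + lr*A) w_new = w, solvable in closed form.
A = np.diag([1.0, 100.0])
lr = 0.5                                   # explicit GD diverges here: lr > 2/100
w = np.array([1.0, 1.0])
for _ in range(10):
    w = np.linalg.solve(np.eye(2) + lr * A, w)   # implicit, unconditionally stable
print("after 10 implicit steps:", w)       # decays toward the minimum at 0
```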
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Tensor Networks or Decision Diagrams? Guidelines for Classical Quantum Circuit Simulation [65.93830818469833]
Tensor networks and decision diagrams have been developed independently, with differing perspectives, terminologies, and backgrounds in mind.
We consider how these techniques approach classical quantum circuit simulation, and examine their (dis)similarities with regard to their most applicable abstraction level.
We provide guidelines on when tensor networks and when decision diagrams are better suited for classical quantum circuit simulation.
arXiv Detail & Related papers (2023-02-13T19:00:00Z) - Quantum Methods for Neural Networks and Application to Medical Image Classification [5.817995726696436]
We introduce two new quantum methods for neural networks.
The first is a quantum orthogonal neural network, which is based on a quantum pyramidal circuit.
The second method is quantum-assisted neural networks, where a quantum computer is used to perform inner product estimation.
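A classically simulated sketch of inner-product estimation via the swap test: for unit vectors, the test returns outcome 0 with probability (1 + <x,y>^2)/2, so sampling and inverting recovers |<x,y>| up to shot noise. Vectors and shot count below are arbitrary choices:

```python
import numpy as np

# Swap-test inner-product estimation, simulated classically: the test on
# unit vectors |x>, |y> outputs 0 with probability p = (1 + <x,y>**2) / 2.
# We draw Bernoulli shots from that known probability and invert.
rng = np.random.default_rng(0)

x = np.array([0.6, 0.8])                   # unit vectors
y = np.array([1.0, 0.0])
true_ip = x @ y

shots = 10_000
p = (1 + true_ip ** 2) / 2
zeros = rng.binomial(shots, p)             # simulated swap-test outcomes
est = np.sqrt(max(0.0, 2 * zeros / shots - 1))
print(f"|<x,y>| true={abs(true_ip):.3f}  estimated={est:.3f}")
```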
arXiv Detail & Related papers (2022-12-14T18:17:19Z) - Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z) - Navigating Local Minima in Quantized Spiking Neural Networks [3.1351527202068445]
Spiking and Quantized Neural Networks (NNs) are becoming increasingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.
These networks face challenges when trained using error backpropagation, due to the absence of gradient signals when applying hard thresholds.
This paper presents a systematic evaluation of a cosine-annealed LR schedule coupled with weight-independent adaptive moment estimation.
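The cosine-annealed schedule referred to above has a standard closed form; a small sketch (the paper's exact hyperparameters are not reproduced here):

```python
import math

# Cosine-annealed learning-rate schedule:
#   lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))
def cosine_lr(t, T, lr_max=1e-2, lr_min=1e-5):
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))

T = 100
print([round(cosine_lr(t, T), 5) for t in (0, 25, 50, 75, 100)])
```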
arXiv Detail & Related papers (2022-02-15T06:42:25Z) - Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
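A minimal sketch of residual-based importance sampling for collocation points: sample training points with probability proportional to the current PDE residual magnitude, so effort concentrates where the network is worst. The residual below is a synthetic stand-in:

```python
import numpy as np

# Importance sampling of PINN collocation points: draw a training batch
# with probability proportional to the current residual magnitude. The
# residual function is a synthetic stand-in for an actual PDE residual.
rng = np.random.default_rng(0)

candidates = rng.uniform(0, 1, size=5000)            # candidate collocation points
residual = np.abs(np.sin(8 * np.pi * candidates))    # pretend |PDE residual|

probs = residual / residual.sum()
batch = rng.choice(candidates, size=256, p=probs, replace=True)
print("mean residual, uniform vs importance batch:",
      residual.mean().round(3),
      np.abs(np.sin(8 * np.pi * batch)).mean().round(3))
```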
arXiv Detail & Related papers (2021-04-26T02:45:10Z) - GradFreeBits: Gradient Free Bit Allocation for Dynamic Low Precision Neural Networks [4.511923587827301]
Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low resource edge devices.
We propose GradFreeBits: a novel joint optimization scheme for training dynamic QNNs.
Our method achieves better or on-par performance compared with current state-of-the-art low-precision neural networks on CIFAR-10/100 and ImageNet classification.
arXiv Detail & Related papers (2021-02-18T12:18:09Z) - SiMaN: Sign-to-Magnitude Network Binarization [165.5630656849309]
We show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s, and 0s otherwise.
We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization.
Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet.
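A sketch of the sign-to-magnitude encoding as summarised above: weights with top-fraction magnitudes are encoded as +1 and the rest as 0, with signs kept separately. The 50% keep-ratio is our assumption, not the paper's learned threshold:

```python
import numpy as np

# Sign-to-magnitude encoding sketch: high-magnitude weights become +1,
# the rest 0; the sign is applied separately, so the binary weight is
# sign(w) * code. The keep fraction is an assumption of ours.
rng = np.random.default_rng(0)
w = rng.laplace(scale=0.1, size=8)

keep = 0.5
thresh = np.quantile(np.abs(w), 1 - keep)
code = (np.abs(w) >= thresh).astype(int)   # +1 for high magnitude, else 0
binary_w = np.sign(w) * code
print(np.round(w, 3), "->", binary_w)
```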
arXiv Detail & Related papers (2021-02-16T07:03:51Z) - Learning temporal data with variational quantum recurrent neural network [0.5658123802733283]
We propose a method for learning temporal data using a parametrized quantum circuit.
This work provides a way to exploit complex quantum dynamics for learning temporal data.
arXiv Detail & Related papers (2020-12-21T10:47:28Z) - Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
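As a loose illustration only (our stand-in, not the paper's exact update rule): a discrete step that keeps weights on a low-bit grid by stochastically rounding the SGD move to a neighbouring grid point:

```python
import numpy as np

# Simplified discrete-optimization step in the spirit of SMGD: weights
# live on a fixed low-bit grid; each step moves at most one grid point,
# with probability shaped by the gradient (stochastic rounding of the
# SGD move). This is an illustrative stand-in, not the paper's rule.
rng = np.random.default_rng(0)
grid_step = 2 ** -3                        # 3 fractional bits

def smgd_like_step(w, g, lr=0.05):
    move = -lr * g / grid_step             # desired move, in grid units
    direction = np.sign(move)
    prob = np.clip(np.abs(move), 0, 1)     # chance of taking one grid step
    take = rng.random(w.shape) < prob
    return w + take * direction * grid_step

w = np.round(rng.normal(size=4) / grid_step) * grid_step
g = 2 * w                                  # gradient of ||w||^2
for _ in range(200):
    w = smgd_like_step(w, g)
    g = 2 * w
print("quantized weights after descent:", w)   # shrink toward 0 on the grid
```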
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.