q-RBFNN: A Quantum Calculus-based RBF Neural Network
- URL: http://arxiv.org/abs/2106.01370v1
- Date: Wed, 2 Jun 2021 08:27:12 GMT
- Title: q-RBFNN: A Quantum Calculus-based RBF Neural Network
- Authors: Syed Saiq Hussain, Muhammad Usman, Taha Hasan Masood Siddique, Imran
Naseem, Roberto Togneri, Mohammed Bennamoun
- Abstract summary: A gradient descent-based learning approach for radial basis function neural networks (RBFNN) is proposed.
The proposed method is based on the q-gradient, also known as the Jackson derivative.
The proposed $q$-RBFNN is analyzed for its convergence performance in the context of the least squares algorithm.
- Score: 31.14412266444568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this research, a novel stochastic gradient descent-based learning approach
for radial basis function neural networks (RBFNN) is proposed. The proposed method is
based on the q-gradient, also known as the Jackson derivative. In contrast to the
conventional gradient, which finds the tangent of the function, the q-gradient finds its
secant and takes larger steps towards the optimal solution. The proposed $q$-RBFNN is
analyzed for its convergence performance in the context of the least squares algorithm.
In particular, a closed-form expression of the Wiener solution is obtained, and stability
bounds on the learning rate (step-size) are derived. The analytical results are validated
through computer simulation. Additionally, we propose an adaptive technique for a
time-varying $q$-parameter that improves convergence speed with no trade-off in
steady-state performance.
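For context, the Jackson derivative of a function $f$ is $D_q f(x) = \frac{f(qx) - f(x)}{(q - 1)x}$ for $x \neq 0$, which reduces to the ordinary derivative as $q \to 1$. The following is a minimal, hedged Python sketch of the general idea rather than the authors' implementation: it trains only the output weights of a Gaussian RBF network with an LMS-style update whose step is scaled by $(q + 1)/2$ (motivated by $D_q x^2 = (q + 1)x$), and it decays $q$ toward 1 over epochs to mimic a time-varying $q$-parameter. The scaling rule, the decay schedule, and all hyperparameters are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of a q-gradient (Jackson-derivative inspired) RBFNN, training only the
# output weights. Centers and widths are fixed; the (q + 1) / 2 step scaling and the
# decay of q toward 1 are illustrative assumptions, not the paper's exact rules.

rng = np.random.default_rng(0)

def rbf_features(x, centers, width):
    """Gaussian RBF activations phi_k(x) = exp(-||x - c_k||^2 / (2 * width^2))."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy 1-D regression target.
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(500)

centers = np.linspace(-3.0, 3.0, 15).reshape(-1, 1)   # fixed RBF centers
width = 0.6
w = np.zeros(centers.shape[0])                         # output-layer weights
mu = 0.05                                              # learning rate (step-size)
q = 3.0                                                # q > 1 gives larger effective steps

for epoch in range(30):
    for xi, di in zip(X, y):
        phi = rbf_features(xi[None, :], centers, width)[0]
        e = di - w @ phi                               # instantaneous error
        # q-scaled LMS-style update: for q = 1 this reduces to ordinary LMS,
        # since D_q x^2 = (q + 1) x recovers the usual derivative 2x.
        w += mu * ((q + 1.0) / 2.0) * e * phi
    # Hypothetical time-varying q: relax toward the conventional gradient (q -> 1)
    # as training proceeds, trading early speed for steady-state behaviour.
    q = 1.0 + (q - 1.0) * 0.9

pred = rbf_features(X, centers, width) @ w
print("final MSE:", float(np.mean((y - pred) ** 2)))
```

With $q = 1$ the update reduces to ordinary LMS, i.e., conventional gradient descent on the instantaneous squared error, so the $q$-scaled step can be read as a knob between that baseline and more aggressive updates.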
Related papers
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametricized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Approximation Results for Gradient Descent trained Neural Networks [0.0]
The networks are fully connected, of constant depth and increasing width.
The continuous kernel error norm implies an approximation under the natural smoothness assumption required for smooth functions.
arXiv Detail & Related papers (2023-09-09T18:47:55Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can be trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs in order to improve the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- A Differentiable Point Process with Its Application to Spiking Neural Networks [13.160616423673373]
Jimenez Rezende & Gerstner (2014) proposed a variational inference algorithm to train SNNs with hidden neurons.
This paper presents an alternative gradient estimator for SNNs based on the path-wise gradient estimator.
arXiv Detail & Related papers (2021-06-02T02:40:17Z)
- Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime [50.510421854168065]
We show that averaged stochastic gradient descent can achieve the minimax optimal convergence rate.
We show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate.
arXiv Detail & Related papers (2020-06-22T14:31:37Z)
- Stable Neural Flows [15.318500611972441]
We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
The learning procedure is cast as an optimal control problem, and an approximate solution is proposed based on adjoint sensitivity analysis.
arXiv Detail & Related papers (2020-03-18T06:27:21Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper, we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive over non-adaptive gradient algorithms in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.