Kernel similarity matching with Hebbian neural networks
- URL: http://arxiv.org/abs/2204.07475v1
- Date: Fri, 15 Apr 2022 14:21:53 GMT
- Title: Kernel similarity matching with Hebbian neural networks
- Authors: Kyle Luther, H. Sebastian Seung
- Abstract summary: Recent works have derived neural networks with online correlation-based learning rules to perform kernel similarity matching.
Our algorithm proceeds by deriving and then minimizing an upper bound for the sum of squared errors between output and input kernel similarities.
In addition to generating high-dimensional linearly separable representations, we show that our upper bound naturally yields representations which are sparse and selective for specific input patterns.
- Score: 4.209801809583906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have derived neural networks with online correlation-based
learning rules to perform kernel similarity matching. These works
applied existing linear similarity matching algorithms to nonlinear features
generated with random Fourier methods. In this paper we attempt to perform kernel
similarity matching by directly learning the nonlinear features. Our algorithm
proceeds by deriving and then minimizing an upper bound for the sum of squared
errors between output and input kernel similarities. The construction of our
upper bound leads to online correlation-based learning rules which can be
implemented with a one-layer recurrent neural network. In addition to generating
high-dimensional linearly separable representations, we show that our upper
bound naturally yields representations which are sparse and selective for
specific input patterns. We compare the approximation quality of our method to
the neural random Fourier method and to variants of the popular but non-biological
Nyström method for approximating the kernel matrix. Our method appears to
be comparable or better than randomly sampled Nyström methods when the
outputs are relatively low dimensional (although still potentially higher
dimensional than the inputs) but less faithful when the outputs are very high
dimensional.
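As a concrete illustration of the objective above, the following minimal numpy sketch fits output features whose linear similarities Y Yᵀ match an input Gaussian kernel by plain full-batch gradient descent. This shows only the loss being matched, not the paper's upper-bound construction or its online Hebbian implementation; all names and hyperparameters are illustrative.

```python
import numpy as np

def gaussian_kernel(X, gamma=1.0):
    """Pairwise Gaussian kernel: K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def similarity_matching_loss(Y, K_in):
    """Sum of squared errors between output similarities Y Y^T and K_in."""
    return np.sum((K_in - Y @ Y.T) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))            # 50 inputs in 5 dimensions
K_in = gaussian_kernel(X)
Y = 0.01 * rng.normal(size=(50, 20))    # 20-dim (higher-dimensional) outputs

# Plain gradient descent on the matching loss; the paper instead minimizes
# an upper bound on this loss with online, correlation-based updates.
for _ in range(500):
    grad = -4.0 * (K_in - Y @ Y.T) @ Y  # gradient of the loss w.r.t. Y
    Y -= 1e-3 * grad

print("final matching loss:", similarity_matching_loss(Y, K_in))
```

The point of the paper is precisely to replace this non-local batch gradient with correlation-based Hebbian updates implementable by a one-layer recurrent network.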
Related papers
- Nonlinear subspace clustering by functional link neural networks [22.49976785146764]
Subspace clustering based on a feed-forward neural network has been demonstrated to provide better clustering accuracy than some advanced subspace clustering algorithms.
We employ a functional link neural network to transform data samples into a nonlinear domain.
We introduce a convex combination subspace clustering scheme, which combines a linear subspace clustering method with the functional link neural network subspace clustering approach.
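As a rough illustration of the transform mentioned above, the sketch below lifts data with a generic functional link expansion, appending fixed random nonlinear features to the input. This is a hedged stand-in, not the paper's exact architecture; the activation and layer sizes are assumptions.

```python
import numpy as np

def functional_link_features(X, n_enhance=64, seed=0):
    """Append fixed random nonlinear 'enhancement' features to the input."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_enhance))   # fixed random weights
    b = rng.uniform(-1.0, 1.0, size=n_enhance)     # fixed random biases
    H = np.tanh(X @ W + b)                         # nonlinear lift
    return np.hstack([X, H])                       # original + lifted

X = np.random.default_rng(1).normal(size=(10, 3))
Z = functional_link_features(X)
print(Z.shape)  # (10, 67): clustering then proceeds in this domain
```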
arXiv Detail & Related papers (2024-02-03T06:01:21Z)
- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalize to a broad space of supervised and unsupervised learning problems.
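For context, the classical Nyström formula that these parametric approaches replace can be sketched as follows; the kernel, landmark count, and number of eigenpairs below are illustrative choices.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Pairwise RBF kernel between the rows of A and B."""
    d2 = (np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(100, 2))        # m = 100 landmark points
K = rbf(landmarks, landmarks)
eigvals, eigvecs = np.linalg.eigh(K)         # ascending eigenvalues
lam = eigvals[::-1][:5]                      # top-5 eigenvalues
U = eigvecs[:, ::-1][:, :5]                  # matching eigenvectors

def nystrom_eigenfunctions(X_new):
    """psi_j(x) ~ (sqrt(m) / lambda_j) * sum_i k(x, x_i) * u_ij."""
    m = landmarks.shape[0]
    return rbf(X_new, landmarks) @ U * (np.sqrt(m) / lam)

print(nystrom_eigenfunctions(rng.normal(size=(3, 2))).shape)  # (3, 5)
```

The scalability issue is visible here: the eigendecomposition costs O(m³) in the number of landmarks, which motivates parametric neural approximations.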
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
- Towards Neural Sparse Linear Solvers [0.0]
We propose neural sparse linear solvers to learn approximate solvers for sparse symmetric linear systems.
Our method relies on representing a sparse symmetric linear system as an undirected weighted graph.
We test sparse linear solvers on static linear analysis problems from structural engineering.
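A minimal sketch of the representation step described above, assuming nodes carry the diagonal entries and undirected edges carry the off-diagonal entries of a sparse symmetric matrix (the paper's exact node and edge features may differ):

```python
import numpy as np
from scipy.sparse import coo_matrix

# A small sparse symmetric system A x = b, viewed as a graph.
A = coo_matrix(np.array([[ 4., -1.,  0.],
                         [-1.,  4., -2.],
                         [ 0., -2.,  4.]]))

# Undirected edges from the upper triangle; weights are matrix entries.
edges = [(i, j, w) for i, j, w in zip(A.row, A.col, A.data) if i < j]
node_features = A.diagonal()   # one feature per unknown, here the diagonal

print("nodes:", node_features)  # [4. 4. 4.]
print("edges:", edges)          # [(0, 1, -1.0), (1, 2, -2.0)]
```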
arXiv Detail & Related papers (2022-03-14T09:17:02Z)
- Neural Fields as Learnable Kernels for 3D Reconstruction [101.54431372685018]
We present a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points.
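For reference, plain kernel ridge regression, the building block named above, can be sketched as follows; the RBF kernel and toy targets are stand-ins, not the paper's learned neural kernel:

```python
import numpy as np

def rbf(A, B, gamma=2.0):
    d2 = (np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 3))   # e.g., sparse oriented point samples
y = np.sin(3.0 * X[:, 0])              # stand-in for implicit-surface values

lam = 1e-3                              # ridge regularizer
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), y)

X_test = rng.uniform(-1, 1, size=(5, 3))
pred = rbf(X_test, X) @ alpha           # f(x) = sum_i alpha_i k(x, x_i)
print(pred)
```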
arXiv Detail & Related papers (2021-11-26T18:59:04Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
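To make the unfolding idea concrete, here is a generic sketch that unrolls ISTA for sparse coding into fixed "layers"; this illustrates algorithm unfolding in general, not the GDPA-linearized SDR solver of the paper, and in a trained unrolled network the per-layer step sizes and thresholds would be learned:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(A, b, step_sizes, thresholds):
    """Each (step size, threshold) pair plays the role of one layer."""
    x = np.zeros(A.shape[1])
    for eta, t in zip(step_sizes, thresholds):
        x = soft_threshold(x - eta * A.T @ (A @ x - b), t)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 40))
x_true = np.zeros(40)
x_true[[3, 17]] = 1.0
b = A @ x_true

# Ten "layers" with hand-set parameters; training would learn them.
x_hat = unrolled_ista(A, b, step_sizes=[0.01] * 10, thresholds=[0.01] * 10)
print("largest coefficients at:", np.argsort(-np.abs(x_hat))[:2])
```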
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can make two well-separated classes of data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
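A small sketch of this setting, assuming a toy two-class problem: data is lifted through a random two-layer ReLU map with standard Gaussian weights and uniform biases, and a perceptron then checks linear separability of the lifted features.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),   # class -1
               rng.normal(loc=+2.0, size=(50, 2))])  # class +1
y = np.array([-1] * 50 + [+1] * 50)

W = rng.normal(size=(2, 200))        # standard Gaussian weights
b = rng.uniform(-1, 1, size=200)     # uniformly distributed biases
H = np.maximum(X @ W + b, 0.0)       # random two-layer ReLU features

w = np.zeros(200)                    # perceptron on the lifted features
for _ in range(100):
    for h, t in zip(H, y):
        if t * (h @ w) <= 0:
            w += t * h

print("misclassified after lifting:", int(np.sum(np.sign(H @ w) != y)))
```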
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions while achieving comparable error bounds in both theory and practice.
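For orientation, the quantity such feature maps approximate is the empirical NTK, the inner product of parameter gradients; the sketch below computes it exactly for a tiny two-layer ReLU network and is not the paper's feature map construction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, width = 3, 64
W1 = rng.normal(size=(width, d)) / np.sqrt(d)   # first-layer weights
w2 = rng.normal(size=width) / np.sqrt(width)    # second-layer weights

def param_grad(x):
    """Gradient of f(x) = w2 . relu(W1 x) w.r.t. all parameters."""
    pre = W1 @ x
    act = np.maximum(pre, 0.0)
    mask = (pre > 0).astype(float)
    gW1 = np.outer(w2 * mask, x)    # df/dW1
    gw2 = act                       # df/dw2
    return np.concatenate([gW1.ravel(), gw2])

x, xp = rng.normal(size=d), rng.normal(size=d)
print("empirical NTK(x, x'):", param_grad(x) @ param_grad(xp))
```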
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations, as in the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomial transformations to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Efficient Nonlinear RX Anomaly Detectors [7.762712532657168]
We propose two families of techniques to improve the efficiency of the standard kernel Reed-Xiaoli (RX) method for anomaly detection.
We show that the proposed efficient methods have a lower computational cost and perform similarly to (or outperform) the standard kernel RX algorithm.
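As background, the classical (linear) RX detector that the kernel variants generalize scores each pixel by its Mahalanobis distance from the background statistics; a minimal sketch with illustrative data follows:

```python
import numpy as np

def rx_scores(X):
    """X: (n_pixels, n_bands); returns one RX anomaly score per pixel."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(X.shape[1]))
    D = X - mu
    return np.einsum("ij,jk,ik->i", D, cov_inv, D)  # Mahalanobis distances

rng = np.random.default_rng(0)
background = rng.normal(size=(500, 10))        # 500 pixels, 10 bands
anomaly = rng.normal(loc=5.0, size=(1, 10))    # one injected outlier
scores = rx_scores(np.vstack([background, anomaly]))
print("highest-scoring pixel:", np.argmax(scores))  # index 500, the outlier
```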
arXiv Detail & Related papers (2020-12-07T21:57:54Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- A Neural Network Approach for Online Nonlinear Neyman-Pearson Classification [3.6144103736375857]
We propose a novel Neyman-Pearson (NP) classifier that is, for the first time in the literature, both online and nonlinear.
The proposed classifier operates on a binary labeled data stream in an online manner, and maximizes the detection power subject to a user-specified and controllable false positive rate.
Our algorithm is appropriate for large-scale data applications and provides decent false positive rate controllability with real-time processing.
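A heavily hedged, generic sketch of this idea (not the paper's algorithm): an online linear classifier whose threshold is adjusted by a Lagrange-style multiplier to track a target false positive rate on the negative-class stream. Every modeling choice below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lr = 0.05, 0.1          # target false positive rate, learning rate
w, b, lam = np.zeros(2), 0.0, 0.0

for _ in range(5000):
    t = 1 if rng.random() < 0.5 else -1      # label arrives with the stream
    x = rng.normal(loc=1.5 * t, size=2)      # toy streamed sample
    margin = w @ x + b
    if t * margin <= 1:                      # perceptron/hinge-style update
        w += lr * t * x
        b += lr * t
    if t == -1:                              # observe false-positive events
        fp = float(margin > 0)
        lam = max(0.0, lam + lr * (fp - alpha))  # dual ascent on the FPR constraint
        b -= lr * lam * fp                   # raise the decision threshold

print("w:", w, "b:", b, "lambda:", lam)
```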
arXiv Detail & Related papers (2020-06-14T20:00:25Z)