Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters
- URL: http://arxiv.org/abs/2205.00920v1
- Date: Mon, 2 May 2022 14:01:40 GMT
- Title: Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters
- Authors: Ole Christian Eidheim
- Abstract summary: A novel local learning rule is presented that performs online clustering with an upper limit on the number of clusters to be found.
The experimental results demonstrate stability in the learned parameters across a large number of training samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the recent success of artificial neural networks, more biologically
plausible learning methods may be needed to resolve the weaknesses of
backpropagation-trained models such as catastrophic forgetting and adversarial
attacks. A novel local learning rule is presented that performs online
clustering with an upper limit on the number of clusters to be found rather
than a fixed cluster count. Instead of using orthogonal weight or output
activation constraints, activation sparsity is achieved by mutual repulsion of
lateral Gaussian neurons, ensuring that multiple neuron centers cannot occupy
the same location in the input domain. An update method is also presented for
adjusting the widths of the Gaussian neurons in cases where the data samples
can be represented by means and variances. The algorithms were applied to the
MNIST and CIFAR-10 datasets to create filters capturing the input patterns of
pixel patches of various sizes. The experimental results demonstrate stability
in the learned parameters across a large number of training samples.
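The abstract names the ingredients of the learning rule (Gaussian activations, attraction of neuron centers toward inputs, mutual repulsion of lateral neurons, and width adaptation) without giving the updates themselves. The minimal Python sketch below combines those ingredients in one plausible way; the class name, learning rates, winner-based repulsion, and exact update forms are assumptions of this illustration, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    class GaussianNeurons:
        """Toy online clusterer: Gaussian neurons whose centers are
        attracted to inputs and mutually repelled, with widths adapted
        online. Hyperparameters here are illustrative assumptions."""

        def __init__(self, n_neurons, dim, lr=0.05, repel=0.02, width_lr=0.01):
            self.c = rng.normal(size=(n_neurons, dim))  # neuron centers
            self.s = np.ones(n_neurons)                 # neuron widths (sigma)
            self.lr, self.repel, self.width_lr = lr, repel, width_lr

        def activations(self, x):
            d2 = ((self.c - x) ** 2).sum(axis=1)
            return np.exp(-0.5 * d2 / self.s ** 2)

        def update(self, x):
            a = self.activations(x)
            w = int(np.argmax(a))                       # winning neuron
            # Attraction: centers move toward the sample, scaled by activation.
            self.c += self.lr * a[:, None] * (x - self.c)
            # Mutual repulsion: push other centers away from the winner so
            # that no two centers can settle on the same location.
            d = self.c - self.c[w]
            dist = np.linalg.norm(d, axis=1, keepdims=True) + 1e-8
            push = self.repel * a[:, None] * d / dist
            push[w] = 0.0
            self.c += push
            # Width update: track the spread of the samples the winner receives.
            self.s[w] += self.width_lr * (np.linalg.norm(x - self.c[w]) - self.s[w])
            return a

    # Three well-separated 2-D blobs, with a maximum of 8 clusters to find.
    X = np.concatenate([rng.normal(m, 0.3, size=(500, 2)) for m in (-3.0, 0.0, 3.0)])
    rng.shuffle(X)
    model = GaussianNeurons(n_neurons=8, dim=2)
    for x in X:
        model.update(x)

Because surplus neurons are repelled from occupied centers rather than deleted, up to n_neurons clusters can emerge without fixing the count in advance, matching the abstract's upper-limit framing.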
Related papers
- Residual Random Neural Networks [0.0]
A single-layer feedforward neural network with random weights is a recurring motif in the neural networks literature.
We show that one can obtain good classification results even if the number of hidden neurons has the same order of magnitude as the dimensionality of the data samples.
arXiv Detail & Related papers (2024-10-25T22:00:11Z)
- Nonlinear subspace clustering by functional link neural networks [20.972039615938193]
Subspace clustering based on a feed-forward neural network has been demonstrated to provide better clustering accuracy than some advanced subspace clustering algorithms.
We employ a functional link neural network to transform data samples into a nonlinear domain.
We introduce a convex combination subspace clustering scheme, which combines a linear subspace clustering method with the functional link neural network subspace clustering approach.
arXiv Detail & Related papers (2024-02-03T06:01:21Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
arXiv Detail & Related papers (2022-01-23T21:18:17Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can separate two classes of well-separated data with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks based on local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Local Extreme Learning Machines and Domain Decomposition for Solving Linear and Nonlinear Partial Differential Equations [0.0]
We present a neural network-based method for solving linear and nonlinear partial differential equations.
The method combines the ideas of extreme learning machines (ELM), domain decomposition and local neural networks.
We compare the current method with the deep Galerkin method (DGM) and the physics-informed neural network (PINN) in terms of accuracy and computational cost. (A toy sketch of the ELM ingredient appears after this list.)
arXiv Detail & Related papers (2020-12-04T23:19:39Z)
- NN-EVCLUS: Neural Network-based Evidential Clustering [6.713564212269253]
We introduce a neural network-based evidential clustering algorithm called NN-EVCLUS.
It learns a mapping from attribute vectors to mass functions, in such a way that more similar inputs are mapped to output mass functions with a lower degree of conflict.
The network is trained to minimize the discrepancy between dissimilarities and degrees of conflict for all or some object pairs. (A sketch of the conflict measure appears after this list.)
arXiv Detail & Related papers (2020-09-27T09:05:41Z)
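For the local extreme learning machine entry above, the ELM ingredient can be demonstrated in isolation: a fixed random hidden layer supplies differentiable basis functions, and only the linear output weights are fitted by least squares against the differential equation and its boundary conditions. The toy problem, feature count, and sampling ranges below are assumptions of this sketch; it is a generic ELM collocation demo, not the paper's local ELM with domain decomposition.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy problem: u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
    # Exact solution u(x) = sin(pi x) for f(x) = -pi^2 sin(pi x).
    f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)

    m = 60                               # random hidden neurons (fixed)
    w = rng.uniform(-8.0, 8.0, size=m)   # random input weights, never trained
    b = rng.uniform(-8.0, 8.0, size=m)   # random biases, never trained

    def phi(x):                          # hidden activations, shape (len(x), m)
        return np.tanh(np.outer(x, w) + b)

    def phi2(x):                         # analytic second derivative of the features
        t = np.tanh(np.outer(x, w) + b)
        return -2.0 * w ** 2 * t * (1.0 - t ** 2)

    x = np.linspace(0.0, 1.0, 101)       # collocation points
    A = np.vstack([phi2(x), phi(np.array([0.0, 1.0]))])  # PDE rows + boundary rows
    y = np.concatenate([f(x), [0.0, 0.0]])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)  # only output weights are solved for

    u = phi(x) @ c
    print(np.max(np.abs(u - np.sin(np.pi * x))))  # small approximation error

Because the hidden layer is fixed, training reduces to one linear least-squares solve, which is the source of the efficiency the entry above compares against DGM and PINN.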
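For NN-EVCLUS, the quantity matched against dissimilarities is the degree of conflict between the mass functions the network outputs. Assuming focal sets restricted to the K singletons plus the whole frame Omega (a simplifying assumption of this sketch, not necessarily the paper's exact setting), the conflict measure and the training objective reduce to a few lines:

    import numpy as np

    def conflict(m1, m2):
        # m = (m_1, ..., m_K, m_Omega): masses on the K singletons and on the
        # whole frame Omega, nonnegative and summing to 1. Only disjoint
        # focal sets conflict, i.e. pairs of distinct singletons:
        #     kappa = sum_{i != j} m1_i * m2_j
        s1, s2 = m1[:-1], m2[:-1]
        return s1.sum() * s2.sum() - s1 @ s2

    def evclus_loss(masses, dissim):
        # Mean squared discrepancy between pairwise conflict and the
        # normalized (in [0, 1]) dissimilarities the network should match.
        n = len(masses)
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        return sum((conflict(masses[i], masses[j]) - dissim[i, j]) ** 2
                   for i, j in pairs) / len(pairs)

    # Certain-and-identical outputs do not conflict; certain-but-different do.
    a = np.array([1.0, 0.0, 0.0])   # all mass on singleton 1 (K = 2)
    b = np.array([0.0, 1.0, 0.0])   # all mass on singleton 2
    print(conflict(a, a), conflict(a, b))   # -> 0.0 1.0

In NN-EVCLUS this objective is minimized over the network's weights so that similar inputs receive low-conflict mass functions; only the objective itself is sketched here.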