Clustering with Neural Network and Index
- URL: http://arxiv.org/abs/2212.03853v5
- Date: Sat, 29 Jul 2023 09:00:14 GMT
- Title: Clustering with Neural Network and Index
- Authors: Gangli Liu
- Abstract summary: A new model called Clustering with Neural Network and Index (CNNI) is introduced.
CNNI uses a Neural Network to cluster data points, with an internal clustering evaluation index acting as the loss function.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A new model called Clustering with Neural Network and Index (CNNI) is
introduced. CNNI uses a Neural Network to cluster data points. Training of the
Neural Network mimics supervised learning, with an internal clustering
evaluation index acting as the loss function. An experiment is conducted to
test the feasibility of the new model, and its results are compared with those
of other clustering models such as K-means and the Gaussian Mixture Model
(GMM). The results show that CNNI clusters data properly; equipped with
MMJ-SC, CNNI becomes the first parametric (inductive) clustering model that
can handle non-convex (non-flat geometry) data.
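The training loop below is a minimal, hypothetical sketch of this idea, not the authors' code: a small network emits soft cluster assignments, and a differentiable within/between-cluster ratio stands in for the internal evaluation index (the paper's MMJ-SC index is not reproduced here).

```python
# Hypothetical sketch of the CNNI idea: train a network with an internal
# clustering index as the loss. A soft within/between ratio is used as a
# differentiable stand-in for an index such as MMJ-SC.
import torch

torch.manual_seed(0)
X = torch.cat([torch.randn(100, 2) + 3.0, torch.randn(100, 2) - 3.0])  # toy blobs
k = 2

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, k)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(300):
    p = net(X).softmax(dim=1)                   # soft assignments, (n, k)
    size = p.sum(dim=0).unsqueeze(1)            # soft cluster sizes, (k, 1)
    centroids = (p.t() @ X) / size              # soft centroids, (k, 2)
    within = (p * torch.cdist(X, centroids).pow(2)).sum() / X.shape[0]
    between = torch.pdist(centroids).mean()
    loss = within - between                     # lower surrogate index = better
    opt.zero_grad(); loss.backward(); opt.step()

labels = net(X).argmax(dim=1)                   # hard labels via a forward pass
```

Because the network itself is the clustering function, unseen points can be labeled by a forward pass, which is what makes such a model parametric (inductive).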
Related papers
- Subgraph Clustering and Atom Learning for Improved Image Classification [4.499833362998488]
We present the Graph Sub-Graph Network (GSN), a novel hybrid image classification model merging the strengths of Convolutional Neural Networks (CNNs) for feature extraction and Graph Neural Networks (GNNs) for structural modeling.
GSN employs k-means clustering to group graph nodes into clusters, facilitating the creation of subgraphs.
These subgraphs are then utilized to learn representative atoms for dictionary learning, enabling the identification of sparse, class-distinguishable features.
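A minimal sketch of the subgraph-creation step (node features, their dimension, and the cluster count are illustrative assumptions; the CNN feature extractor and atom learning are omitted):

```python
# k-means groups graph nodes by their CNN features; each cluster's node set
# defines one subgraph. Shapes and counts here are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

node_feats = np.random.rand(50, 16)  # 50 nodes, 16-dim features from a CNN
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(node_feats)
subgraphs = [np.flatnonzero(labels == c) for c in range(5)]  # node ids per subgraph
```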
arXiv Detail & Related papers (2024-07-20T06:32:00Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Dink-Net: Neural Clustering on Large Graphs [59.10189693120368]
A deep graph clustering method (Dink-Net) is proposed with the idea of dilation and shrink.
Representations are learned in a self-supervised manner by discriminating whether nodes have been corrupted by augmentations.
The clustering distribution is optimized by minimizing the proposed cluster dilation loss and cluster shrink loss.
Compared to the runner-up, Dink-Net achieves a 9.62% NMI improvement on the ogbn-papers100M dataset with 111 million nodes and 1.6 billion edges.
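A hedged sketch of the two losses as summarized above (function name and shapes are assumptions, not the authors' implementation):

```python
# Shrink pulls embeddings toward their assigned cluster centers; dilation
# pushes the centers themselves apart. Hypothetical, simplified rendering.
import torch

def dink_style_loss(z, centers, assign):
    """z: (n, d) embeddings; centers: (k, d); assign: (n,) hard labels."""
    shrink = (z - centers[assign]).pow(2).sum(dim=1).mean()
    dilation = -torch.pdist(centers).mean()
    return shrink + dilation

loss = dink_style_loss(torch.randn(10, 4), torch.randn(3, 4),
                       torch.randint(0, 3, (10,)))
```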
arXiv Detail & Related papers (2023-05-28T15:33:24Z)
- Spiking neural networks with Hebbian plasticity for unsupervised representation learning [0.0]
We introduce a novel spiking neural network model for learning distributed internal representations from data in an unsupervised procedure.
We incorporate an online correlation-based Hebbian-Bayesian learning and rewiring mechanism, previously shown to perform representation learning, into a spiking neural network.
We show performance close to the non-spiking BCPNN, and competitive with other Hebbian-based spiking networks when trained on MNIST and F-MNIST machine learning benchmarks.
arXiv Detail & Related papers (2023-05-05T22:34:54Z)
- Analyzing Populations of Neural Networks via Dynamical Model Embedding [10.455447557943463]
A core challenge in the interpretation of deep neural networks is identifying commonalities between the underlying algorithms implemented by distinct networks trained for the same task.
Motivated by this problem, we introduce DYNAMO, an algorithm that constructs a low-dimensional manifold in which each point corresponds to a neural network model, and two points are nearby if the corresponding neural networks enact similar high-level computational processes.
DYNAMO takes as input a collection of pre-trained neural networks and outputs a meta-model that emulates the dynamics of the hidden states as well as the outputs of any model in the collection.
arXiv Detail & Related papers (2023-02-27T19:00:05Z)
- Learning Hierarchical Graph Neural Networks for Image Clustering [81.5841862489509]
We propose a hierarchical graph neural network (GNN) model that learns how to cluster a set of images into an unknown number of identities.
Our hierarchical GNN uses a novel approach to merge connected components predicted at each level of the hierarchy to form a new graph at the next level.
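One level of the merge step can be sketched as below; the edge list is faked here, whereas in the paper a GNN predicts which edges to keep:

```python
# Nodes joined by kept edges form connected components; each component
# becomes a single node of the next level's graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

edges = np.array([[0, 1], [1, 2], [3, 4]])  # hypothetical kept edges on 6 nodes
adj = csr_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(6, 6))
n_next, comp = connected_components(adj, directed=False)
print(n_next, comp)  # 3 next-level nodes; comp maps old node -> new node
```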
arXiv Detail & Related papers (2021-07-03T01:28:42Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
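The decoupling can be sketched roughly as below; this simplifies the paper's scheme by letting the local critic predict the labels directly rather than approximate the downstream loss:

```python
# Each layer group is updated from a cheap local critic attached to its
# output, so no full end-to-end backward pass is needed. Simplified sketch.
import torch

x = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))
ce = torch.nn.CrossEntropyLoss()

group1, group2 = torch.nn.Linear(10, 20), torch.nn.Linear(20, 3)
critic1 = torch.nn.Linear(20, 3)  # local critic for group1's output
opt1 = torch.optim.SGD(list(group1.parameters()) + list(critic1.parameters()), lr=0.1)
opt2 = torch.optim.SGD(group2.parameters(), lr=0.1)

h1 = group1(x)
loss1 = ce(critic1(h1), y)          # group1 updated via its local critic only
opt1.zero_grad(); loss1.backward(); opt1.step()

loss2 = ce(group2(h1.detach()), y)  # group2 updated from a detached input
opt2.zero_grad(); loss2.backward(); opt2.step()
```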
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Statistical model-based evaluation of neural networks [74.10854783437351]
We develop an experimental setup for the evaluation of neural networks (NNs).
The setup helps to benchmark a set of NNs vis-a-vis minimum-mean-square-error (MMSE) performance bounds.
This allows us to test the effects of training data size, data dimension, data geometry, noise, and mismatch between training and testing conditions.
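For a linear-Gaussian toy model (an assumption for illustration; the paper's exact setups are not reproduced), the MMSE bound is available in closed form and gives a floor for any learned estimator:

```python
# For y = A x + n with Gaussian x ~ N(0, I) and n ~ N(0, 0.25 I), the MMSE
# estimator is linear: x_hat = A^T (A A^T + 0.25 I)^{-1} y.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 8, 12, 5000
A = rng.normal(size=(m, d))
x = rng.normal(size=(n, d))
y = x @ A.T + 0.5 * rng.normal(size=(n, m))

W = A.T @ np.linalg.inv(A @ A.T + 0.25 * np.eye(m))
mmse = np.mean((x - y @ W.T) ** 2)  # empirical MMSE floor per component
print(f"MMSE bound ~ {mmse:.4f}; a trained NN's test MSE should sit above it")
```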
arXiv Detail & Related papers (2020-11-18T00:33:24Z)
- NN-EVCLUS: Neural Network-based Evidential Clustering [6.713564212269253]
We introduce a neural-network based evidential clustering algorithm, called NN-EVCLUS.
It learns a mapping from attribute vectors to mass functions, in such a way that more similar inputs are mapped to output mass functions with a lower degree of conflict.
The network is trained to minimize the discrepancy between dissimilarities and degrees of conflict for all or some object pairs.
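A hedged two-cluster sketch of this scheme (focal sets {w1}, {w2}, and the full frame; the paper's exact loss and normalization are not reproduced):

```python
# The network maps each input to a mass function; pairwise degrees of
# conflict are fit to normalized pairwise dissimilarities. Hypothetical sizes.
import torch

torch.manual_seed(0)
X = torch.randn(40, 2)
delta = torch.cdist(X, X)
delta = delta / delta.max()  # normalized dissimilarities in [0, 1]

net = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(200):
    m = net(X).softmax(dim=1)  # masses on {w1}, {w2}, and the frame {w1, w2}
    # conflict(i, j) = m_i({w1}) m_j({w2}) + m_i({w2}) m_j({w1})
    conflict = m[:, :1] @ m[:, 1:2].t() + m[:, 1:2] @ m[:, :1].t()
    loss = ((conflict - delta) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```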
arXiv Detail & Related papers (2020-09-27T09:05:41Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Dissimilarity Mixture Autoencoder for Deep Clustering [0.0]
The dissimilarity mixture autoencoder (DMAE) is a neural network model for feature-based clustering.
DMAE can be integrated with deep learning architectures into end-to-end models.
arXiv Detail & Related papers (2020-06-15T07:08:59Z)