Density peak clustering using tensor network
- URL: http://arxiv.org/abs/2302.00192v1
- Date: Wed, 1 Feb 2023 02:53:34 GMT
- Title: Density peak clustering using tensor network
- Authors: Xiao Shi, Yun Shang
- Abstract summary: We propose a density-based clustering algorithm inspired by tensor networks.
We evaluate the performance of our algorithm on six synthetic data sets, four real world data sets, and three commonly used computer vision data sets.
- Score: 5.726033349827603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor networks, which have been traditionally used to simulate many-body
physics, have recently gained significant attention in the field of machine
learning due to their powerful representation capabilities. In this work, we
propose a density-based clustering algorithm inspired by tensor networks. We
encode classical data into tensor network states on an extended Hilbert space
and train the tensor network states to capture the features of the clusters.
Here, we define density and related concepts in terms of fidelity, rather than
using a classical distance measure. We evaluate the performance of our
algorithm on six synthetic data sets, four real world data sets, and three
commonly used computer vision data sets. The results demonstrate that our
method provides state-of-the-art performance on several synthetic data sets and
real world data sets, even when the number of clusters is unknown.
Additionally, our algorithm performs competitively with state-of-the-art
algorithms on the MNIST, USPS, and Fashion-MNIST image data sets. These
findings reveal the great potential of tensor networks for machine learning
applications.
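The abstract describes the method only at a high level: classical data are encoded into tensor network states on an extended Hilbert space, and density is defined via fidelity rather than a classical distance. The sketch below is a minimal, hypothetical NumPy illustration of that idea, assuming a simple untrained product-state (cos/sin) feature map common in tensor-network machine learning and a Rodriguez-Laio style density-peak step with fidelity standing in for distance; the function names (`feature_map`, `fidelity_matrix`, `fidelity_density_peaks`) are illustrative and this is not the authors' trained tensor-network procedure.

```python
import numpy as np

def feature_map(X):
    # Common product-state encoding used in tensor-network ML (an assumption here,
    # not taken from the paper): each feature x in [0, 1] -> [cos(pi*x/2), sin(pi*x/2)].
    return np.stack([np.cos(np.pi * X / 2), np.sin(np.pi * X / 2)], axis=-1)  # (n, d, 2)

def fidelity_matrix(X):
    # Pairwise fidelity |<psi_i|psi_j>|^2 between product-state encodings:
    # the product over features of the local overlaps, squared.
    phi = feature_map(X)                               # (n, d, 2)
    local = np.einsum('idk,jdk->ijd', phi, phi)        # local overlap for each feature
    return np.prod(local, axis=-1) ** 2                # (n, n)

def fidelity_density_peaks(X, n_clusters=2):
    # Rodriguez-Laio style density-peak clustering with fidelity in place of a
    # classical distance; a rough stand-in for the paper's fidelity-based density.
    F = fidelity_matrix(X)
    np.fill_diagonal(F, 0.0)
    rho = F.sum(axis=1)                                # fidelity-based density
    n = len(X)
    delta = np.ones(n)                                 # 1 - fidelity to nearest denser point
    parent = np.full(n, -1)
    order = np.argsort(-rho)                           # points by decreasing density
    for rank, i in enumerate(order[1:], start=1):
        denser = order[:rank]
        j = denser[np.argmax(F[i, denser])]            # most similar denser point
        delta[i] = 1.0 - F[i, j]
        parent[i] = j
    # Cluster centers: high density and low fidelity to any denser point.
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                    # propagate labels from denser neighbors
        if labels[i] == -1:
            labels[i] = labels[parent[i]]
    return labels

# Example: two well-separated blobs in [0, 1]^2.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.25, 0.03, (50, 2)),
               rng.normal(0.75, 0.03, (50, 2))]).clip(0, 1)
print(fidelity_density_peaks(X, n_clusters=2))
```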
Related papers
- LSEnet: Lorentz Structural Entropy Neural Network for Deep Graph Clustering [59.89626219328127]
Graph clustering is a fundamental problem in machine learning.
Deep learning methods have achieved state-of-the-art results in recent years, but they still cannot work without predefined cluster numbers.
We propose to address this problem from a fresh perspective of graph information theory.
arXiv Detail & Related papers (2024-05-20T05:46:41Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world MNIST dataset, and the results verify our analysis and conclusions.
arXiv Detail & Related papers (2021-09-03T11:45:51Z)
- Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
arXiv Detail & Related papers (2021-08-24T05:00:37Z)
- Tensor networks and efficient descriptions of classical data [0.9176056742068814]
We study how the mutual information between a subregion and its complement scales with the subsystem size $L$.
We find that for text, the mutual information scales as a power law $L^{\nu}$ with an exponent close to a volume law.
For images, the scaling is close to an area law, hinting that 2D tensor networks such as PEPS could have adequate expressibility.
arXiv Detail & Related papers (2021-03-11T18:57:16Z)
- Mutual Information Scaling for Tensor Network Machine Learning [0.0]
We show how a related correlation analysis can be applied to tensor network machine learning.
We explore whether classical data possess correlation scaling patterns similar to those found in quantum states.
We characterize the scaling patterns in the MNIST and Tiny Images datasets, and find clear evidence of boundary-law scaling in the latter.
arXiv Detail & Related papers (2021-02-27T02:17:51Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed path through the network, DG-Net aggregates features dynamically at each node, giving the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Locally orderless tensor networks for classifying two- and three-dimensional medical images [0.3867363075280544]
We improve upon the matrix product state (MPS) tensor networks that can operate on one-dimensional vectors.
We treat small image regions as orderless, squeeze their spatial information into feature dimensions and then perform MPS operations on these locally orderless regions.
The architecture of LoTeNet is fixed in all experiments, and we show that it requires fewer computational resources to attain performance on par with or superior to the compared methods.
arXiv Detail & Related papers (2020-09-25T15:05:02Z)
- Interpretable Visualizations with Differentiating Embedding Networks [0.0]
We present a visualization algorithm, called Differentiating Embedding Networks (DEN), based on a novel unsupervised Siamese neural network training regime and loss function.
The Siamese neural network finds differentiating or similar features between specific pairs of samples in a dataset, and uses these features to embed the dataset in a lower dimensional space where it can be visualized.
To interpret DEN, we create an end-to-end parametric clustering algorithm on top of the visualization, and then leverage SHAP scores to determine which features in the sample space are important for understanding the structures shown in the visualization based on the clusters found.
arXiv Detail & Related papers (2020-06-11T17:30:44Z)
- Anomaly Detection with Tensor Networks [2.3895981099137535]
We exploit the memory and computational efficiency of tensor networks to learn a linear transformation over a space with a dimension exponential in the number of original features.
We produce competitive results on image datasets, despite not exploiting the locality of images.
arXiv Detail & Related papers (2020-06-03T20:41:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.