A new distance measurement and its application in K-Means Algorithm
- URL: http://arxiv.org/abs/2206.05215v1
- Date: Fri, 10 Jun 2022 16:26:22 GMT
- Title: A new distance measurement and its application in K-Means Algorithm
- Authors: Yiqun Zhang and Houbiao Li
- Abstract summary: The K-Means clustering algorithm based on Euclidean distance attends only to the linear distance between samples.
We propose a new distance measurement, namely, view-distance, and apply it to the K-Means algorithm.
The experimental results show that, on most datasets, the K-Means algorithm based on view-distance offers a certain degree of improvement in classification accuracy and clustering quality.
- Score: 7.168628921229442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: K-Means clustering algorithm is one of the most commonly used clustering
algorithms because of its simplicity and efficiency. K-Means clustering
algorithm based on Euclidean distance attends only to the linear distance between samples and ignores the overall distribution structure of the dataset (i.e., the fluid structure of the dataset). Since it is difficult for Euclidean distance to describe the internal structure between two data points in a high-dimensional data space, we propose a new distance measurement, namely,
view-distance, and apply it to the K-Means algorithm. On the classical manifold-learning datasets, the S-curve and Swiss roll, this new distance not only clusters the data according to the structure of the data itself, but also yields neat dividing lines between categories. Moreover, we also tested the classification accuracy and clustering quality of the K-Means algorithm based on view-distance on several real-world datasets. The experimental
results show that, on most datasets, the K-Means algorithm based on view-distance offers a certain degree of improvement in classification accuracy and clustering quality.
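The abstract names the view-distance but does not give its formula, so the sketch below only illustrates the surrounding machinery: a K-Means loop whose distance function is a pluggable parameter. The `view_distance` placeholder is a hypothetical stand-in (plain Euclidean here), to be replaced by the paper's actual measure.

```python
import numpy as np

def view_distance(a, b):
    # Hypothetical placeholder: the paper's view-distance formula is not
    # given in the abstract, so plain Euclidean distance stands in.
    return np.linalg.norm(a - b)

def kmeans(X, k, distance=view_distance, n_iter=100, seed=0):
    """K-Means with a pluggable distance function (hedged sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Assignment step: nearest center under the chosen distance.
        labels = np.array(
            [np.argmin([distance(x, c) for c in centers]) for x in X]
        )
        # Update step: standard cluster means; the paper may use a
        # different update suited to its structure-aware distance.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Swapping in a fluid-structure-aware measure then only requires replacing `view_distance`; the clustering loop itself is unchanged.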
Related papers
- K-Means Clustering With Incomplete Data with the Use of Mahalanobis Distances [0.0]
We develop a unified K-means algorithm that incorporates Mahalanobis distances instead of the traditional Euclidean distance.
We demonstrate that our algorithm consistently outperforms standalone imputation followed by K-means.
These results hold across both the IRIS dataset and randomly generated data with elliptical clusters.
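For orientation, here is a minimal sketch of the Mahalanobis distance such a unified algorithm would use in its assignment step; treating the covariance as a per-cluster input is an assumption of this sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

def mahalanobis(x, mu, cov):
    # d(x, mu) = sqrt((x - mu)^T cov^{-1} (x - mu)); solving the linear
    # system avoids explicitly inverting the covariance matrix.
    diff = x - mu
    return np.sqrt(diff @ np.linalg.solve(cov, diff))

# Example: assign a sample to the cluster with the smallest Mahalanobis
# distance, given per-cluster means and covariances.
def assign(x, mus, covs):
    return int(np.argmin([mahalanobis(x, m, c) for m, c in zip(mus, covs)]))
```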
arXiv Detail & Related papers (2024-10-31T00:05:09Z)
- Self-Supervised Graph Embedding Clustering [70.36328717683297]
The K-means one-step dimensionality-reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework.
arXiv Detail & Related papers (2024-09-24T08:59:51Z)
- Fuzzy K-Means Clustering without Cluster Centroids [21.256564324236333]
Fuzzy K-Means clustering is a critical technique in unsupervised data analysis.
This paper proposes a novel Fuzzy K-Means clustering algorithm that entirely eliminates the reliance on cluster centroids.
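As background on what is being removed, here is a minimal sketch of the standard centroid-based fuzzy membership update (classic fuzzy C-means with fuzzifier m); the proposed centroid-free algorithm replaces exactly this kind of step.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-12):
    # Classic fuzzy C-means update that the paper dispenses with:
    # u[i, j] = 1 / sum_k (d_ij / d_ik)^(2 / (m - 1)).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)  # each row sums to 1 across clusters
```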
arXiv Detail & Related papers (2024-04-07T12:25:03Z)
- Distribution-Based Trajectory Clustering [14.781854651899705]
Trajectory clustering enables the discovery of common patterns in trajectory data.
The distance measures employed face two challenges: high computational cost and low fidelity.
We propose to use a recent Isolation Distributional Kernel (IDK) as the main tool to meet both challenges.
arXiv Detail & Related papers (2023-10-08T11:28:34Z)
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To well exploit the complementary information embedded in different views, we leverage the tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
- A density peaks clustering algorithm with sparse search and K-d tree [16.141611031128427]
A density peaks clustering algorithm with sparse search and a K-d tree is developed to reduce the computational cost of density peaks clustering.
Experiments are carried out on datasets with different distribution characteristics, comparing against five other typical clustering algorithms.
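For orientation, here is a minimal sketch of the two quantities at the heart of density peaks clustering, local density rho and distance-to-nearest-denser-point delta, with the rho computation accelerated by a K-d tree range query; the paper's sparse-search strategy is more elaborate than this illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_peaks_scores(X, d_c):
    """Compute rho (local density) and delta (distance to the nearest
    denser point); cluster centers are points where both are large.
    Illustrative sketch, not the paper's method."""
    tree = cKDTree(X)
    # rho_i: number of neighbors within cutoff d_c (K-d tree range query),
    # excluding the point itself.
    rho = np.array([len(tree.query_ball_point(x, d_c)) - 1 for x in X])
    n = len(X)
    delta = np.empty(n)
    for i in range(n):  # O(n^2) here; the paper's sparse search avoids this
        denser = np.where(rho > rho[i])[0]
        if denser.size == 0:
            # The globally densest point gets the maximum distance.
            delta[i] = np.max(np.linalg.norm(X - X[i], axis=1))
        else:
            delta[i] = np.min(np.linalg.norm(X[denser] - X[i], axis=1))
    return rho, delta
```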
arXiv Detail & Related papers (2022-03-02T09:29:40Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies perform better than classification based on observed data alone and maintain high accuracy even when the missing-data ratio increases.
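Below is a hedged sketch of the first strategy's imputation step, filling each missing entry from the k nearest rows before the covariance is estimated; the neighbor metric and the unweighted mean are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def knn_impute(X, k=5):
    """Fill NaNs in each row from the k nearest donor rows (sketch).
    Distances use only the features the incomplete row observed."""
    X = np.asarray(X, dtype=float).copy()
    nan_mask = np.isnan(X)
    for i in np.where(nan_mask.any(axis=1))[0]:
        obs = ~nan_mask[i]                      # features row i observed
        others = np.arange(len(X)) != i
        # Donors must be observed on the features row i is missing.
        donors = np.where(~nan_mask[:, ~obs].any(axis=1) & others)[0]
        if donors.size == 0:
            continue                            # leave the gap as-is
        # Squared distance on jointly observed features (NaNs skipped).
        d = np.nansum((X[donors][:, obs] - X[i, obs]) ** 2, axis=1)
        nearest = donors[np.argsort(d)[:k]]
        X[i, ~obs] = X[nearest][:, ~obs].mean(axis=0)
    return X
```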
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- Robust Trimmed k-means [70.88503833248159]
We propose Robust Trimmed k-means (RTKM) that simultaneously identifies outliers and clusters points.
We show RTKM performs competitively with other methods on single-membership data with outliers and on multi-membership data without outliers.
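For intuition, here is a minimal sketch of the classical trimming idea behind such methods: after assignment, the alpha fraction of points farthest from their centers is excluded from the centroid update. RTKM's simultaneous outlier and cluster identification, and its multi-membership handling, go beyond this illustration.

```python
import numpy as np

def trimmed_kmeans_step(X, centers, alpha=0.1):
    """One assign-trim-update step of classical trimmed k-means
    (illustrative sketch; RTKM itself differs in its formulation)."""
    # Distance of every point to every center, then to its nearest center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    nearest = d[np.arange(len(X)), labels]
    # Trim the alpha fraction of points with the largest distances.
    keep = nearest <= np.quantile(nearest, 1.0 - alpha)
    new_centers = np.array([
        X[keep & (labels == j)].mean(axis=0) if np.any(keep & (labels == j))
        else centers[j]
        for j in range(len(centers))
    ])
    return labels, keep, new_centers
```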
arXiv Detail & Related papers (2021-08-16T15:49:40Z)
- Differentially Private Clustering: Tight Approximation Ratios [57.89473217052714]
We give efficient differentially private algorithms for basic clustering problems.
Our results imply an improved algorithm for the Sample and Aggregate privacy framework.
One of the tools used in our 1-Cluster algorithm can be employed to get a faster quantum algorithm for ClosestPair in a moderate number of dimensions.
arXiv Detail & Related papers (2020-08-18T16:22:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.