DBSCAN of Multi-Slice Clustering for Third-Order Tensors
- URL: http://arxiv.org/abs/2303.07768v3
- Date: Fri, 24 Mar 2023 09:57:25 GMT
- Title: DBSCAN of Multi-Slice Clustering for Third-Order Tensors
- Authors: Dina Faneva Andriantsiory, Joseph Ben Geloun, Mustapha Lebbah
- Abstract summary: We propose an extension algorithm, called MSC-DBSCAN, to extract the different clusters of slices that lie in different subspaces of the data.
Our algorithm uses the same input as the MSC algorithm and, for rank-one tensor data, finds the same solution as MSC.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several methods for triclustering three-dimensional data require the cluster size or the number of clusters in each dimension to be specified. To address this issue, Multi-Slice Clustering (MSC) for third-order tensors finds the signal slices that lie in a low-dimensional subspace of a rank-one tensor dataset and forms a cluster from them based on a similarity threshold. We propose an extension algorithm, called MSC-DBSCAN, that extracts the different clusters of slices lying in different subspaces when the dataset is a sum of r rank-one tensors (r > 1). Our algorithm uses the same input as the MSC algorithm and, for rank-one tensor data, finds the same solution as MSC.
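The abstract gives only the high-level idea. As a minimal, hedged sketch of the slice-clustering step, the snippet below summarizes each mode-1 slice of a tensor by its dominant singular direction (an illustrative stand-in for the covariance-based statistic used by MSC, not the paper's actual feature) and groups the slices with scikit-learn's DBSCAN; the feature choice, `eps`, and `min_samples` values are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_slices(T, eps=0.5, min_samples=3):
    """Sketch: group the mode-1 slices of a 3rd-order tensor T with DBSCAN,
    using each slice's dominant singular direction as a crude per-slice
    feature (an assumption, not the MSC statistic itself)."""
    features = []
    for i in range(T.shape[0]):
        u, s, _ = np.linalg.svd(T[i], full_matrices=False)
        u0 = u[:, 0]
        u0 = u0 * np.sign(u0[np.argmax(np.abs(u0))])  # fix SVD sign ambiguity
        features.append(s[0] * u0)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.vstack(features))
    return labels  # label -1 marks slices treated as noise

# Toy usage: the first 10 slices share a rank-one signal, the rest are noise.
rng = np.random.default_rng(0)
T = rng.normal(scale=0.1, size=(30, 20, 20))
T[:10] += 0.5 * np.outer(rng.normal(size=20), rng.normal(size=20))
print(cluster_slices(T))
```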
Related papers
- Clustering Based on Density Propagation and Subcluster Merging [92.15924057172195]
We propose a density-based node clustering approach that automatically determines the number of clusters and can be applied in both data space and graph space.
Unlike traditional density-based clustering methods, which necessitate calculating the distance between any two nodes, our proposed technique determines density through a propagation process.
arXiv Detail & Related papers (2024-11-04T04:09:36Z)
- Multilayer Graph Approach to Deep Subspace Clustering [0.0]
Deep subspace clustering (DSC) networks based on the self-expressive model learn a representation matrix, often implemented as a fully connected network.
Here, we apply a selected linear subspace clustering algorithm to the representations learned by all layers of the encoder network, including the input data.
We validate the proposed approach on four well-known datasets, with two DSC networks as baseline models.
arXiv Detail & Related papers (2024-01-30T14:09:41Z)
- Parallel Computation of Multi-Slice Clustering of Third-Order Tensors [0.08192907805418585]
We devise parallel algorithms to compute the Multi-Slice Clustering (MSC) for third-order tensors.
We show that our parallel scheme outperforms sequential computation and makes the MSC method scalable.
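The paper's actual parallel scheme is not reproduced here; the snippet below is only a minimal sketch of the general idea, distributing a hypothetical per-slice statistic (here, the top singular value of each mode-1 slice, standing in for the MSC quantities) across worker processes with Python's multiprocessing.

```python
import numpy as np
from multiprocessing import Pool

def slice_score(slice_matrix):
    """Hypothetical per-slice statistic: the top singular value,
    a stand-in for the covariance-based quantity used by MSC."""
    return np.linalg.svd(slice_matrix, compute_uv=False)[0]

def parallel_slice_scores(T, processes=4):
    """Compute the statistic for every mode-1 slice in parallel."""
    slices = [T[i] for i in range(T.shape[0])]
    with Pool(processes=processes) as pool:
        return pool.map(slice_score, slices)

if __name__ == "__main__":
    T = np.random.default_rng(0).normal(size=(40, 50, 50))
    print(parallel_slice_scores(T)[:5])
```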
arXiv Detail & Related papers (2023-09-29T16:38:51Z)
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm that directly detects clusters of data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To fully exploit the complementary information embedded in different views, we leverage tensor Schatten p-norm regularization.
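The abstract does not spell out how the Butterworth filter enters the distance construction; one hedged reading, sketched below, passes pairwise Euclidean distances through a Butterworth-style low-pass response 1 / (1 + (d / d0)^(2n)), so nearby points receive weights near 1 and distant points are suppressed. The cutoff d0 and order n here are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.spatial.distance import cdist

def butterworth_affinity(X, d0=1.0, n=2):
    """Sketch: turn pairwise distances into affinities with a
    Butterworth-style low-pass response (assumed form, not the
    paper's exact construction)."""
    D = cdist(X, X)                      # pairwise Euclidean distances
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

X = np.random.default_rng(0).normal(size=(100, 5))
W = butterworth_affinity(X, d0=np.median(cdist(X, X)), n=2)
print(W.shape, W.min(), W.max())
```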
arXiv Detail & Related papers (2023-05-12T03:01:41Z)
- Multiway clustering of 3-order tensor via affinity matrix [0.0]
We propose a new method of multiway clustering for 3-order tensors via an affinity matrix (MCAM).
Based on a notion of similarity between tensor slices and the spread of information in each slice, our model builds an affinity/similarity matrix to which we apply advanced clustering methods.
MCAM achieves competitive results compared with other known algorithms on synthetic and real datasets.
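As a rough illustration of the affinity-matrix idea (the exact similarity used by MCAM is defined in the paper, not here), the sketch below scores pairs of mode-1 slices by the absolute cosine similarity of their vectorizations and feeds the resulting matrix to scikit-learn's spectral clustering.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def slice_affinity(T):
    """Sketch: affinity between mode-1 slices via absolute cosine
    similarity of the flattened slices (an illustrative choice)."""
    V = T.reshape(T.shape[0], -1)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return np.abs(V @ V.T)

def cluster_slices_spectral(T, n_clusters=2):
    A = slice_affinity(T)
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(A)

T = np.random.default_rng(0).normal(size=(30, 10, 10))
print(cluster_slices_spectral(T, n_clusters=2))
```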
arXiv Detail & Related papers (2023-03-14T10:02:52Z)
- ck-means, a novel unsupervised learning method that combines fuzzy and crispy clustering methods to extract intersecting data [1.827510863075184]
This paper proposes a method to cluster data that share the same intersections between two or more features.
The main idea of this novel method is to generate fuzzy clusters of data using a Fuzzy C-Means (FCM) algorithm.
The algorithm is also able to find the optimal number of clusters for the FCM and the k-means algorithm, according to the consistency of the clusters given by the Silhouette Index (SI).
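The fuzzy (FCM) stage is not reproduced here; the sketch below only illustrates the Silhouette-based selection of the number of clusters for k-means with scikit-learn, which is one ingredient the summary mentions. The candidate range of k is an arbitrary assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k_by_silhouette(X, k_range=range(2, 11)):
    """Pick the k whose k-means partition maximizes the Silhouette Index."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get), scores

# Toy usage: three well-separated Gaussian blobs in 2D.
X = np.vstack([np.random.default_rng(s).normal(loc=3 * s, size=(50, 2))
               for s in range(3)])
k, scores = best_k_by_silhouette(X)
print("best k:", k)
```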
arXiv Detail & Related papers (2022-06-17T19:29:50Z)
- Multi-Slice Clustering for 3-order Tensor Data [0.12891210250935145]
Several methods for triclustering three-dimensional data require the cluster size in each dimension to be specified.
We propose a new method, namely the multi-slice clustering (MSC) for a 3-order tensor data set.
The effectiveness of our algorithm is shown on both synthetic and real-world data sets.
arXiv Detail & Related papers (2021-09-22T15:49:48Z)
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
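The paper's contribution is the differentiable, end-to-end version; as background only, the sketch below shows the classical cross (CUR-style) approximation that the title refers to, reconstructing a low-rank matrix from a few sampled rows and columns.

```python
import numpy as np

def cross_approximation(A, row_idx, col_idx):
    """Classical cross/CUR-style approximation from sampled rows/columns:
    A ≈ C @ pinv(W) @ R, exact when rank(A) <= len(row_idx) and the
    intersection block W is well conditioned."""
    C = A[:, col_idx]                     # sampled columns
    R = A[row_idx, :]                     # sampled rows
    W = A[np.ix_(row_idx, col_idx)]       # intersection block
    return C @ np.linalg.pinv(W) @ R

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 150))   # rank-3 matrix
A_hat = cross_approximation(A, row_idx=[0, 10, 20], col_idx=[5, 50, 100])
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))        # ~0 up to rounding
```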
arXiv Detail & Related papers (2021-05-29T08:39:57Z)
- Overcomplete Deep Subspace Clustering Networks [80.16644725886968]
Experimental results on four benchmark datasets show the effectiveness of the proposed method over DSC and other clustering methods in terms of clustering error.
Our method is also less dependent than DSC on where pre-training is stopped to obtain the best performance, and it is more robust to noise.
arXiv Detail & Related papers (2020-11-16T22:07:18Z)
- LSD-C: Linearly Separable Deep Clusters [145.89790963544314]
We present LSD-C, a novel method to identify clusters in an unlabeled dataset.
Our method draws inspiration from recent semi-supervised learning practice and proposes to combine our clustering algorithm with self-supervised pretraining and strong data augmentation.
We show that our approach significantly outperforms competitors on popular public image benchmarks including CIFAR 10/100, STL 10 and MNIST, as well as the document classification dataset Reuters 10K.
arXiv Detail & Related papers (2020-06-17T17:58:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.