Scalable Multi-view Clustering via Explicit Kernel Features Maps
- URL: http://arxiv.org/abs/2402.04794v1
- Date: Wed, 7 Feb 2024 12:35:31 GMT
- Title: Scalable Multi-view Clustering via Explicit Kernel Features Maps
- Authors: Chakib Fettal, Lazhar Labiod, Mohamed Nadif
- Abstract summary: A growing awareness of multi-view learning is a consequence of the increasing prevalence of multiple views in real-world applications.
An efficient optimization strategy is proposed, leveraging kernel feature maps to reduce the computational burden while maintaining good clustering performance.
We conduct extensive experiments on real-world benchmark networks of various sizes in order to evaluate the performance of our algorithm against state-of-the-art multi-view subspace clustering methods and attributed-network multi-view approaches.
- Score: 20.610589722626074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A growing awareness of multi-view learning as an important component in data
science and machine learning is a consequence of the increasing prevalence of
multiple views in real-world applications, especially in the context of
networks. In this paper we introduce a new scalability framework for multi-view
subspace clustering. An efficient optimization strategy is proposed, leveraging
kernel feature maps to reduce the computational burden while maintaining good
clustering performance. The scalability of the algorithm means that it can be
applied to large-scale datasets, including those with millions of data points,
using a standard machine, in a few minutes. We conduct extensive experiments on
real-world benchmark networks of various sizes in order to evaluate the
performance of our algorithm against state-of-the-art multi-view subspace
clustering methods and attributed-network multi-view approaches.
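As a rough illustration of the core idea, working with explicit, low-dimensional approximations of kernel feature maps so that the per-sample cost stays linear, the sketch below builds a random Fourier feature map (scikit-learn's RBFSampler) for each view, concatenates the per-view maps, and clusters the result with k-means. This is a minimal sketch under stated assumptions, not the paper's algorithm: the RBF approximation, the simple concatenation of views, and all hyperparameters are choices made only for the example.
```python
# Illustrative sketch only: explicit kernel feature maps per view + k-means.
# The approximation (RBFSampler), view fusion (concatenation), and all
# hyperparameters are assumptions for this example, not the paper's method.
import numpy as np
from sklearn.kernel_approximation import RBFSampler  # random Fourier features
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize


def multiview_kernel_map_clustering(views, n_clusters, n_components=256,
                                    gamma=1.0, seed=0):
    """views: list of (n_samples, d_v) arrays, one array per view."""
    maps = []
    for v, X in enumerate(views):
        # Explicit feature map phi_v(X) approximating an RBF kernel on view v;
        # cost is O(n_samples * n_components) instead of O(n_samples^2).
        sampler = RBFSampler(gamma=gamma, n_components=n_components,
                             random_state=seed + v)
        maps.append(normalize(sampler.fit_transform(X)))
    Z = np.hstack(maps)  # joint explicit feature space across views
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=seed).fit_predict(Z)


# Toy usage: two synthetic views of the same 1,000 samples.
rng = np.random.default_rng(0)
views = [rng.normal(size=(1000, 50)), rng.normal(size=(1000, 30))]
labels = multiview_kernel_map_clustering(views, n_clusters=5)
```
Because the map dimension is fixed, memory and runtime grow linearly with the number of samples, which is the property that makes million-point datasets tractable on a standard machine.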
Related papers
- One for all: A novel Dual-space Co-training baseline for Large-scale Multi-View Clustering [42.92751228313385]
We propose a novel multi-view clustering model, named Dual-space Co-training Large-scale Multi-view Clustering (DSCMC).
The main objective of our approach is to enhance the clustering performance by leveraging co-training in two distinct spaces.
Our algorithm has an approximate linear computational complexity, which guarantees its successful application on large-scale datasets.
arXiv Detail & Related papers (2024-01-28T16:30:13Z)
- Efficient and Effective Deep Multi-view Subspace Clustering [9.6753782215283]
We propose a novel deep framework, termed Efficient and Effective deep Multi-View Subspace Clustering (E$^2$MVSC).
Instead of a parameterized FC layer, we design a Relation-Metric Net that decouples network parameter scale from sample numbers for greater computational efficiency.
E$^2$MVSC yields results comparable to existing methods and achieves state-of-the-art performance on various types of multi-view datasets.
arXiv Detail & Related papers (2023-10-15T03:08:25Z)
- Efficient Multi-View Graph Clustering with Local and Global Structure Preservation [59.49018175496533]
We propose a novel anchor-based multi-view graph clustering framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG).
Specifically, EMVGC-LG jointly optimizes anchor construction and graph learning to enhance the clustering quality.
In addition, EMVGC-LG inherits the linear complexity of existing anchor-based multi-view graph clustering (AMVGC) methods with respect to the sample number.
arXiv Detail & Related papers (2023-08-31T12:12:30Z)
- One-step Multi-view Clustering with Diverse Representation [47.41455937479201]
We propose a one-step multi-view clustering method with diverse representation, which incorporates multi-view learning and $k$-means into a unified framework.
We develop an efficient optimization algorithm with proven convergence to solve the resultant problem.
arXiv Detail & Related papers (2023-06-08T02:52:24Z)
- Deep Clustering: A Comprehensive Survey [53.387957674512585]
Clustering analysis plays an indispensable role in machine learning and data mining.
Deep clustering, which can learn clustering-friendly representations using deep neural networks, has been broadly applied in a wide range of clustering tasks.
Existing surveys of deep clustering mainly focus on the single-view setting and on network architectures, ignoring the complex application scenarios of clustering.
arXiv Detail & Related papers (2022-10-09T02:31:32Z)
- Adaptively-weighted Integral Space for Fast Multiview Clustering [54.177846260063966]
We propose an Adaptively-weighted Integral Space for Fast Multiview Clustering (AIMC) with nearly linear complexity.
Specifically, view generation models are designed to reconstruct the view observations from the latent integral space.
Experiments conducted on several real-world datasets confirm the superiority of the proposed AIMC method.
arXiv Detail & Related papers (2022-08-25T05:47:39Z)
- A Comprehensive Survey on Deep Clustering: Taxonomy, Challenges, and Future Directions [48.97008907275482]
Clustering is a fundamental machine learning task which has been widely studied in the literature.
Deep Clustering, i.e., jointly optimizing representation learning and clustering, has been proposed and has attracted growing attention in the community.
We summarize the essential components of deep clustering and categorize existing methods by the ways they design interactions between deep representation learning and clustering.
arXiv Detail & Related papers (2022-06-15T15:05:13Z)
- Attentive Multi-View Deep Subspace Clustering Net [4.3386084277869505]
We propose a novel Attentive Multi-View Deep Subspace Net (AMVDSN).
Our proposed method seeks to find a joint latent representation that explicitly considers both consensus and view-specific information.
The experimental results on seven real-world data sets have demonstrated the effectiveness of our proposed algorithm against some state-of-the-art subspace learning approaches.
arXiv Detail & Related papers (2021-12-23T12:57:26Z)
- Tensor-based Intrinsic Subspace Representation Learning for Multi-view Clustering [18.0093330816895]
We propose a novel Tensor-based Intrinsic Subspace Representation Learning (TISRL) method for multi-view clustering in this paper.
It can be seen that the specific information contained in different views is fully investigated by the rank-preserving decomposition.
Experimental results on nine commonly used real-world multi-view datasets illustrate the superiority of TISRL.
arXiv Detail & Related papers (2020-10-19T03:36:18Z)
- Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z)