Joint Featurewise Weighting and Local Structure Learning for Multi-view
Subspace Clustering
- URL: http://arxiv.org/abs/2007.12829v1
- Date: Sat, 25 Jul 2020 01:57:57 GMT
- Title: Joint Featurewise Weighting and Local Structure Learning for Multi-view
Subspace Clustering
- Authors: Shi-Xun Lin, Guo Zhong, Ting Shu
- Abstract summary: Multi-view clustering integrates multiple feature sets, which reveal distinct aspects of the data and provide complementary information to each other.
Most existing multi-view clustering methods only aim to explore the consistency of all views while ignoring the local structure of each view.
We propose a novel multi-view subspace clustering method that simultaneously assigns weights to different features and captures the local structure of the data in view-specific self-representation feature spaces.
- Score: 3.093890460224435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view clustering integrates multiple feature sets, which reveal distinct
aspects of the data and provide complementary information to each other, to
improve the clustering performance. It remains challenging to effectively
exploit complementary information across multiple views since the original data
often contain noise and are highly redundant. Moreover, most existing
multi-view clustering methods only aim to explore the consistency of all views
while ignoring the local structure of each view. However, the local structure of
each view must be taken into consideration, because different views can present
different geometric structures while admitting the same cluster structure. To
address these issues, we propose a novel multi-view subspace clustering method
that simultaneously assigns weights to different features and captures the local
structure of the data in view-specific self-representation feature spaces. In
particular, a common cluster structure regularization is adopted to guarantee
consistency among the different views. An efficient algorithm based on the
augmented Lagrangian multiplier method is also developed to solve the associated
optimization problem. Experiments conducted on several benchmark datasets
demonstrate that the proposed method achieves state-of-the-art performance. The
Matlab code is available at https://github.com/Ekin102003/JFLMSC.
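
To make the general approach concrete, below is a minimal Python sketch of this style of method: each view gets a self-expressive coefficient matrix learned together with featurewise weights and a k-NN graph Laplacian penalty that encodes the view's local structure, and the per-view affinities are fused for spectral clustering. This is only an illustration under stated assumptions, not the authors' JFLMSC implementation (their released code is Matlab, at the URL above); the alternating updates, the feature-weight heuristic, and parameters such as `lam`, `beta`, and `n_neighbors` are illustrative choices.

```python
# Minimal sketch of featurewise-weighted, locally regularized multi-view
# subspace clustering. NOT the authors' JFLMSC code; updates are assumptions.
import numpy as np
from scipy.linalg import solve_sylvester
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import kneighbors_graph


def view_specific_coefficients(X, lam=1.0, beta=0.1, n_neighbors=5, n_iter=10):
    """X: (d, n) data of one view. Returns feature weights w and coefficients Z."""
    d, n = X.shape
    w = np.ones(d) / d                                   # featurewise weights, start uniform
    # k-NN graph Laplacian of this view encodes its local structure
    A = kneighbors_graph(X.T, n_neighbors, mode="connectivity", include_self=False)
    L = laplacian(0.5 * (A + A.T), normed=True).toarray()
    Z = np.zeros((n, n))
    for _ in range(n_iter):
        Xw = w[:, None] * X                              # re-weighted features
        G = Xw.T @ Xw
        # Z-step: min ||Xw - Xw Z||_F^2 + lam ||Z||_F^2 + beta tr(Z L Z^T);
        # stationarity gives the Sylvester equation (G + lam I) Z + beta Z L = G
        Z = solve_sylvester(G + lam * np.eye(n), beta * L, G)
        # w-step (heuristic): emphasise features that are reconstructed well
        err = np.sum((X - X @ Z) ** 2, axis=1)
        w = 1.0 / (err + 1e-8)
        w /= w.sum()
    return w, Z


def multiview_subspace_clustering(views, n_clusters, **kwargs):
    """views: list of (d_v, n) arrays sharing the same n samples."""
    n = views[0].shape[1]
    S = np.zeros((n, n))
    for X in views:
        _, Z = view_specific_coefficients(X, **kwargs)
        S += 0.5 * (np.abs(Z) + np.abs(Z).T)             # symmetric per-view affinity
    S /= len(views)                                      # simple consensus affinity
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(S)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two toy views of 60 samples drawn from two clusters
    centers = rng.normal(size=(2, 5))
    y = np.repeat([0, 1], 30)
    view1 = (centers[y] + 0.1 * rng.normal(size=(60, 5))).T
    view2 = (centers[y] @ rng.normal(size=(5, 8)) + 0.1 * rng.normal(size=(60, 8))).T
    print(multiview_subspace_clustering([view1, view2], n_clusters=2))
```

In the paper's actual formulation, the feature weights, the view-specific local structure, and the common cluster structure regularization are optimized jointly with an augmented Lagrangian multiplier solver, rather than the simple alternating heuristic and post-hoc affinity averaging used in this sketch.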
Related papers
- Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations from the original dataset.
We build anchors from different views based on these representations, which increase the quality of the shared anchor graph.
arXiv Detail & Related papers (2024-09-25T13:11:17Z)
- One for all: A novel Dual-space Co-training baseline for Large-scale Multi-View Clustering [42.92751228313385]
We propose a novel multi-view clustering model named Dual-space Co-training Large-scale Multi-view Clustering (DSCMC).
The main objective of our approach is to enhance the clustering performance by leveraging co-training in two distinct spaces.
Our algorithm has approximately linear computational complexity, which guarantees its successful application to large-scale datasets.
arXiv Detail & Related papers (2024-01-28T16:30:13Z)
- Anchor-based Multi-view Subspace Clustering with Hierarchical Feature Descent [46.86939432189035]
We propose Anchor-based Multi-view Subspace Clustering with Hierarchical Feature Descent.
Our proposed model consistently outperforms the state-of-the-art techniques.
arXiv Detail & Related papers (2023-10-11T03:29:13Z)
- Efficient Multi-View Graph Clustering with Local and Global Structure Preservation [59.49018175496533]
We propose a novel anchor-based multi-view graph clustering framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG).
Specifically, EMVGC-LG jointly optimizes anchor construction and graph learning to enhance the clustering quality.
In addition, EMVGC-LG inherits the linear complexity of existing AMVGC methods with respect to the sample number.
arXiv Detail & Related papers (2023-08-31T12:12:30Z)
- Scalable Incomplete Multi-View Clustering with Structure Alignment [71.62781659121092]
In this paper, we propose a novel incomplete anchor graph learning framework.
We construct the view-specific anchor graph to capture the complementary information from different views.
The time and space complexity of the proposed SIMVC-SA is proven to be linear in the number of samples.
arXiv Detail & Related papers (2023-08-31T08:30:26Z)
- Adaptively-weighted Integral Space for Fast Multiview Clustering [54.177846260063966]
We propose an Adaptively-weighted Integral Space for Fast Multiview Clustering (AIMC) with nearly linear complexity.
Specifically, view generation models are designed to reconstruct the view observations from the latent integral space.
Experiments conducted on several real-world datasets confirm the superiority of the proposed AIMC method.
arXiv Detail & Related papers (2022-08-25T05:47:39Z)
- Deep Incomplete Multi-View Multiple Clusterings [41.43164409639238]
We introduce a deep incomplete multi-view multiple clusterings framework (DiMVMC), which completes the missing data views and learns multiple shared representations simultaneously.
Experiments on benchmark datasets confirm that DiMVMC outperforms the state-of-the-art competitors in generating multiple clusterings with high diversity and quality.
arXiv Detail & Related papers (2020-10-02T08:01:24Z)
- Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method: low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z)