Tucker-O-Minus Decomposition for Multi-view Tensor Subspace Clustering
- URL: http://arxiv.org/abs/2210.12638v1
- Date: Sun, 23 Oct 2022 07:20:22 GMT
- Title: Tucker-O-Minus Decomposition for Multi-view Tensor Subspace Clustering
- Authors: Yingcong Lu, Yipeng Liu, Zhen Long, Zhangxin Chen, Ce Zhu
- Abstract summary: We propose a new tensor decomposition called Tucker-O-Minus Decomposition (TOMD) for multi-view clustering.
Numerical experiments on six benchmark data sets demonstrate the superiority of our proposed method in terms of F-score, precision, recall, normalized mutual information, adjusted rand index, and accuracy.
- Score: 36.790637575875635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Owing to their powerful ability to exploit the latent structure of self-representation information, various tensor decompositions have been employed in low-rank multi-view clustering (LRMVC) models to achieve significant performance. However, current approaches suffer from a series of problems related to these tensor decompositions, such as unbalanced matricization schemes, rotation sensitivity, and deficient capture of correlations. All of these limit LRMVC's access to global information, contrary to the goal of multi-view clustering. To alleviate these problems, we propose a new tensor decomposition called Tucker-O-Minus Decomposition (TOMD) for multi-view clustering. Specifically, building on the Tucker format, we additionally employ the O-minus structure, which consists of a circle with an efficient bridge linking two weakly correlated factors. In this way, the core tensor in the Tucker format is replaced by the more balanced O-minus architecture, enhancing the capacity to capture global low-rank information. At the same time, TOMD provides a more compact and more powerful representation of the self-representation tensor. The alternating direction method of multipliers is used to solve the proposed model, TOMD-MVC. Numerical experiments on six benchmark data sets demonstrate the superiority of our proposed method in terms of F-score, precision, recall, normalized mutual information, adjusted rand index, and accuracy.
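TOMD builds on the Tucker format, so plain Tucker decomposition is the natural reference point. Below is a minimal NumPy sketch of truncated HOSVD, a standard way to compute a Tucker approximation; it does not reproduce the O-minus core network, which is defined in the paper itself, and all sizes and ranks are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M, where M multiplies along axis `mode`."""
    moved = np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0))
    return np.moveaxis(moved, 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: factors from the leading left singular vectors
    of each unfolding, core by projecting T onto those factors."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

# Toy check: reconstruct and measure the relative error.
T = np.random.randn(8, 9, 10)
core, factors = hosvd(T, ranks=(4, 4, 4))
R = core
for m, U in enumerate(factors):
    R = mode_dot(R, U, m)
print(np.linalg.norm(T - R) / np.linalg.norm(T))
```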
Related papers
- Self-Supervised Graph Embedding Clustering [70.36328717683297]
The K-means one-step dimensionality reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework.
arXiv Detail & Related papers (2024-09-24T08:59:51Z)
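The entry above integrates manifold learning with K-means into a single self-supervised framework. That joint objective is the paper's contribution; as a point of reference only, here is a hedged scikit-learn sketch of the decoupled two-stage baseline it improves on (graph embedding first, K-means second), on a toy dataset.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.manifold import SpectralEmbedding

# Two-stage baseline: graph embedding, then K-means on the embedding.
# The paper's point is to couple these steps; this is the decoupled version.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
Z = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))  # cluster sizes
```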
- Interpretable Multi-View Clustering Based on Anchor Graph Tensor Factorization [64.00146569922028]
Multi-view clustering methods based on anchor graph factorization lack adequate cluster interpretability for the decomposed matrix.
We address this limitation by using non-negative tensor factorization to decompose an anchor graph tensor that combines anchor graphs from multiple views.
arXiv Detail & Related papers (2024-04-01T03:23:55Z)
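The entry above factorizes a stacked anchor-graph tensor with non-negative tensor factorization. A minimal sketch, assuming TensorLy is available and using random stand-ins for the per-view anchor graphs; the paper's exact model and constraints are not reproduced.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Stack per-view anchor graphs (n samples x m anchors x V views) into one
# non-negative tensor and factorize it. Sizes here are illustrative.
n, m, V, rank = 100, 20, 3, 5
views = [np.random.rand(n, m) for _ in range(V)]  # stand-in anchor graphs
G = tl.tensor(np.stack(views, axis=2))

weights, (S, A, W) = non_negative_parafac(G, rank=rank, n_iter_max=200)
# S: sample factor, A: anchor factor, W: view factor -- every entry is
# >= 0, which is what lends the decomposition cluster interpretability.
print(S.shape, A.shape, W.shape)
```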
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
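The distributional-reduction framework above rests on the Gromov-Wasserstein (GW) problem. A minimal sketch of a GW coupling between two point clouds of different dimensions, assuming the POT library is installed; the paper's joint DR-plus-clustering solver is more general than this single coupling computation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# GW compares two spaces through their internal distance matrices,
# so the point clouds may live in different ambient dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))   # source samples in R^5
Y = rng.normal(size=(20, 2))   # target samples in R^2

C1 = ot.dist(X, X)             # pairwise squared-Euclidean costs
C2 = ot.dist(Y, Y)
p = ot.unif(30)                # uniform sample weights
q = ot.unif(20)

T = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')
print(T.shape, T.sum())        # coupling matrix; total mass is 1
```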
- Multi-view MERA Subspace Clustering [42.33688860165733]
Multi-view subspace clustering (MSC) can capture high-order correlations in the self-representation tensor.
We propose a low-rank MERA based MSC (MERA-MSC) algorithm, where MERA factorizes a tensor into contractions of one top core factor and the rest orthogonal/semi-orthogonal factors.
We extend MERA-MSC by incorporating anchor learning to develop a scalable low-rank MERA based multi-view clustering method (sMERA-MVC).
arXiv Detail & Related papers (2023-05-16T01:41:10Z)
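MERA, as used above, contracts one top core with orthogonal/semi-orthogonal factors (plus disentangler layers, omitted here for brevity). A hedged sketch of just that core-plus-isometries ingredient, with QR-generated semi-orthogonal factors and illustrative sizes:

```python
import numpy as np

def isometry(n, r, seed):
    """Semi-orthogonal factor W (n x r) with W.T @ W = I_r, via QR."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.normal(size=(n, r)))
    return Q

# A small top core contracted with one isometry per mode, the
# 'top core + semi-orthogonal factors' part of a MERA-style network.
dims, ranks = (8, 8, 8), (3, 3, 3)
Ws = [isometry(n, r, s) for s, (n, r) in enumerate(zip(dims, ranks))]
core = np.random.default_rng(9).normal(size=ranks)

T = np.einsum('abc,ia,jb,kc->ijk', core, *Ws)
# Isometric factors preserve the core's Frobenius norm, a hallmark
# of the orthogonality constraints in such networks.
print(np.allclose(np.linalg.norm(T), np.linalg.norm(core)))
```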
- Adaptively Topological Tensor Network for Multi-view Subspace Clustering [36.790637575875635]
Multi-view subspace clustering uses learned self-representation tensors to exploit low-rank information.
A pre-defined tensor decomposition may not fully exploit the low-rank information of a certain dataset.
We propose the adaptively topological tensor network (ATTN), which determines the edge ranks from the structural information of the self-representation tensor.
arXiv Detail & Related papers (2023-05-01T08:28:33Z)
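ATTN derives edge ranks from the structural information of the self-representation tensor. As a much-simplified stand-in for that idea, the sketch below picks a per-mode rank by a cumulative singular-value-energy threshold on each unfolding; the paper's actual criterion for tensor-network edge ranks differs.

```python
import numpy as np

def adaptive_ranks(T, energy=0.95):
    """Pick a rank per mode: the smallest k whose top-k singular values
    of the mode unfolding keep `energy` of the squared spectral mass."""
    ranks = []
    for mode in range(T.ndim):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        s = np.linalg.svd(M, compute_uv=False)
        cum = np.cumsum(s**2) / np.sum(s**2)
        ranks.append(int(np.searchsorted(cum, energy) + 1))
    return ranks

# A genuinely low-rank toy tensor: chosen ranks land well below the dims.
rng = np.random.default_rng(1)
T = np.einsum('ia,ja,ka->ijk', *[rng.normal(size=(12, 2)) for _ in range(3)])
print(adaptive_ranks(T))  # typically [2, 2, 2]
```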
- Hyper-Laplacian Regularized Concept Factorization in Low-rank Tensor Space for Multi-view Clustering [0.0]
We propose a hyper-Laplacian regularized concept factorization (HLRCF) in low-rank tensor space for multi-view clustering.
Specifically, we adopt the concept factorization to explore the latent cluster-wise representation of each view.
Considering that different tensor singular values associate structural information with unequal importance, we develop a self-weighted tensor Schatten p-norm.
arXiv Detail & Related papers (2023-04-22T15:46:58Z)
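The tensor Schatten p-norm above builds on the t-SVD: FFT along the third mode, then a Schatten-p sum over the spectra of the frontal slices. A minimal unweighted sketch; normalization conventions vary across papers, and the entry's self-weighting scheme is not reproduced.

```python
import numpy as np

def tensor_schatten_p(T, p=0.5):
    """t-SVD-based Schatten p-norm raised to the p-th power: FFT along
    mode 3, then sum sigma^p over each frontal slice's singular values.
    No normalization is applied; conventions differ between papers."""
    That = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(T.shape[2]):
        s = np.linalg.svd(That[:, :, k], compute_uv=False)
        total += np.sum(s**p)
    return total

Z = np.random.default_rng(2).normal(size=(10, 10, 4))
# Smaller p pushes the penalty closer to the (t-SVD) rank, which is why
# singular values of different magnitude deserve unequal weighting.
print(tensor_schatten_p(Z, p=0.5), tensor_schatten_p(Z, p=1.0))
```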
- Multi-View Clustering via Semi-non-negative Tensor Factorization [120.87318230985653]
We develop a novel multi-view clustering method based on semi-non-negative tensor factorization (Semi-NTF).
Our model directly considers the between-view relationship and exploits the between-view complementary information.
In addition, we provide an optimization algorithm for the proposed method and prove mathematically that the algorithm always converges to a stationary KKT point.
arXiv Detail & Related papers (2023-03-29T14:54:19Z)
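Semi-non-negative factorization constrains only some factors to be non-negative, so mixed-sign data can still be handled. As a hedged matrix analogue, here are the classical semi-NMF multiplicative updates of Ding, Li and Jordan (2010); the entry's tensor model and its convergence-guaranteed algorithm differ.

```python
import numpy as np

def semi_nmf(X, r, iters=200, eps=1e-9):
    """Matrix semi-NMF, X ~ F @ G.T with G >= 0 and F unconstrained
    (Ding, Li & Jordan, 2010): closed-form F, multiplicative G update."""
    rng = np.random.default_rng(3)
    G = rng.random((X.shape[1], r))
    pos = lambda A: (np.abs(A) + A) / 2   # positive part
    neg = lambda A: (np.abs(A) - A) / 2   # negative part
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)        # least-squares F
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + eps))
    return F, G

X = np.random.default_rng(4).normal(size=(30, 20))  # mixed-sign data
F, G = semi_nmf(X, r=4)
print(np.linalg.norm(X - F @ G.T) / np.linalg.norm(X), G.min() >= 0)
```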
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely, the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by exploiting the semantic equivariance between head and tail entities.
The experimental results indicate a clear and substantial improvement over state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
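The abstract above does not give ER's functional form, so no faithful implementation can be sketched from it. Purely as an illustration of attaching a regularization term to a bilinear knowledge-graph-embedding objective, here is a DistMult score with a placeholder quadratic penalty; the penalty is hypothetical and is not ER's actual formula.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ent, n_rel, d = 50, 10, 16
E = rng.normal(size=(n_ent, d))   # entity embeddings
R = rng.normal(size=(n_rel, d))   # relation embeddings (DistMult diagonal)

def distmult(h, r, t):
    """Bilinear DistMult score <e_h, w_r, e_t> for a triple (h, r, t)."""
    return np.sum(E[h] * R[r] * E[t])

def regularized_loss(h, r, t, lam=0.01):
    """Toy loss: negative score plus a generic penalty on the head/tail
    embeddings. The quadratic penalty is only a placeholder standing in
    for a regularizer such as ER, whose real term is defined in the paper."""
    penalty = np.sum(E[h]**2) + np.sum(E[t]**2) + np.sum(R[r]**2)
    return -distmult(h, r, t) + lam * penalty

print(regularized_loss(h=0, r=1, t=2))
```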
- Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model includes a sparse regularization term that promotes a sparse core tensor, which is beneficial for tensor data compression.
Notably, our model can deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties that appear in tensors.
arXiv Detail & Related papers (2020-10-01T12:45:39Z)
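A sparse regularization term on the core tensor is typically handled through the l1 proximal operator, which is elementwise soft-thresholding. A minimal sketch of that single step, not the full completion algorithm (whose data-fitting and low-rank terms are omitted):

```python
import numpy as np

def soft_threshold(G, lam):
    """Proximal operator of lam * ||G||_1: elementwise shrinkage.
    Applied to a Tucker core, it zeroes small entries and thereby
    promotes the sparse core that aids tensor data compression."""
    return np.sign(G) * np.maximum(np.abs(G) - lam, 0.0)

core = np.random.default_rng(6).normal(size=(4, 4, 4))
sparse_core = soft_threshold(core, lam=0.8)
print((sparse_core == 0).mean())  # fraction of zeroed core entries
```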