Hierarchical Mutual Information Analysis: Towards Multi-view Clustering in The Wild
- URL: http://arxiv.org/abs/2310.18614v1
- Date: Sat, 28 Oct 2023 06:43:57 GMT
- Title: Hierarchical Mutual Information Analysis: Towards Multi-view Clustering in The Wild
- Authors: Jiatai Wang, Zhiwei Xu, Xuewen Yang, Xin Wang
- Abstract summary: This work proposes a deep MVC framework where data recovery and alignment are fused in a hierarchically consistent way to maximize the mutual information among different views.
To the best of our knowledge, this could be the first successful attempt to handle the missing and unaligned data problem separately with different learning paradigms.
- Score: 9.380271109354474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view clustering (MVC) can explore common semantics from unsupervised views generated by different sources, and has therefore been used extensively in practical computer vision applications. Due to spatio-temporal asynchronism, multi-view data in real-world applications often suffer from missing views and misalignment, which makes it difficult to learn consistent representations. To address these issues, this work proposes a deep MVC framework in which data recovery and alignment are fused in a hierarchically consistent way to maximize the mutual information among different views and ensure the consistency of their latent spaces. More specifically, we first leverage dual prediction to fill in missing views while achieving instance-level alignment, and then apply contrastive reconstruction to achieve class-level alignment. To the best of our knowledge, this could be the first successful attempt to handle the missing-data and unaligned-data problems separately with different learning paradigms. Extensive experiments on public datasets demonstrate that our method significantly outperforms state-of-the-art multi-view clustering methods, even when views are missing or unaligned.
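For readers who want to see how the two stages described in the abstract could fit together, here is a minimal PyTorch sketch of dual prediction for cross-view recovery combined with an InfoNCE-style term for instance-level alignment. The module names, layer sizes, loss weighting, and the use of InfoNCE as the mutual-information surrogate are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: dual prediction for missing-view recovery plus an
# InfoNCE-style contrastive term for cross-view alignment. All names,
# dimensions, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoViewRecoveryAlignment(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, latent: int = 64):
        super().__init__()
        # Per-view encoders map raw features into a shared latent space.
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 128), nn.ReLU(), nn.Linear(128, latent))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 128), nn.ReLU(), nn.Linear(128, latent))
        # Dual prediction: each latent code predicts the other view's latent code.
        self.a_to_b = nn.Linear(latent, latent)
        self.b_to_a = nn.Linear(latent, latent)

    def info_nce(self, z1, z2, temperature: float = 0.5):
        # Instance-level alignment: matching rows of z1/z2 are positives,
        # every other pair in the batch is a negative.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)

    def forward(self, x_a, x_b):
        z_a, z_b = self.enc_a(x_a), self.enc_b(x_b)
        # Dual-prediction loss: cross-view latent recovery, usable at inference
        # time to impute a missing view from the observed one.
        recovery = F.mse_loss(self.a_to_b(z_a), z_b.detach()) + \
                   F.mse_loss(self.b_to_a(z_b), z_a.detach())
        # Contrastive term approximately maximizing cross-view mutual information.
        align = self.info_nce(z_a, z_b)
        return recovery + align

if __name__ == "__main__":
    model = TwoViewRecoveryAlignment(dim_a=20, dim_b=30)
    x_a, x_b = torch.randn(16, 20), torch.randn(16, 30)
    loss = model(x_a, x_b)
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

In this reading, the dual-prediction heads supply recovered latent codes for a missing view, while the contrastive term keeps the two latent spaces consistent; the paper's class-level alignment via contrastive reconstruction would add a second, cluster-level term, which is omitted here for brevity.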
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z) - Incomplete Contrastive Multi-View Clustering with High-Confidence Guiding [7.305817202715752]
We propose a novel Incomplete Contrastive Multi-View Clustering method with high-confidence guiding (ICMVC).
First, we propose a multi-view consistency relation transfer plus graph convolutional network to tackle the missing-values problem.
Second, instance-level attention fusion and high-confidence guiding are proposed to exploit the complementary information.
arXiv Detail & Related papers (2023-12-14T07:28:41Z) - Multi-view Fuzzy Representation Learning with Rules based Model [25.997490574254172]
Unsupervised multi-view representation learning has been extensively studied for mining multi-view data.
This paper proposes a new multi-view fuzzy representation learning method based on the interpretable Takagi-Sugeno-Kang fuzzy system (MVRL_FS).
arXiv Detail & Related papers (2023-09-20T17:13:15Z) - DealMVC: Dual Contrastive Calibration for Multi-view Clustering [78.54355167448614]
We propose a novel Dual contrastive calibration network for Multi-View Clustering (DealMVC).
We first design a fusion mechanism to obtain a global cross-view feature. Then, a global contrastive calibration loss is proposed by aligning the view-feature similarity graph with the high-confidence pseudo-label graph (a hedged sketch of such a calibration term follows the list below).
During training, the interacted cross-view feature is jointly optimized at both the local and global levels.
arXiv Detail & Related papers (2023-08-17T14:14:28Z) - Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - MORI-RAN: Multi-view Robust Representation Learning via Hybrid Contrastive Fusion [4.36488705757229]
Multi-view representation learning is essential for many multi-view tasks, such as clustering and classification.
We propose a hybrid contrastive fusion algorithm to extract robust view-common representation from unlabeled data.
Experimental results demonstrate that the proposed method outperforms 12 competitive multi-view methods on four real-world datasets.
arXiv Detail & Related papers (2022-08-26T09:58:37Z) - Adaptively-weighted Integral Space for Fast Multiview Clustering [54.177846260063966]
We propose an Adaptively-weighted Integral Space for Fast Multiview Clustering (AIMC) with nearly linear complexity.
Specifically, view generation models are designed to reconstruct the view observations from the latent integral space.
Experiments conducted on several real-world datasets confirm the superiority of the proposed AIMC method.
arXiv Detail & Related papers (2022-08-25T05:47:39Z) - Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z) - Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioning on the shared representation given by other views.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)
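As noted in the DealMVC entry above, the following is a minimal sketch of a contrastive calibration term that aligns a feature-similarity graph with a high-confidence pseudo-label graph. The confidence threshold, temperature, and function name are illustrative assumptions rather than the paper's actual loss.

```python
# Hypothetical sketch of a contrastive calibration term: pull together pairs that
# the high-confidence pseudo-label graph marks as same-cluster, push apart the rest.
# Threshold, temperature, and all names are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_calibration(features: torch.Tensor,
                            cluster_probs: torch.Tensor,
                            confidence: float = 0.9,
                            temperature: float = 0.5) -> torch.Tensor:
    # Feature-similarity graph: cosine similarities between all sample pairs.
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                      # (N, N) logits

    # High-confidence pseudo-label graph: an edge only where both samples are
    # confidently assigned to the same cluster.
    conf, labels = cluster_probs.max(dim=1)
    confident = conf > confidence
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_graph = same_label & confident.unsqueeze(0) & confident.unsqueeze(1)
    pos_graph.fill_diagonal_(False)

    # For each anchor with at least one positive, take a softmax over all other
    # samples and maximize the probability mass on its positives (InfoNCE-style).
    mask = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    log_prob = sim - torch.logsumexp(sim.masked_fill(~mask, float("-inf")),
                                     dim=1, keepdim=True)
    pos_counts = pos_graph.sum(dim=1)
    valid = pos_counts > 0
    if not valid.any():
        # No confident pairs in this batch: return a zero loss that keeps gradients flowing.
        return (sim * 0.0).sum()
    loss = -(log_prob * pos_graph).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

if __name__ == "__main__":
    feats = torch.randn(32, 64, requires_grad=True)
    probs = torch.softmax(torch.randn(32, 10) * 10, dim=1)  # toy, near one-hot cluster assignments
    loss = contrastive_calibration(feats, probs)
    loss.backward()
    print(f"toy calibration loss: {loss.item():.4f}")
```

Treating only confidently co-clustered pairs as positives is one way to keep noisy pseudo-labels from dominating the calibration signal.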