Information Theory-Guided Heuristic Progressive Multi-View Coding
- URL: http://arxiv.org/abs/2308.10522v3
- Date: Wed, 23 Aug 2023 08:49:54 GMT
- Title: Information Theory-Guided Heuristic Progressive Multi-View Coding
- Authors: Jiangmeng Li, Hang Gao, Wenwen Qiang, Changwen Zheng
- Abstract summary: Multi-view representation learning aims to capture comprehensive information from multiple views of a shared context.
Recent works intuitively apply contrastive learning to different views in a pairwise manner, which suffers from several limitations.
We propose a novel information theoretical framework for generalized multi-view learning.
- Score: 25.91836137705842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view representation learning aims to capture comprehensive information
from multiple views of a shared context. Recent works intuitively apply
contrastive learning to different views in a pairwise manner, which suffers
from several limitations: view-specific noise is not filtered out when learning
view-shared representations; fake negative pairs, in which the negative terms
actually belong to the same class as the positive, are treated the same as real
negative pairs; and uniformly measuring the similarities between terms might
interfere with optimization. Importantly, few works study the theoretical
framework of generalized self-supervised multi-view learning, especially for
more than two views. To this end, we rethink the existing multi-view learning
paradigm from the perspective of information theory and then propose a novel
information theoretical framework for generalized multi-view learning. Guided
by it, we build a multi-view coding method with a three-tier progressive
architecture, namely Information theory-guided heuristic Progressive
Multi-view Coding (IPMC). In the distribution-tier, IPMC aligns the
distribution between views to reduce view-specific noise. In the set-tier, IPMC
constructs self-adjusted contrasting pools, which are adaptively modified by a
view filter. Lastly, in the instance-tier, we adopt a specially designed
unified loss to learn representations and reduce gradient interference. Theoretically and
empirically, we demonstrate the superiority of IPMC over state-of-the-art
methods.
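The pairwise contrastive paradigm the abstract critiques can be sketched as follows. This is a minimal, hypothetical illustration of the baseline scheme, not the authors' IPMC implementation; all function and variable names are invented for this sketch:

```python
import math
import itertools

def infonce(anchor, positive, negatives, tau=0.5):
    """InfoNCE loss for one anchor/positive pair against a pool of negatives."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    pos = math.exp(dot(anchor, positive) / tau)
    neg = sum(math.exp(dot(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

def pairwise_multiview_loss(views, tau=0.5):
    """Naive pairwise multi-view contrastive scheme.

    `views[v][i]` is the embedding of sample i under view v. Every ordered
    pair of views is contrasted, and all other samples in the opposite view
    serve as negatives -- so "fake" negatives (samples of the same class as
    the anchor) are treated exactly like real negatives, one of the issues
    the abstract points out."""
    total, count = 0.0, 0
    for va, vb in itertools.permutations(range(len(views)), 2):
        for i in range(len(views[va])):
            negatives = [views[vb][j] for j in range(len(views[vb])) if j != i]
            total += infonce(views[va][i], views[vb][i], negatives, tau)
            count += 1
    return total / count
```

Note that the number of contrasted view pairs grows quadratically with the number of views, which is one reason the paper argues the pairwise paradigm generalizes poorly beyond two views.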
Related papers
- Dual Consistent Constraint via Disentangled Consistency and Complementarity for Multi-view Clustering [5.52726833446215]
Multi-view clustering can explore common semantics from multiple views.
Current methods focus on learning consistency in representation, neglecting the contribution of each view's complementarity.
This paper proposes a novel multi-view clustering framework that separates multi-view into shared and private information.
arXiv Detail & Related papers (2025-04-07T02:00:16Z)
- Hierarchical Consensus Network for Multiview Feature Learning [1.33134751838052]
Motivated by the theories of CCA and contrastive learning, we propose the hierarchical consensus network (HCN) in this paper.
The proposed method significantly outperforms several state-of-the-art methods.
arXiv Detail & Related papers (2025-02-04T03:19:28Z)
- Balanced Multi-view Clustering [56.17836963920012]
Multi-view clustering (MvC) aims to integrate information from different views to enhance the capability of the model in capturing the underlying data structures.
The widely used joint training paradigm in MvC potentially does not fully leverage the multi-view information.
We propose a novel balanced multi-view clustering (BMvC) method, which introduces a view-specific contrastive regularization (VCR) to modulate the optimization of each view.
arXiv Detail & Related papers (2025-01-05T14:42:47Z)
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD)
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach outperforms state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Rethinking Multi-view Representation Learning via Distilled Disentangling [34.14711778177439]
Multi-view representation learning aims to derive robust representations that are both view-consistent and view-specific from diverse data sources.
This paper presents an in-depth analysis of existing approaches in this domain, highlighting the redundancy between view-consistent and view-specific representations.
We propose an innovative framework for multi-view representation learning, which incorporates a technique we term 'distilled disentangling'.
arXiv Detail & Related papers (2024-03-16T11:21:24Z)
- TCGF: A unified tensorized consensus graph framework for multi-view representation learning [27.23929515170454]
This paper proposes a universal multi-view representation learning framework named Tensorized Consensus Graph Framework (TCGF).
It first provides a unified framework for existing multi-view works to exploit the representations of individual views.
Then it stacks them into a tensor under an alignment basis as a high-order representation, allowing for the smooth propagation of consistency.
arXiv Detail & Related papers (2023-09-14T19:29:14Z)
- Investigating and Mitigating the Side Effects of Noisy Views for Self-Supervised Clustering Algorithms in Practical Multi-View Scenarios [35.32285779434823]
Multi-view clustering (MVC) aims at exploring category structures among multi-view data in a self-supervised manner.
Clustering performance might seriously degenerate when the views are noisy in practical multi-view scenarios.
We propose a theoretically grounded deep MVC method (namely MVCAN) to address this issue.
arXiv Detail & Related papers (2023-03-30T09:22:17Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorously theoretical guarantee, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
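For background, the "information bottlenecks" referenced here build on the standard information bottleneck objective. The formulation below is general textbook background, not necessarily this paper's exact variational objective:

$$\max_{p(z \mid x)} \; I(z; y) \;-\; \beta\, I(z; x),$$

where $x$ is the observation, $y$ the semantic label, $z$ the learned representation, and $\beta > 0$ trades off predictiveness of $z$ against compression of $x$.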
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Information Theory-Guided Heuristic Progressive Multi-View Coding [43.43739542593827]
Multi-view representation learning captures comprehensive information from multiple views of a shared context.
Few works research the theoretical framework of generalized self-supervised multi-view learning.
We propose a novel information theoretical framework for generalized multi-view learning.
arXiv Detail & Related papers (2021-09-06T10:32:24Z)
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets)
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z)
- Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning [70.67092105994598]
We propose a novel multi-view learning framework to improve multi-view classification with respect to the two aspects mentioned above.
In particular, we train different deep neural networks to learn various intra-view representations.
Experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-13T01:13:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.