Information Theory-Guided Heuristic Progressive Multi-View Coding
- URL: http://arxiv.org/abs/2109.02344v2
- Date: Tue, 22 Aug 2023 03:55:42 GMT
- Title: Information Theory-Guided Heuristic Progressive Multi-View Coding
- Authors: Jiangmeng Li, Wenwen Qiang, Hang Gao, Bing Su, Farid Razzak, Jie Hu,
Changwen Zheng, Hui Xiong
- Abstract summary: Multi-view representation learning captures comprehensive information from multiple views of a shared context.
Few works research the theoretical framework of generalized self-supervised multi-view learning.
We propose a novel information theoretical framework for generalized multi-view learning.
- Score: 43.43739542593827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view representation learning captures comprehensive information from
multiple views of a shared context. Recent works intuitively apply contrastive
learning (CL) to representation learning in a pairwise manner, which remains
problematic: view-specific noise is not filtered out when learning view-shared
representations; false negative pairs, whose negative terms actually belong to
the same class as the positive, are treated the same as true negative pairs;
and measuring the similarities between all terms evenly can interfere with
optimization. Importantly, few works study the theoretical
framework of generalized self-supervised multi-view learning, especially for
more than two views. To this end, we rethink the existing multi-view learning
paradigm from the information theoretical perspective and then propose a novel
information theoretical framework for generalized multi-view learning. Guided
by it, we build a multi-view coding method with a three-tier progressive
architecture, namely Information theory-guided heuristic Progressive Multi-view
Coding (IPMC). In the distribution-tier, IPMC aligns the distribution between
views to reduce view-specific noise. In the set-tier, IPMC builds self-adjusted
pools for contrasting, which utilizes a view filter to adaptively modify the
pools. Lastly, in the instance-tier, we adopt a designed unified loss to learn
discriminative representations and reduce the gradient interference.
Theoretically and empirically, we demonstrate the superiority of IPMC over
state-of-the-art methods.
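The pairwise contrastive setup that the abstract critiques can be illustrated with a standard two-view InfoNCE loss. This is a minimal sketch of the baseline paradigm, not IPMC's three-tier objective; the function name and temperature value are assumptions for illustration:

```python
import numpy as np

def pairwise_infonce(z_a, z_b, temperature=0.1):
    """InfoNCE loss between two views: z_a[i] and z_b[i] form a positive
    pair; every other z_b[j] in the batch is treated as a negative, even
    when it shares the positive's class (the 'fake negative' issue the
    abstract points out)."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)  # L2-normalize
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature       # (N, N) scaled cosine similarities
    # row-wise log-softmax; positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With V views, the pairwise paradigm sums this loss over all V(V-1)/2 view pairs, which is one reason it scales poorly beyond two views.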
Related papers
- Dual Consistent Constraint via Disentangled Consistency and Complementarity for Multi-view Clustering [5.52726833446215]
Multi-view clustering can explore common semantics from multiple views.
Current methods focus on learning consistency in representation, neglecting the contribution of each view's complementarity.
This paper proposes a novel multi-view clustering framework that separates multi-view data into shared and private information.
arXiv Detail & Related papers (2025-04-07T02:00:16Z)
- Hierarchical Consensus Network for Multiview Feature Learning [1.33134751838052]
Motivated by the theories of CCA and contrastive learning, we propose the hierarchical consensus network (HCN) in this paper.
The proposed method significantly outperforms several state-of-the-art methods.
arXiv Detail & Related papers (2025-02-04T03:19:28Z)
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
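The idea of flagging outliers by their degree of cross-view consistency can be sketched generically. This is an illustrative scoring rule under assumed inputs (per-view embedding matrices), not RCPMOD's actual regularized objective:

```python
import numpy as np

def consistency_outlier_scores(views):
    """Average pairwise distance between a sample's L2-normalized
    embeddings across views. A high score means low cross-view
    consistency, i.e. a likely outlier. Generic illustration only."""
    normed = [v / np.linalg.norm(v, axis=1, keepdims=True) for v in views]
    scores = np.zeros(normed[0].shape[0])
    pairs = 0
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            scores += np.linalg.norm(normed[i] - normed[j], axis=1)
            pairs += 1
    return scores / pairs  # one score per sample
```

A sample whose representations disagree across views receives the largest score and can be thresholded as an outlier.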
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data; intelligence is traditionally treated as a separate field.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z)
- Rethinking Multi-view Representation Learning via Distilled Disentangling [34.14711778177439]
Multi-view representation learning aims to derive robust representations that are both view-consistent and view-specific from diverse data sources.
This paper presents an in-depth analysis of existing approaches in this domain, highlighting the redundancy between view-consistent and view-specific representations.
We propose an innovative framework for multi-view representation learning, which incorporates a technique we term 'distilled disentangling'.
arXiv Detail & Related papers (2024-03-16T11:21:24Z)
- Information Theory-Guided Heuristic Progressive Multi-View Coding [25.91836137705842]
Multi-view representation learning aims to capture comprehensive information from multiple views of a shared context.
Recent works intuitively apply contrastive learning to different views in a pairwise manner, which remains problematic.
We propose a novel information theoretical framework for generalized multi-view learning.
arXiv Detail & Related papers (2023-08-21T07:19:47Z)
- Investigating and Mitigating the Side Effects of Noisy Views for Self-Supervised Clustering Algorithms in Practical Multi-View Scenarios [35.32285779434823]
Multi-view clustering (MVC) aims at exploring category structures among multi-view data in self-supervised manners.
MVC methods might seriously degenerate when the views are noisy in practical multi-view scenarios.
We propose a theoretically grounded deep MVC method (namely MVCAN) to address this issue.
arXiv Detail & Related papers (2023-03-30T09:22:17Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorously theoretical guarantee, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
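As a reminder, the information bottleneck objective such variational approaches build on can be written in its standard Tishby-style form (not necessarily this paper's exact loss):

```latex
\min_{p(z \mid x)} \; I(Z; X) - \beta \, I(Z; Y)
```

where $Z$ is the learned representation, $X$ the input observation, $Y$ the semantic labels, and $\beta$ trades compression of the input against predictive power.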
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets)
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.