Towards Comprehensive Information-theoretic Multi-view Learning
- URL: http://arxiv.org/abs/2509.02084v1
- Date: Tue, 02 Sep 2025 08:34:04 GMT
- Title: Towards Comprehensive Information-theoretic Multi-view Learning
- Authors: Long Shi, Yunshan Ye, Wenjie Wang, Tao Lei, Yu Zhao, Gang Kou, Badong Chen
- Abstract summary: CIML considers the potential predictive capabilities of both common and unique information based on information theory. We theoretically prove that the learned joint representation is predictively sufficient for the downstream task.
- Score: 49.199817029783446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information theory has inspired numerous advancements in multi-view learning. Most multi-view methods incorporating information-theoretic principles rely on an assumption called multi-view redundancy, which states that common information between views is necessary and sufficient for downstream tasks. This assumption emphasizes the importance of common information for prediction, but inherently ignores the potential of unique information in each view that could be predictive for the task. In this paper, we propose a comprehensive information-theoretic multi-view learning framework named CIML, which discards the assumption of multi-view redundancy. Specifically, CIML considers the potential predictive capabilities of both common and unique information based on information theory. First, the common representation learning maximizes Gacs-Korner common information to extract shared features and then compresses this information to learn task-relevant representations based on the Information Bottleneck (IB). For unique representation learning, IB is employed to achieve the most compressed unique representation for each view while simultaneously minimizing the mutual information between unique and common representations, as well as among different unique representations. Importantly, we theoretically prove that the learned joint representation is predictively sufficient for the downstream task. Extensive experimental results have demonstrated the superiority of our model over several state-of-the-art methods. The code is released on CIML.
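The objectives in the abstract all trade off mutual-information terms: maximizing task-relevant information while minimizing redundancy between common and unique representations. As a minimal illustration of the quantity involved (not the authors' code; CIML optimizes variational estimators over continuous representations), the following hypothetical helper computes I(X; Y) for discrete joint distributions, showing the two extremes the framework distinguishes: fully shared information versus no shared information.

```python
import numpy as np

def mutual_information(p_xy):
    """I(X; Y) in bits for a discrete joint distribution given as a 2-D array."""
    p_xy = np.asarray(p_xy, dtype=float)
    px = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = p_xy > 0                        # skip zero cells to avoid log(0)
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px * py)[mask])).sum())

# Perfectly shared binary variable: I(X; Y) = H(X) = 1 bit.
shared = np.array([[0.5, 0.0],
                   [0.0, 0.5]])
# Independent binary variables: I(X; Y) = 0 bits.
indep = np.full((2, 2), 0.25)

print(mutual_information(shared))  # 1.0
print(mutual_information(indep))   # 0.0
```

In CIML's terms, a penalty such as the one between unique and common representations would drive a quantity like `mutual_information(p_uc)` toward the independent case.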
Related papers
- Towards the Generalization of Multi-view Learning: An Information-theoretical Analysis [28.009990407017618]
We develop information-theoretic generalization bounds for multi-view learning.
We derive novel data-dependent bounds under both leave-one-out and supersample settings.
In the interpolating regime, we further establish the fast-rate bound for multi-view learning.
arXiv Detail & Related papers (2025-01-28T07:47:19Z)
- Discovering Common Information in Multi-view Data [35.37807004353416]
We introduce an innovative and mathematically rigorous definition for computing common information from multi-view data.
We develop a novel supervised multi-view learning framework to capture both common and unique information.
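For discrete variables, Gacs-Korner common information (the quantity CIML maximizes for its common representation) has a simple characterization: the maximal common random variable is the connected-component label of the bipartite graph linking every x to every y with p(x, y) > 0. The toy sketch below, which assumes that standard characterization and is not either paper's estimator, makes the definition concrete.

```python
import numpy as np

def gacs_korner_common_info(p_xy):
    """Gacs-Korner common information (in bits) of a discrete joint distribution.

    The common random variable is the connected-component label of the bipartite
    support graph; GK common information is the entropy of that label.
    """
    p_xy = np.asarray(p_xy, dtype=float)
    nx, ny = p_xy.shape
    parent = list(range(nx + ny))          # union-find: x-nodes, then y-nodes

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for x in range(nx):
        for y in range(ny):
            if p_xy[x, y] > 0:
                parent[find(x)] = find(nx + y)

    mass = {}                              # probability mass per component
    for x in range(nx):
        for y in range(ny):
            if p_xy[x, y] > 0:
                mass[find(x)] = mass.get(find(x), 0.0) + p_xy[x, y]
    m = np.array(list(mass.values()))
    return float(-(m * np.log2(m)).sum())

# X and Y identical: the whole bit is common -> 1.0 bit.
print(gacs_korner_common_info([[0.5, 0.0], [0.0, 0.5]]))
# Correlated but with full support: no deterministic common part -> 0.0 bits.
print(gacs_korner_common_info([[0.4, 0.1], [0.1, 0.4]]))
```

The second example highlights why GK common information is stricter than mutual information: the variables are statistically dependent, yet no nontrivial function of one can be recovered exactly from the other.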
arXiv Detail & Related papers (2024-06-21T10:47:06Z)
- TCGF: A unified tensorized consensus graph framework for multi-view representation learning [27.23929515170454]
This paper proposes a universal multi-view representation learning framework named Tensorized Consensus Graph Framework (TCGF).
It first provides a unified framework for existing multi-view works to exploit the representations of individual views.
It then stacks them into a tensor under an alignment basis as a high-order representation, allowing for the smooth propagation of consistency.
arXiv Detail & Related papers (2023-09-14T19:29:14Z)
- Factorized Contrastive Learning: Going Beyond Multi-view Redundancy [116.25342513407173]
This paper proposes FactorCL, a new multimodal representation learning method to go beyond multi-view redundancy.
On large-scale real-world datasets, FactorCL captures both shared and unique information and achieves state-of-the-art results.
arXiv Detail & Related papers (2023-06-08T15:17:04Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Which Mutual-Information Representation Learning Objectives are Sufficient for Control? [80.2534918595143]
Mutual information provides an appealing formalism for learning representations of data.
This paper formalizes the sufficiency of a state representation for learning and representing the optimal policy.
Surprisingly, we find that two of these objectives can yield insufficient representations given mild and common assumptions on the structure of the MDP.
arXiv Detail & Related papers (2021-06-14T10:12:34Z)
- Collaborative Attention Mechanism for Multi-View Action Recognition [75.33062629093054]
We propose a collaborative attention mechanism (CAM) for solving the multi-view action recognition problem.
The proposed CAM detects attention differences among multiple views and adaptively integrates frame-level information so that the views benefit each other.
Experiments on four action datasets illustrate that the proposed CAM achieves better results for each view and also boosts multi-view performance.
arXiv Detail & Related papers (2020-09-14T17:33:10Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
- Learning Robust Representations via Multi-View Information Bottleneck [41.65544605954621]
The original formulation requires labeled data to identify superfluous information.
We extend this ability to the multi-view unsupervised setting, where two views of the same underlying entity are provided but the label is unknown.
A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and label-limited versions of the MIR-Flickr dataset.
arXiv Detail & Related papers (2020-02-17T16:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.