Latent Correlation-Based Multiview Learning and Self-Supervision: A
Unifying Perspective
- URL: http://arxiv.org/abs/2106.07115v2
- Date: Thu, 17 Jun 2021 16:51:29 GMT
- Title: Latent Correlation-Based Multiview Learning and Self-Supervision: A
Unifying Perspective
- Authors: Qi Lyu, Xiao Fu, Weiran Wang and Songtao Lu
- Abstract summary: This work puts forth a theory-backed framework for unsupervised multiview learning.
Our development starts with proposing a multiview model, where each view is a nonlinear mixture of shared and private components.
In addition, the private information in each view can be provably disentangled from the shared using proper regularization design.
- Score: 41.80156041871873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiple views of data, both naturally acquired (e.g., image and audio) and
artificially produced (e.g., via adding different noise to data samples), have
proven useful in enhancing representation learning. Natural views are often
handled by multiview analysis tools, e.g., (deep) canonical correlation
analysis [(D)CCA], while the artificial ones are frequently used in
self-supervised learning (SSL) paradigms, e.g., SimCLR and Barlow Twins. Both
types of approaches often involve learning neural feature extractors such that
the embeddings of data exhibit high cross-view correlations. Although
intuitive, the effectiveness of correlation-based neural embedding is only
empirically validated. This work puts forth a theory-backed framework for
unsupervised multiview learning. Our development starts with proposing a
multiview model, where each view is a nonlinear mixture of shared and private
components. Consequently, the learning problem boils down to shared/private
component identification and disentanglement. Under this model, latent
correlation maximization is shown to guarantee the extraction of the shared
components across views (up to certain ambiguities). In addition, the private
information in each view can be provably disentangled from the shared using
proper regularization design. The method is tested on a series of tasks, e.g.,
downstream clustering, which all show promising performance. Our development
also provides a unifying perspective for understanding various DCCA and SSL
schemes.
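To make the abstract's idea concrete, below is a minimal, hypothetical sketch (not the authors' released implementation) of correlation-based two-view learning: each encoder outputs a shared block and a private block, cross-view correlation of the shared blocks is maximized via a Barlow-Twins-style cross-correlation proxy, and per-view reconstruction plus a shared/private decorrelation penalty stand in for the regularization used to disentangle private from shared components. All dimensions, architectures, and weights are illustrative assumptions.
```python
# Hedged sketch of latent-correlation-based two-view learning with
# shared/private disentanglement regularizers (illustrative only).
import torch
import torch.nn as nn

D_IN, D_SHARED, D_PRIVATE = 64, 16, 8   # hypothetical dimensions

class ViewEncoder(nn.Module):
    def __init__(self, d_in, d_shared, d_private):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                 nn.Linear(128, d_shared + d_private))
        self.d_shared = d_shared

    def forward(self, x):
        z = self.net(x)
        return z[:, :self.d_shared], z[:, self.d_shared:]   # shared, private

def standardize(z, eps=1e-6):
    return (z - z.mean(0)) / (z.std(0) + eps)

def correlation_loss(s1, s2):
    # Cross-correlation of standardized shared embeddings; push the
    # diagonal toward 1, i.e., maximize latent correlation across views.
    n = s1.shape[0]
    c = standardize(s1).T @ standardize(s2) / n
    return ((torch.diagonal(c) - 1.0) ** 2).sum()

def decorrelation_penalty(s, p):
    # Discourage linear dependence between shared and private blocks.
    n = s.shape[0]
    c = standardize(s).T @ standardize(p) / n
    return (c ** 2).sum()

enc1 = ViewEncoder(D_IN, D_SHARED, D_PRIVATE)
enc2 = ViewEncoder(D_IN, D_SHARED, D_PRIVATE)
dec1 = nn.Sequential(nn.Linear(D_SHARED + D_PRIVATE, 128), nn.ReLU(), nn.Linear(128, D_IN))
dec2 = nn.Sequential(nn.Linear(D_SHARED + D_PRIVATE, 128), nn.ReLU(), nn.Linear(128, D_IN))
params = (list(enc1.parameters()) + list(enc2.parameters())
          + list(dec1.parameters()) + list(dec2.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

x1, x2 = torch.randn(256, D_IN), torch.randn(256, D_IN)   # stand-in paired views
for _ in range(5):
    s1, p1 = enc1(x1)
    s2, p2 = enc2(x2)
    recon = (nn.functional.mse_loss(dec1(torch.cat([s1, p1], 1)), x1)
             + nn.functional.mse_loss(dec2(torch.cat([s2, p2], 1)), x2))
    loss = (correlation_loss(s1, s2) + recon
            + 0.1 * (decorrelation_penalty(s1, p1) + decorrelation_penalty(s2, p2)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```
The paper's actual formulation is a constrained latent-correlation-maximization problem with a specific regularization design; the proxy losses above are only meant to convey how shared-component extraction and private-component disentanglement interact in one objective.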
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Multi-View Causal Representation Learning with Partial Observability [36.37049791756438]
We present a unified framework for studying identifiability of representations learned from simultaneously observed views.
We prove that the information shared across all subsets of any number of views can be learned up to a smooth bijection using contrastive learning.
We experimentally validate our claims on numerical, image, and multi-modal data sets.
arXiv Detail & Related papers (2023-11-07T15:07:08Z)
- Hierarchical Mutual Information Analysis: Towards Multi-view Clustering in The Wild [9.380271109354474]
This work proposes a deep MVC framework where data recovery and alignment are fused in a hierarchically consistent way to maximize the mutual information among different views.
To the best of our knowledge, this could be the first successful attempt to handle the missing and unaligned data problem separately with different learning paradigms.
arXiv Detail & Related papers (2023-10-28T06:43:57Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Variational Interpretable Learning from Multi-view Data [2.687817337319978]
DICCA is designed to disentangle both the shared and view-specific variations for multi-view data.
Empirical results on real-world datasets show that our methods are competitive across domains.
arXiv Detail & Related papers (2022-02-28T01:56:44Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Uncorrelated Semi-paired Subspace Learning [7.20500993803316]
We propose a generalized uncorrelated multi-view subspace learning framework.
To showcase the flexibility of the framework, we instantiate five new semi-paired models for both unsupervised and semi-supervised learning.
Our proposed models perform competitively to or better than the baselines.
arXiv Detail & Related papers (2020-11-22T22:14:20Z)
- Contrastive learning, multi-view redundancy, and linear models [38.80336134485453]
A popular self-supervised approach to representation learning is contrastive learning.
This work provides a theoretical analysis of contrastive learning in the multi-view setting.
arXiv Detail & Related papers (2020-08-24T01:31:47Z)
- Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named as GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioning on the shared representation given by other views.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)