Multiview Representation Learning for a Union of Subspaces
- URL: http://arxiv.org/abs/1912.12766v1
- Date: Mon, 30 Dec 2019 00:44:13 GMT
- Title: Multiview Representation Learning for a Union of Subspaces
- Authors: Nils Holzenberger and Raman Arora
- Abstract summary: We show that the proposed model and a set of simple heuristics yield improvements over standard CCA.
Our correlation-based objective meaningfully generalizes the CCA objective to a mixture of CCA models.
- Score: 38.68763142172997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Canonical correlation analysis (CCA) is a popular technique for learning
representations that are maximally correlated across multiple views in data. In
this paper, we extend the CCA-based framework for learning a multiview mixture
model. We show that the proposed model and a set of simple heuristics yield
improvements over standard CCA, as measured in terms of performance on
downstream tasks. Our experimental results show that our correlation-based
objective meaningfully generalizes the CCA objective to a mixture of CCA
models.
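To make the objective concrete, the following is a minimal sketch of standard two-view CCA, the special case that the paper's mixture model generalizes. It uses scikit-learn and synthetic data; the mixture-of-CCA extension is only indicated in a comment, since the abstract does not spell out the paper's exact heuristics.

```python
# Minimal sketch of the standard two-view CCA objective that the paper generalizes:
# find projections of each view that are maximally correlated.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Two synthetic views driven by a shared latent signal.
n, d1, d2, k = 500, 20, 30, 4
z = rng.normal(size=(n, k))                                  # shared latent variables
view1 = z @ rng.normal(size=(k, d1)) + 0.1 * rng.normal(size=(n, d1))
view2 = z @ rng.normal(size=(k, d2)) + 0.1 * rng.normal(size=(n, d2))

# Standard CCA: learn projections maximizing corr(view1 @ U, view2 @ V).
cca = CCA(n_components=k)
proj1, proj2 = cca.fit_transform(view1, view2)

# Canonical correlations, one per learned component.
corrs = [np.corrcoef(proj1[:, i], proj2[:, i])[0, 1] for i in range(k)]
print("canonical correlations:", np.round(corrs, 3))

# A mixture of CCA models (as in the paper; details assumed here) would instead
# maintain several projection pairs and assign each sample to the component
# under which its two views are most correlated.
```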
Related papers
- SLRL: Structured Latent Representation Learning for Multi-view Clustering [24.333292079699554]
Multi-View Clustering (MVC) aims to exploit the inherent consistency and complementarity among different views to improve clustering outcomes.
Despite extensive research in MVC, most existing methods focus predominantly on harnessing complementary information across views to enhance clustering effectiveness.
We introduce a novel framework, termed the Structured Latent Representation Learning based Multi-View Clustering (SLRL) method.
arXiv Detail & Related papers (2024-07-11T09:43:57Z)
- Explore In-Context Segmentation via Latent Diffusion Models [132.26274147026854]
The latent diffusion model (LDM) is an effective, minimalist approach to in-context segmentation.
We build a new and fair in-context segmentation benchmark that includes both image and video datasets.
arXiv Detail & Related papers (2024-03-14T17:52:31Z)
- A Bayesian Methodology for Estimation for Sparse Canonical Correlation [0.0]
Canonical Correlation Analysis (CCA) is a statistical procedure for identifying relationships between data sets.
ScSCCA is a rapidly emerging methodological area that aims for robust modeling of the interrelations between the different data modalities.
We propose a novel ScSCCA approach where we employ a Bayesian infinite factor model and aim to achieve robust estimation.
arXiv Detail & Related papers (2023-10-30T15:14:25Z)
- A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective [24.630259061774836]
This study presents a new approach called Sufficient Multi-View Clustering (SUMVC) that examines the multi-view clustering framework from an information-theoretic standpoint.
Firstly, we develop a simple and reliable multi-view clustering method SCMVC that employs variational analysis to generate consistent information.
Secondly, we propose a sufficient representation lower bound to enhance consistent information and minimise unnecessary information among views.
arXiv Detail & Related papers (2023-09-25T09:41:11Z)
- Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured as the performance of a KNN classifier trained to predict the labels obtained by clustering the representations with K-means (a minimal sketch of this procedure is given after this list).
We find that CL correlates better with in-distribution model performance than other recent evaluation schemes.
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- Variational Interpretable Learning from Multi-view Data [2.687817337319978]
DICCA is designed to disentangle both the shared and view-specific variations for multi-view data.
Empirical results on real-world datasets show that our methods are competitive across domains.
arXiv Detail & Related papers (2022-02-28T01:56:44Z) - Semantic Correspondence with Transformers [68.37049687360705]
We propose Cost Aggregation with Transformers (CATs) to find dense correspondences between semantically similar images.
We include appearance affinity modelling to disambiguate the initial correlation maps, along with multi-level aggregation.
We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies.
arXiv Detail & Related papers (2021-06-04T14:39:03Z) - Agglomerative Neural Networks for Multi-view Clustering [109.55325971050154]
We propose an agglomerative analysis to approximate the optimal consensus view.
We present Agglomerative Neural Network (ANN) based on Constrained Laplacian Rank to cluster multi-view data directly.
Our evaluations against several state-of-the-art multi-view clustering approaches on four popular datasets show the promising view-consensus analysis ability of ANN.
arXiv Detail & Related papers (2020-05-12T05:39:10Z) - Generalized Canonical Correlation Analysis: A Subspace Intersection
Approach [30.475159163815505]
Generalized Canonical Correlation Analysis (GCCA) is an important tool that finds numerous applications in data mining, machine learning, and artificial intelligence.
This paper offers a fresh algebraic perspective of GCCA based on a (bi-linear) generative model that naturally captures its essence.
A novel GCCA algorithm is proposed based on subspace intersection, which scales up to handle large GCCA tasks.
arXiv Detail & Related papers (2020-03-25T04:04:25Z)
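As referenced in the entry on "Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods" above, here is a minimal sketch of the Cluster Learnability (CL) evaluation: cluster the learned representations with K-means, then measure how well a KNN classifier recovers those cluster labels. The number of clusters, the neighbourhood size, and the train/test split below are illustrative assumptions, not the paper's exact protocol.

```python
# Rough sketch of the Cluster Learnability (CL) evaluation: K-means labels
# on the representations, then KNN accuracy at predicting those labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def cluster_learnability(representations, n_clusters=10, n_neighbors=5, seed=0):
    """Return the KNN accuracy at predicting K-means cluster assignments."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(representations)
    x_tr, x_te, y_tr, y_te = train_test_split(
        representations, labels, test_size=0.5, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(x_tr, y_tr)
    return knn.score(x_te, y_te)

# Example with random features; a real evaluation would use learned representations.
reps = np.random.default_rng(0).normal(size=(1000, 64))
print("cluster learnability:", cluster_learnability(reps))
```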
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.