Multi-view Orthonormalized Partial Least Squares: Regularizations and
Deep Extensions
- URL: http://arxiv.org/abs/2007.05028v1
- Date: Thu, 9 Jul 2020 19:00:39 GMT
- Title: Multi-view Orthonormalized Partial Least Squares: Regularizations and
Deep Extensions
- Authors: Li Wang and Ren-Cang Li and Wen-Wei
- Abstract summary: We establish a family of subspace-based learning methods for multi-view learning using least squares as the fundamental basis.
We propose a unified multi-view learning framework to learn a classifier over a common latent space shared by all views.
- Score: 8.846165479467324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We establish a family of subspace-based learning methods for
multi-view learning using least squares as the fundamental basis. Specifically, we
investigate orthonormalized partial least squares (OPLS) and study its
important properties for both multivariate regression and classification.
Building on the least squares reformulation of OPLS, we propose a unified
multi-view learning framework to learn a classifier over a common latent space
shared by all views. Regularization is further leveraged to unleash the power
of the proposed framework through three generic types of regularizers on its
inherent ingredients: model parameters, decision values, and latent projected
points. We instantiate a set of regularizers in terms of various priors. With
proper choices of regularizers, the proposed framework can not only recast
existing methods but also inspire new models.
To further improve the performance of the proposed framework on complex real
problems, we propose to learn nonlinear transformations parameterized by deep
networks. Extensive experiments are conducted to compare various methods on
nine data sets with different numbers of views in terms of both feature
extraction and cross-modal retrieval.
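The shared-latent-space idea in the abstract can be sketched with alternating least squares: per-view projections map each view into a common latent space, and a shared classifier acts on the latent points. This is a minimal illustrative sketch on made-up toy data, with a ridge regularizer on the model parameters only; it is not the authors' exact algorithm, which also enforces orthonormality and offers regularizers on decision values and latent projected points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-view data: n samples, two views with different feature
# dimensions, and one-hot class labels Y (illustrative stand-in only).
n, dims, c, k = 200, (20, 15), 3, 5
X = [rng.standard_normal((n, d)) for d in dims]
labels = rng.integers(0, c, n)
Y = np.eye(c)[labels]

def objective(W, Z, B, lam):
    # View agreement + classification fit + ridge penalty on parameters.
    fit = sum(np.sum((Xv @ Wv - Z) ** 2) for Xv, Wv in zip(X, W))
    cls = np.sum((Y - Z @ B) ** 2)
    reg = lam * (sum(np.sum(Wv ** 2) for Wv in W) + np.sum(B ** 2))
    return fit + cls + reg

lam = 1e-2
W = [rng.standard_normal((d, k)) * 0.1 for d in dims]   # per-view projections
Z = rng.standard_normal((n, k)) * 0.1                   # shared latent points
B = rng.standard_normal((k, c)) * 0.1                   # shared classifier

losses = [objective(W, Z, B, lam)]
for _ in range(30):
    # W_v-step: ridge regression of the latent points Z on each view X_v.
    W = [np.linalg.solve(Xv.T @ Xv + lam * np.eye(Xv.shape[1]), Xv.T @ Z)
         for Xv in X]
    # B-step: ridge regression of the labels Y on the latent points Z.
    B = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ Y)
    # Z-step: closed form balancing view agreement and classification fit.
    V = len(X)
    Z = (sum(Xv @ Wv for Xv, Wv in zip(X, W)) + Y @ B.T) @ \
        np.linalg.inv(V * np.eye(k) + B @ B.T)
    losses.append(objective(W, Z, B, lam))
```

Because every step exactly minimizes the objective over one block of variables, the loss is non-increasing across iterations; swapping the ridge penalty for other priors corresponds to the different regularizer choices the paper discusses.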
Related papers
- From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication [19.336940758147442]
It has been observed that representations learned by distinct neural networks conceal structural similarities when the models are trained under similar inductive biases.
We introduce a versatile method to directly incorporate a set of invariances into the representations, constructing a product space of invariant components on top of the latent representations.
We validate our solution on classification and reconstruction tasks, observing consistent latent similarity and downstream performance improvements in a zero-shot stitching setting.
arXiv Detail & Related papers (2023-10-02T13:55:38Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Differentiable Random Partition Models [15.51229558339278]
We propose a novel two-step method for inferring partitions, which allows its usage in variational inference tasks.
Our method works by first inferring the number of elements per subset and, second, filling these subsets in a learned order.
We highlight the versatility of our general-purpose approach on three different challenging experiments.
arXiv Detail & Related papers (2023-05-26T11:45:10Z)
- Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z)
- Supervised Multivariate Learning with Simultaneous Feature Auto-grouping and Dimension Reduction [7.093830786026851]
This paper proposes a novel clustered reduced-rank learning framework.
It imposes two joint matrix regularizations to automatically group the features in constructing predictive factors.
It is more interpretable than low-rank modeling and relaxes the stringent sparsity assumption in variable selection.
arXiv Detail & Related papers (2021-12-17T20:11:20Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Orthogonal Multi-view Analysis by Successive Approximations via Eigenvectors [7.870955752916424]
The framework integrates the correlations within multiple views, supervised discriminant capacity, and distance preservation.
It not only includes several existing models as special cases, but also inspires new models.
Experiments are conducted on various real-world datasets for multi-view discriminant analysis and multi-view multi-label classification.
arXiv Detail & Related papers (2020-10-04T17:16:15Z)
- Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z)
- Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning [70.67092105994598]
We propose a novel multi-view learning framework to improve multi-view classification in both of these aspects.
In particular, we train different deep neural networks to learn various intra-view representations.
Experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-13T01:13:23Z)
- Learning to Select Base Classes for Few-shot Classification [96.92372639495551]
We use the Similarity Ratio as an indicator for the generalization performance of a few-shot model.
We then formulate the base class selection problem as a submodular optimization problem over Similarity Ratio.
arXiv Detail & Related papers (2020-04-01T09:55:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.