Adversarial Canonical Correlation Analysis
- URL: http://arxiv.org/abs/2005.10349v2
- Date: Tue, 9 Jun 2020 21:31:21 GMT
- Title: Adversarial Canonical Correlation Analysis
- Authors: Benjamin Dutton
- Abstract summary: Canonical Correlation Analysis (CCA) is a technique used to extract common information from multiple data sources or views.
Recent work has given CCA probabilistic footing in a deep learning context.
Alternatively, adversarial techniques have arisen as a powerful alternative to variational Bayesian methods in autoencoders.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Canonical Correlation Analysis (CCA) is a statistical technique used to
extract common information from multiple data sources or views. It has been
used in various representation learning problems, such as dimensionality
reduction, word embedding, and clustering. Recent work has given CCA
probabilistic footing in a deep learning context and uses a variational lower
bound for the data log likelihood to estimate model parameters. Alternatively,
adversarial techniques have arisen in recent years as a powerful alternative to
variational Bayesian methods in autoencoders. In this work, we explore
straightforward adversarial alternatives to recent work in Deep Variational CCA
(VCCA and VCCA-Private) we call ACCA and ACCA-Private and show how these
approaches offer a stronger and more flexible way to match the approximate
posteriors coming from encoders to much larger classes of priors than the VCCA
and VCCA-Private models. This allows new priors for what constitutes a good
representation, such as disentangling underlying factors of variation, to be
more directly pursued. We offer further analysis on the multi-level
disentangling properties of VCCA-Private and ACCA-Private through the use of a
newly designed dataset we call Tangled MNIST. We also design a validation
criterion for these models that is theoretically grounded, task-agnostic, and
works well in practice. Lastly, we fill a minor research gap by deriving an
additional variational lower bound for VCCA that allows the representation to
use view-specific information from both input views.
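As background for the deep variants discussed above, classical linear CCA can be sketched in a few lines of numpy: whiten each view's covariance, then take the SVD of the whitened cross-covariance, whose singular values are the canonical correlations. This is a generic illustration of CCA itself, not the paper's ACCA method; the function name `linear_cca` and the ridge parameter `reg` are illustrative choices, not from the paper.

```python
import numpy as np

def linear_cca(X, Y, k=1, reg=1e-6):
    """Classical linear CCA via a whitened cross-covariance SVD.

    X: (n, dx) and Y: (n, dy) paired observations of the two views.
    Returns the top-k canonical correlations and projections Wx, Wy.
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Covariance blocks, with a small ridge so the Cholesky factorization
    # stays well-posed even when a view has more features than samples.
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each view through its Cholesky factor, then SVD the
    # whitened cross-covariance; singular values are the canonical
    # correlations.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    T = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(T)
    Wx = np.linalg.solve(Lx.T, U[:, :k])   # maps X into canonical space
    Wy = np.linalg.solve(Ly.T, Vt[:k].T)   # maps Y into canonical space
    return s[:k], Wx, Wy
```

Deep CCA variants such as VCCA replace the linear projections `Wx`, `Wy` with neural encoders; the adversarial variants in this paper further replace the variational KL term with a discriminator that matches the encoder's aggregate posterior to the chosen prior.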
Related papers
- Preventing Model Collapse in Deep Canonical Correlation Analysis by Noise Regularization [10.216526425377149]
Multi-View Representation Learning (MVRL) aims to learn a unified representation of an object from multi-view data.
Deep Canonical Correlation Analysis (DCCA) and its variants share simple formulations and demonstrate state-of-the-art performance.
We observe the issue of model collapse, i.e., the performance of DCCA-based methods drops drastically as training proceeds.
We develop NR-DCCA, which is equipped with a novel noise regularization approach to prevent model collapse.
arXiv Detail & Related papers (2024-11-01T06:02:30Z)
- Learning Feature Inversion for Multi-class Anomaly Detection under General-purpose COCO-AD Benchmark [101.23684938489413]
Anomaly detection (AD) is often focused on detecting anomalies for industrial quality inspection and medical lesion examination.
This work first constructs a large-scale and general-purpose COCO-AD dataset by extending COCO to the AD field.
Inspired by the metrics in the segmentation field, we propose several more practical threshold-dependent AD-specific metrics.
arXiv Detail & Related papers (2024-04-16T17:38:26Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes the decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Variational Interpretable Learning from Multi-view Data [2.687817337319978]
DICCA is designed to disentangle both the shared and view-specific variations for multi-view data.
Empirical results on real-world datasets show that our methods are competitive across domains.
arXiv Detail & Related papers (2022-02-28T01:56:44Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Context Decoupling Augmentation for Weakly Supervised Semantic Segmentation [53.49821324597837]
Weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years.
We present a Context Decoupling Augmentation (CDA) method to change the inherent context in which the objects appear.
To validate the effectiveness of the proposed method, extensive experiments on PASCAL VOC 2012 dataset with several alternative network architectures demonstrate that CDA can boost various popular WSSS methods to the new state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-03-02T15:05:09Z)
- $\ell_0$-based Sparse Canonical Correlation Analysis [7.073210405344709]
Canonical Correlation Analysis (CCA) models are powerful for studying the associations between two sets of variables.
Despite their success, CCA models may break if the number of variables in either of the modalities exceeds the number of samples.
Here, we propose $\ell_0$-CCA, a method for learning correlated representations based on sparse subsets of two observed modalities.
arXiv Detail & Related papers (2020-10-12T11:44:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.