On Robust Probabilistic Principal Component Analysis using Multivariate $t$-Distributions
- URL: http://arxiv.org/abs/2010.10786v2
- Date: Mon, 3 Jan 2022 00:42:02 GMT
- Title: On Robust Probabilistic Principal Component Analysis using Multivariate $t$-Distributions
- Authors: Yiping Guo and Howard D. Bondell
- Abstract summary: We present two sets of equivalent relationships between the high-level multivariate $t$-PPCA framework and the hierarchical model used for implementation.
We also propose a novel Monte Carlo expectation-maximization algorithm to implement one general type of such models.
- Score: 0.30458514384586394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic principal component analysis (PPCA) is a probabilistic
reformulation of principal component analysis (PCA), under the framework of a
Gaussian latent variable model. To improve the robustness of PPCA, it has been
proposed to change the underlying Gaussian distributions to multivariate
$t$-distributions. Based on the representation of the $t$-distribution as a
scale mixture of Gaussian distributions, a hierarchical model is used for
implementation. However, the hierarchical model implemented in the existing
literature does not yield an equivalent interpretation.
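For concreteness, the high-level construction can be written in one standard parameterization (a sketch following the usual PPCA and scale-mixture conventions; the paper's exact hierarchy and notation may differ):
$$\mathbf{x} = \mathbf{W}\mathbf{z} + \boldsymbol{\mu} + \boldsymbol{\epsilon}, \qquad \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_q), \qquad \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_d),$$
so that marginally $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{C})$ with $\mathbf{C} = \mathbf{W}\mathbf{W}^\top + \sigma^2 \mathbf{I}_d$. The robust version replaces the Gaussian by a multivariate $t$ via the scale mixture
$$\mathbf{x} \mid u \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{C}/u), \qquad u \sim \mathrm{Gamma}(\nu/2, \nu/2) \;\Longrightarrow\; \mathbf{x} \sim t_\nu(\boldsymbol{\mu}, \mathbf{C}).$$
Where the latent scale $u$ enters the hierarchy (for instance, whether $\mathbf{z}$ and $\boldsymbol{\epsilon}$ share the same $u$) determines which joint model is actually being fit, which is the source of the non-equivalence discussed above.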
In this paper, we present two sets of equivalent relationships between the
high-level multivariate $t$-PPCA framework and the hierarchical model used for
implementation. In doing so, we clarify a current misrepresentation in the
literature, by specifying the correct correspondence. In addition, we discuss
the performance of different multivariate $t$ robust PPCA methods both in
theory and simulation studies, and propose a novel Monte Carlo
expectation-maximization (MCEM) algorithm to implement one general type of such
models.
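As a complement, here is a minimal, self-contained sketch of what one Monte Carlo EM iteration for $t$-PPCA can look like under the shared-scale hierarchy written above. The function name, defaults, and update scheme are illustrative assumptions, not the authors' algorithm: for this particular hierarchy the conditional of $u$ is a Gamma distribution in closed form, so the sampling step merely illustrates the MCEM mechanics that become necessary for variants without a tractable conditional.

```python
# Hypothetical MCEM-style sketch for t-PPCA (illustration only; not the
# paper's exact algorithm). Monte Carlo E-step over the latent scales u_i,
# closed-form PPCA M-step on the weighted sample moments.
import numpy as np

def mcem_t_ppca(X, q, nu=5.0, n_iter=50, n_mc=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(d)  # initial marginal covariance
    for _ in range(n_iter):
        # E-step (Monte Carlo): under the shared-scale model,
        # u_i | x_i ~ Gamma((nu + d)/2, (nu + delta_i)/2), with delta_i the
        # squared Mahalanobis distance of x_i under the current (mu, C).
        diff = X - mu
        delta = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(C), diff)
        u = rng.gamma(shape=(nu + d) / 2.0, scale=2.0 / (nu + delta),
                      size=(n_mc, n))
        w = u.mean(axis=0)  # Monte Carlo estimate of E[u_i | x_i]
        # M-step: weighted mean and scatter, then the closed-form PPCA update
        # (Tipping-Bishop): W from the top-q eigenpairs, sigma^2 from the rest.
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu
        S = (w[:, None] * diff).T @ diff / n
        evals, evecs = np.linalg.eigh(S)
        evals, evecs = evals[::-1], evecs[:, ::-1]  # descending order
        sigma2 = evals[q:].mean()                   # noise variance estimate
        W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
        C = W @ W.T + sigma2 * np.eye(d)
    return W, mu, sigma2
```

Since $E[u \mid \mathbf{x}] = (\nu + d)/(\nu + \delta(\mathbf{x}))$ is available analytically here, an exact EM would simply plug that expectation in for `w`; the point of MCEM, as proposed in the paper for a more general class of such models, is to retain the same M-step when the conditional is no longer tractable.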
Related papers
- Variational Autoencoders for Efficient Simulation-Based Inference [0.3495246564946556]
We present a generative modeling approach based on the variational inference framework for likelihood-free simulation-based inference.
We demonstrate the efficacy of these models on well-established benchmark problems, achieving results comparable to flow-based approaches.
arXiv Detail & Related papers (2024-11-21T12:24:13Z)
- An Interpretable Evaluation of Entropy-based Novelty of Generative Models [36.29214321258605]
We propose a Kernel-based Entropic Novelty (KEN) score to quantify the mode-based novelty of generative models.
We present numerical results on synthetic and real image datasets, indicating the framework's effectiveness in detecting novel modes.
arXiv Detail & Related papers (2024-02-27T08:00:52Z)
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
- A multilevel reinforcement learning framework for PDE based control [0.2538209532048867]
Reinforcement learning (RL) is a promising method to solve control problems.
Model-free RL algorithms are sample inefficient and require thousands if not millions of samples to learn optimal control policies.
We propose a multilevel RL framework in order to ease this cost by exploiting sublevel models that correspond to coarser scale discretization.
arXiv Detail & Related papers (2022-10-15T23:52:48Z)
- A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
arXiv Detail & Related papers (2021-12-07T01:23:20Z)
- Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions [79.35722941720734]
Generalised linear models for multi-class classification problems are one of the fundamental building blocks of modern machine learning tasks.
We prove exact asymptotics characterising the estimator obtained via empirical risk minimisation in high dimensions.
We discuss how our theory can be applied beyond the scope of synthetic data.
arXiv Detail & Related papers (2021-06-07T16:53:56Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Variational Filtering with Copula Models for SLAM [5.242618356321224]
We show how it is possible to perform simultaneous localization and mapping (SLAM) with a larger class of distributions.
We integrate the distribution model with copulas into a Sequential Monte Carlo estimator and show how unknown model parameters can be learned through gradient-based optimization.
arXiv Detail & Related papers (2020-08-02T15:38:23Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)