A Bayesian Perspective for Determinant Minimization Based Robust Structured Matrix Factorization
- URL: http://arxiv.org/abs/2302.08416v1
- Date: Thu, 16 Feb 2023 16:48:41 GMT
- Title: A Bayesian Perspective for Determinant Minimization Based Robust Structured Matrix Factorization
- Authors: Gokcan Tatli and Alper T. Erdogan
- Abstract summary: We introduce a Bayesian perspective for the structured matrix factorization problem.
We show that the corresponding maximum a posteriori estimation problem boils down to the robust determinant minimization approach for structured matrix factorization.
- Score: 10.355894890759377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a Bayesian perspective for the structured matrix factorization
problem. The proposed framework provides a probabilistic interpretation for
existing geometric methods based on determinant minimization. We model input
data vectors as linear transformations of latent vectors drawn from a
distribution uniform over a particular domain reflecting structural
assumptions, such as the probability simplex in Nonnegative Matrix
Factorization and polytopes in Polytopic Matrix Factorization. We represent the
rows of the linear transformation matrix as vectors generated independently
from a normal distribution whose covariance matrix is inverse Wishart
distributed. We show that the corresponding maximum a posteriori estimation
problem boils down to the robust determinant minimization approach for
structured matrix factorization, providing insights about parameter selections
and potential algorithmic extensions.
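As a concrete illustration, here is a minimal sketch (not the authors' code) of the generative model the abstract describes, specialized to the NMF case where latent vectors are uniform over the probability simplex. The dimensions, the inverse-Wishart hyperparameters (nu, Psi), and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
M, r, N = 10, 3, 500  # ambient dim., latent dim., sample count (assumed)

# Covariance of the mixing-matrix rows: Sigma ~ InverseWishart(Psi, nu).
nu, Psi = r + 2, np.eye(r)  # assumed hyperparameters
Sigma = invwishart(df=nu, scale=Psi).rvs(random_state=0)

# Rows of the linear transformation H drawn i.i.d. from N(0, Sigma).
H = rng.multivariate_normal(np.zeros(r), Sigma, size=M)  # M x r

# Latent vectors uniform over the probability simplex:
# Dirichlet(1, ..., 1) is exactly the uniform distribution on the simplex.
S = rng.dirichlet(np.ones(r), size=N).T  # r x N

X = H @ S  # observed data matrix, M x N
```

The connection to determinant minimization can be seen by integrating the inverse-Wishart prior out of the Gaussian prior on the rows of H; the expression below follows from standard Gaussian/inverse-Wishart conjugacy and is my reconstruction, not a formula quoted from the paper:

\[
p(H) = \int \prod_{i=1}^{M} \mathcal{N}(h_i \mid 0, \Sigma)\, \mathrm{IW}(\Sigma \mid \Psi, \nu)\, d\Sigma \;\propto\; \det\!\big(\Psi + H^\top H\big)^{-(\nu + M)/2},
\]

so the negative log-prior contributes a term proportional to \(\log\det(\Psi + H^\top H)\): letting \(\Psi \to 0\) recovers plain determinant minimization, while \(\Psi \succ 0\) yields a regularized ("robust") variant, consistent with the abstract's remark about parameter selection.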
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Fitting Multilevel Factor Models [41.38783926370621]
We develop a novel, fast implementation of the expectation-maximization algorithm, tailored for multilevel factor models.
We show that the inverse of an invertible PSD MLR matrix is also an MLR matrix with the same sparsity in factors.
We present an algorithm that computes the Cholesky factorization of an expanded matrix with linear time and space complexities.
arXiv Detail & Related papers (2024-09-18T15:39:12Z)
- Accelerated structured matrix factorization [0.0]
Matrix factorization exploits the idea that, in complex high-dimensional data, the actual signal typically lies in lower-dimensional structures.
By exploiting Bayesian shrinkage priors, we devise a computationally convenient approach for high-dimensional matrix factorization.
The dependence between row and column entities is modeled by inducing flexible sparse patterns within factors.
arXiv Detail & Related papers (2022-12-13T11:35:01Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- On confidence intervals for precision matrices and the eigendecomposition of covariance matrices [20.20416580970697]
This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension.
We derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix.
As an application of these results, we demonstrate a new statistical test that allows us to test for non-zero entries of the precision matrix.
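For context on why such a test is useful (a standard fact about Gaussian graphical models, not specific to this paper): the zeros of the precision matrix encode conditional independences,

\[
\Theta = \Sigma^{-1}, \qquad \Theta_{ij} = 0 \;\Longleftrightarrow\; X_i \perp\!\!\!\perp X_j \mid X_{\setminus \{i,j\}},
\]

so testing whether an entry of \(\Theta\) is non-zero amounts to testing for an edge in the underlying graph.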
arXiv Detail & Related papers (2022-08-25T10:12:53Z)
- Polytopic Matrix Factorization: Determinant Maximization Based Criterion and Identifiability [10.355894890759377]
We introduce Polytopic Matrix Factorization (PMF) as a novel data decomposition approach.
The choice of polytope reflects the presumed features of the latent components and their mutual relationships.
Having infinitely many polytope choices provides a form of flexibility in characterizing latent vectors.
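A schematic form of the determinant-maximization criterion named in the title, with all symbols assumed rather than quoted from the paper: given data \(X\), one seeks

\[
\max_{H,\,S}\ \det\!\big(S S^\top\big) \quad \text{s.t.} \quad X = H S, \quad S_{:,j} \in \mathcal{P} \ \ \forall j,
\]

where \(\mathcal{P}\) is the chosen polytope; maximizing the determinant of the latent scatter drives the latent vectors to spread over \(\mathcal{P}\), which is what the identifiability analysis builds on.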
arXiv Detail & Related papers (2022-02-19T16:49:24Z)
- Identifiability in Exact Two-Layer Sparse Matrix Factorization [0.0]
Sparse matrix factorization is the problem of approximating a matrix Z by a product of L sparse factors X^(L) X^(L-1) ... X^(1).
This paper focuses on identifiability issues that appear in this problem, in view of better understanding under which sparsity constraints the problem is well-posed.
arXiv Detail & Related papers (2021-10-04T07:56:37Z)
- Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
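The formulation described above suggests a min-max objective of roughly the following shape (a hedged paraphrase; the norm, radius \(\epsilon\), and loss are assumptions, not the paper's exact statement):

\[
\min_{W \ge 0,\ H \ge 0}\ \max_{\|R\| \le \epsilon}\ \big\| (X + R) - W H \big\|_F^2,
\]

where \(R\) is the attacker's bounded-norm perturbation of the data matrix \(X\).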
arXiv Detail & Related papers (2021-04-10T13:13:17Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
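One common shape for such an estimator (schematic only; the data-fit term \(\ell\) and penalty \(\Omega\) are assumptions, not the paper's exact formulation) fits one Laplacian \(L_k\) per class of signals while fusing them:

\[
\min_{\{L_k\}}\ \sum_{k} \ell\big(L_k;\, \hat{\Sigma}_k\big) \;+\; \lambda \sum_{k < l} \Omega\big(L_k - L_l\big),
\]

where \(\ell\) measures fit to the \(k\)-th empirical covariance \(\hat{\Sigma}_k\) and \(\Omega\) is a structured fusion penalty encouraging shared topology across the graphs.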
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.