A Non-commutative Extension of Lee-Seung's Algorithm for Positive
Semidefinite Factorizations
- URL: http://arxiv.org/abs/2106.00293v1
- Date: Tue, 1 Jun 2021 07:55:09 GMT
- Title: A Non-commutative Extension of Lee-Seung's Algorithm for Positive
Semidefinite Factorizations
- Authors: Yong Sheng Soh, Antonios Varvitsiotis
- Abstract summary: We describe a non-commutative extension of Lee-Seung's algorithm, which we call the Matrix Multiplicative Update (MMU) algorithm, for computing Positive Semidefinite (PSD) factorizations.
We show that under our update scheme the squared loss objective is non-increasing and fixed points correspond to critical points.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given a matrix $X\in \mathbb{R}_+^{m\times n}$ with nonnegative entries, a
Positive Semidefinite (PSD) factorization of $X$ is a collection of $r \times
r$-dimensional PSD matrices $\{A_i\}$ and $\{B_j\}$ satisfying $X_{ij}=
\mathrm{tr}(A_i B_j)$ for all $i\in [m],\ j\in [n]$. PSD factorizations are
fundamentally linked to understanding the expressiveness of semidefinite
programs as well as the power and limitations of quantum resources in
information theory. The PSD factorization task generalizes the Non-negative
Matrix Factorization (NMF) problem where we seek a collection of
$r$-dimensional nonnegative vectors $\{a_i\}$ and $\{b_j\}$ satisfying $X_{ij}=
a_i^\top b_j$, for all $i\in [m],\ j\in [n]$ -- one can recover the latter
problem by choosing matrices in the PSD factorization to be diagonal. The most
widely used algorithm for computing NMFs of a matrix is the Multiplicative
Update algorithm developed by Lee and Seung, in which nonnegativity of the
updates is preserved by scaling with positive diagonal matrices. In this paper,
we describe a non-commutative extension of Lee-Seung's algorithm, which we call
the Matrix Multiplicative Update (MMU) algorithm, for computing PSD
factorizations. The MMU algorithm ensures that updates remain PSD by congruence
scaling with the matrix geometric mean of appropriate PSD matrices, and it
retains the simplicity of implementation that Lee-Seung's algorithm enjoys.
Building on the Majorization-Minimization framework, we show that under our
update scheme the squared loss objective is non-increasing and fixed points
correspond to critical points. The analysis relies on Lieb's Concavity Theorem.
Beyond PSD factorizations, we use the MMU algorithm as a primitive to calculate
block-diagonal PSD factorizations and tensor PSD factorizations. We demonstrate
the utility of our method with experiments on real and synthetic data.
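The commutative special case that the MMU algorithm extends is the classic Lee-Seung multiplicative update for NMF mentioned in the abstract: each factor is scaled entrywise by a ratio of nonnegative quantities, so nonnegativity is preserved automatically. A minimal NumPy sketch of those updates (function name, iteration count, and the small `eps` stabilizer are illustrative choices, not from the paper):

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-12, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - W H||_F^2.

    Nonnegativity is preserved because each update multiplies the
    current factor entrywise by a ratio of nonnegative terms; the
    squared loss is non-increasing along the iterates.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # coefficient update
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # dictionary update
    return W, H

# usage: factor a small random nonnegative matrix
rng = np.random.default_rng(1)
X = rng.random((8, 6))
W, H = nmf_multiplicative(X, r=4)
```

The MMU algorithm of the paper replaces this entrywise scaling by congruence scaling with PSD matrices, which plays the same role of keeping iterates feasible without a projection step.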
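Two structural facts the abstract relies on can be checked directly: the matrix geometric mean $A \# B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$ of PSD matrices is itself PSD (which is what makes congruence scaling with it preserve PSD-ness), and diagonal choices of $\{A_i\}, \{B_j\}$ collapse $\mathrm{tr}(A_i B_j)$ to the inner product $a_i^\top b_j$ of the NMF setting. A small sketch (helper names are illustrative; positive definite inputs are assumed so the inverse square root exists):

```python
import numpy as np

def psd_sqrt(A):
    """Symmetric PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def geometric_mean(A, B):
    """Matrix geometric mean A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.

    For positive definite A and PSD B, the result is again PSD.
    """
    Ah = psd_sqrt(A)
    Ah_inv = np.linalg.inv(Ah)
    return Ah @ psd_sqrt(Ah_inv @ B @ Ah_inv) @ Ah

# Diagonal PSD factorizations reduce to NMF:
# with A_i = diag(a_i) and B_j = diag(b_j), tr(A_i B_j) = a_i . b_j.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 4.0, 1.0])
assert np.isclose(np.trace(np.diag(a) @ np.diag(b)), a @ b)
```

This is only a check of the ingredients, not of the MMU update rule itself, whose exact form is given in the paper.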
Related papers
- Majorization-minimization for Sparse Nonnegative Matrix Factorization
with the $\beta$-divergence [2.3787352248749376]
It is well known that the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation.
Standard practice consists in constraining the columns of the dictionary to have unit norm, which leads to a nontrivial optimization problem.
We derive block-descent majorization-minimization algorithms that result in simple multiplicative updates for either $\ell_1$-regularization or the more "aggressive" log-regularization.
arXiv Detail & Related papers (2022-07-13T16:09:29Z) - Sublinear Time Approximation of Text Similarity Matrices [50.73398637380375]
We introduce a generalization of the popular Nyström method to the indefinite setting.
Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix.
We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices.
arXiv Detail & Related papers (2021-12-17T17:04:34Z) - Multiplicative updates for symmetric-cone factorizations [0.0]
We introduce and analyze the symmetric-cone multiplicative update (SCMU) algorithm for computing cone factorizations.
We show that the squared loss objective is non-increasing along the trajectories of the SCMU algorithm.
arXiv Detail & Related papers (2021-08-02T09:23:39Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-roots" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Householder Dice: A Matrix-Free Algorithm for Simulating Dynamics on
Gaussian and Random Orthogonal Ensembles [12.005731086591139]
Householder Dice (HD) is an algorithm for simulating dynamics on dense random matrix ensembles with translation-invariant properties.
The memory and computation costs of the HD algorithm are $\mathcal{O}(nT)$ and $\mathcal{O}(nT^2)$, respectively.
Numerical results demonstrate the promise of the HD algorithm as a new computational tool in the study of high-dimensional random systems.
arXiv Detail & Related papers (2021-01-19T04:50:53Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Hutch++: Optimal Stochastic Trace Estimation [75.45968495410048]
We introduce a new randomized algorithm, Hutch++, which computes a $(1 \pm \epsilon)$ approximation to $\mathrm{tr}(A)$ for any positive semidefinite (PSD) $A$.
We show that it significantly outperforms Hutchinson's method in experiments.
arXiv Detail & Related papers (2020-10-19T16:45:37Z) - Positive Semidefinite Matrix Factorization: A Connection with Phase
Retrieval and Affine Rank Minimization [71.57324258813674]
We show that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.
Motivated by this idea, we introduce a new family of PSDMF algorithms based on iterative hard thresholding (IHT).
arXiv Detail & Related papers (2020-07-24T06:10:19Z) - Provably Efficient Reinforcement Learning for Discounted MDPs with
Feature Mapping [99.59319332864129]
In this paper, we study reinforcement learning for discounted Markov Decision Processes (MDPs).
We propose a novel algorithm that makes use of the feature mapping and obtains a $\tilde{O}(d\sqrt{T}/(1-\gamma)^2)$ regret.
Our upper and lower bound results together suggest that the proposed reinforcement learning algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.
arXiv Detail & Related papers (2020-06-23T17:08:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.