Overcomplete order-3 tensor decomposition, blind deconvolution and
Gaussian mixture models
- URL: http://arxiv.org/abs/2007.08133v2
- Date: Sat, 20 Feb 2021 00:58:35 GMT
- Title: Overcomplete order-3 tensor decomposition, blind deconvolution and
Gaussian mixture models
- Authors: Haolin Chen, Luis Rademacher
- Abstract summary: We propose a new algorithm for tensor decomposition, based on Jennrich's algorithm, and apply our new algorithmic ideas to blind deconvolution and Gaussian mixture models.
Our first contribution is a simple and efficient algorithm to decompose certain symmetric overcomplete order-3 tensors, that is, three-dimensional arrays of the form $T = \sum_{i=1}^n a_i \otimes a_i \otimes a_i$ where the $a_i$s are not linearly independent.
Our second contribution builds on top of our tensor decomposition algorithm to expand the family of Gaussian mixture models whose parameters can be estimated efficiently.
- Score: 1.7970523486905976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new algorithm for tensor decomposition, based on Jennrich's
algorithm, and apply our new algorithmic ideas to blind deconvolution and
Gaussian mixture models. Our first contribution is a simple and efficient
algorithm to decompose certain symmetric overcomplete order-3 tensors, that is,
three dimensional arrays of the form $T = \sum_{i=1}^n a_i \otimes a_i \otimes
a_i$ where the $a_i$s are not linearly independent. Our algorithm comes with a
detailed robustness analysis. Our second contribution builds on top of our
tensor decomposition algorithm to expand the family of Gaussian mixture models
whose parameters can be estimated efficiently. These ideas are also presented
in a more general framework of blind deconvolution that makes them applicable
to mixture models of identical but very general distributions, including all
centrally symmetric distributions with finite 6th moment.
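For intuition, the sketch below implements the classical Jennrich simultaneous-diagonalization step in the simpler undercomplete setting, where the $a_i$ are linearly independent; the paper's contribution is to extend such ideas to the overcomplete regime with a robustness analysis, so this is background, not the authors' algorithm. The function name and the least-squares rescaling step are illustrative.

```python
import numpy as np

def jennrich_symmetric(T, n, seed=None):
    """Decompose T = sum_i a_i (x) a_i (x) a_i for linearly independent a_i
    by simultaneous diagonalization of two random contractions (classical
    Jennrich's algorithm; background only, not the paper's method)."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    Tx = np.einsum('jkl,l->jk', T, x)   # Tx = A diag(<a_i, x>) A^T
    Ty = np.einsum('jkl,l->jk', T, y)   # Ty = A diag(<a_i, y>) A^T
    # Tx @ pinv(Ty) = A diag(<a_i,x>/<a_i,y>) A^+, so its eigenvectors
    # with nonzero eigenvalues are the directions a_i / ||a_i||.
    eigvals, eigvecs = np.linalg.eig(Tx @ np.linalg.pinv(Ty))
    idx = np.argsort(-np.abs(eigvals))[:n]
    U = np.real(eigvecs[:, idx])
    # Recover scales c_i with T ~ sum_i c_i u_i^(x)3 by least squares.
    B = np.stack([np.einsum('j,k,l->jkl', u, u, u).ravel() for u in U.T],
                 axis=1)
    c, *_ = np.linalg.lstsq(B, T.ravel(), rcond=None)
    return U * np.cbrt(c)   # columns estimate the a_i, up to permutation

# Sanity check: recover a random rank-3 symmetric tensor in R^5.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
T = np.einsum('ji,ki,li->jkl', A, A, A)
A_hat = jennrich_symmetric(T, n=3, seed=1)
```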
Related papers
- Learning general Gaussian mixtures with efficient score matching [16.06356123715737]
We study the problem of learning mixtures of $k$ Gaussians in $d$ dimensions.
We make no separation assumptions on the underlying mixture components.
We give an algorithm that draws $d^{\mathrm{poly}(k/\varepsilon)}$ samples from the target mixture, runs in time polynomial in the number of samples, and constructs a sampler.
arXiv Detail & Related papers (2024-04-29T17:30:36Z) - Learning Mixtures of Gaussians Using Diffusion Models [9.118706387430883]
We give a new algorithm for learning mixtures of $k$ Gaussians to TV error.
Our approach is analytic and relies on the framework of diffusion models.
arXiv Detail & Related papers (2024-04-29T17:00:20Z) - ADMM-MM Algorithm for General Tensor Decomposition [7.0326155922512275]
The proposed algorithm supports three basic loss functions ($\ell_2$-loss, $\ell_1$-loss and KL divergence) and various low-rank tensor decomposition models (CP, Tucker, TT, and TR decompositions).
We show that a wide range of applications can be solved by the proposed algorithm, and that it can be easily extended to any established tensor decomposition model in a plug-and-play manner.
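For readers who want a concrete picture of the simplest model above, here is a minimal CP decomposition fitted by plain alternating least squares under the $\ell_2$-loss; this is the textbook baseline, not the paper's ADMM-MM algorithm, and all names are illustrative.

```python
import numpy as np

def cp_als(T, rank, iters=200, seed=0):
    """Fit T ~ sum_r A[:,r] (x) B[:,r] (x) C[:,r] by alternating least
    squares (plain l2-loss CP; illustrative baseline, not ADMM-MM)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    # Khatri-Rao (columnwise Kronecker) product of two factor matrices.
    kr = lambda X, Y: (X[:, None, :] * Y[None, :, :]).reshape(-1, rank)
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(kr(B, C).T)
        B = T2 @ np.linalg.pinv(kr(A, C).T)
        C = T3 @ np.linalg.pinv(kr(A, B).T)
    return A, B, C
```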
arXiv Detail & Related papers (2023-12-19T00:17:34Z) - Private estimation algorithms for stochastic block models and mixture
models [63.07482515700984]
General tools for designing efficient private estimation algorithms.
First efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Scalable First-Order Bayesian Optimization via Structured Automatic
Differentiation [4.061135251278187]
We show that a wide range of kernels gives rise to structured matrices, enabling an exact $\mathcal{O}(n^2 d)$ matrix-vector multiply for gradient observations and $\mathcal{O}(n^2 d^2)$ for Hessian observations.
Our methods apply to virtually all canonical kernels and automatically extend to complex kernels, like the neural network, radial basis function network, and spectral mixture kernels.
arXiv Detail & Related papers (2022-06-16T17:59:48Z) - Gaussian mixture model on nodes of Bayesian network given maximal
parental cliques [0.0]
We explain why and how we use Gaussian mixture models in Bayesian networks.
We propose a new method, called double iteration algorithm, to optimize the mixture model.
arXiv Detail & Related papers (2022-04-20T15:14:01Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
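For context on the eigenspace subproblem, the sketch below is the generic orthogonal (block power) iteration, whose QR step keeps the iterate on the Stiefel manifold; the paper's contribution is a sparsity-promoting variant, which this does not implement.

```python
import numpy as np

def orthogonal_iteration(M, k, iters=100, seed=0):
    """Block power method for the dominant k-dimensional eigenspace of a
    symmetric matrix M. The QR step re-orthonormalizes, so X stays on the
    Stiefel manifold {X : X^T X = I_k}. Generic baseline only."""
    rng = np.random.default_rng(seed)
    X = np.linalg.qr(rng.standard_normal((M.shape[0], k)))[0]
    for _ in range(iters):
        X, _ = np.linalg.qr(M @ X)   # multiply, then retract onto Stiefel
    return X                         # columns span the dominant eigenspace
```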
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Spectral clustering under degree heterogeneity: a case for the random
walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
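A minimal version of that embedding, assuming a dense adjacency matrix, uses the top eigenvectors of the random walk matrix $D^{-1}A$ (equivalently, the bottom eigenvectors of the random walk Laplacian $I - D^{-1}A$); the function name is illustrative, and sparse eigensolvers would be used in practice.

```python
import numpy as np

def random_walk_embedding(A, K):
    """Spectral embedding from the random walk matrix P = D^{-1} A.
    Rows of the returned matrix are K-dimensional node representations,
    which (per the paper's message) are corrected for node degree."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]                 # random walk transition matrix
    eigvals, eigvecs = np.linalg.eig(P)  # P is not symmetric in general
    idx = np.argsort(-np.abs(eigvals))[:K]
    return np.real(eigvecs[:, idx])      # one K-dim vector per node
```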
arXiv Detail & Related papers (2021-05-03T16:36:27Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning, including deep, convolutional and recurrent neural networks, reinforcement learning, normalizing flows and metric learning.
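As one concrete example of such geometry-respecting updates, the sketch below takes a gradient step on $O(d)$ via the standard Cayley retraction of a skew-symmetric direction; this is a generic primitive offered as background, not the paper's specific flow-based method.

```python
import numpy as np

def cayley_step(W, G, lr=0.1):
    """One optimization step on the orthogonal group O(d). Projects the
    Euclidean gradient G to a skew-symmetric direction A, then applies the
    Cayley transform, which maps skew-symmetric matrices to orthogonal
    ones, so W stays exactly on O(d)."""
    A = G @ W.T - W @ G.T                # skew-symmetric: A^T = -A
    I = np.eye(W.shape[0])
    Q = np.linalg.solve(I + (lr / 2) * A, I - (lr / 2) * A)  # Cayley map
    return Q @ W
```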
arXiv Detail & Related papers (2020-03-30T15:37:50Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
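The core primitive behind the approach is the multiplicative weight update; a generic Hedge loop is sketched below for reference, with the loss construction left abstract, so this is not the paper's full graphical-model learning algorithm.

```python
import numpy as np

def hedge(losses, eta=0.1):
    """Generic multiplicative weights (Hedge) over p experts.
    `losses` is a (T, p) array of per-round losses in [0, 1]; returns the
    sequence of weight vectors. Illustrative primitive only."""
    T, p = losses.shape
    w = np.ones(p) / p
    history = []
    for t in range(T):
        history.append(w.copy())
        w = w * np.exp(-eta * losses[t])   # downweight lossy experts
        w /= w.sum()                       # renormalize to a distribution
    return np.array(history)
```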
arXiv Detail & Related papers (2020-02-20T10:50:58Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard
Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
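To make the transform concrete, here is a minimal subsampled randomized Hadamard sketch used in plain sketch-and-solve form; the paper above studies iterative schemes built on the same primitive. It assumes the row count is a power of two, and the dense Hadamard matrix stands in for a fast $O(n \log n)$ Walsh-Hadamard transform; all names are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, m, seed=0):
    """Subsampled randomized Hadamard transform: sqrt(n/m) * P H D A, with
    D a random sign flip, H the orthonormal Hadamard matrix, and P a
    uniform row subsample. Assumes n is a power of two."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    D = rng.choice([-1.0, 1.0], size=n)          # random signs
    H = hadamard(n) / np.sqrt(n)                 # orthonormal Hadamard
    rows = rng.choice(n, size=m, replace=False)  # uniform subsampling
    return np.sqrt(n / m) * (H @ (D[:, None] * A))[rows]

# Sketch-and-solve for least squares min ||Ax - b||; the same seed makes
# srht apply the identical sketch to A and b.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((1024, 20)), rng.standard_normal(1024)
SA = srht(A, m=128, seed=2)
Sb = srht(b[:, None], m=128, seed=2)[:, 0]
x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
```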
This list is automatically generated from the titles and abstracts of the papers in this site.