PCA Reduced Gaussian Mixture Models with Applications in Superresolution
- URL: http://arxiv.org/abs/2009.07520v3
- Date: Thu, 6 May 2021 11:40:57 GMT
- Title: PCA Reduced Gaussian Mixture Models with Applications in Superresolution
- Authors: Johannes Hertrich, Dang-Phuong-Lan Nguyen, Jean-François Aujol,
Dominique Bernard, Yannick Berthoumieu, Abdellatif Saadaldin, Gabriele Steidl
- Abstract summary: This paper provides a twofold contribution to the topic.
First, we propose a Gaussian Mixture Model in conjunction with a reduction of the dimensionality of the data in each component of the model.
Second, we apply our PCA-GMM for the superresolution of 2D and 3D material images based on the approach of Sandeep and Jacob.
- Score: 1.885698488789676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the rapid development of computational hardware, the treatment of
large and high dimensional data sets is still a challenging problem. This paper
provides a twofold contribution to the topic. First, we propose a Gaussian
Mixture Model in conjunction with a reduction of the dimensionality of the data
in each component of the model by principal component analysis, called PCA-GMM.
To learn the (low dimensional) parameters of the mixture model we propose an EM
algorithm whose M-step requires the solution of constrained optimization
problems. Fortunately, these constrained problems do not depend on the usually
large number of samples and can be solved efficiently by an (inertial) proximal
alternating linearized minimization algorithm. Second, we apply our PCA-GMM for
the superresolution of 2D and 3D material images based on the approach of
Sandeep and Jacob. Numerical results confirm the moderate influence of the
dimensionality reduction on the overall superresolution result.
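The abstract describes an EM algorithm in which each mixture component carries its own low-dimensional PCA subspace. The paper's actual M-step solves constrained optimization problems with an (inertial) PALM algorithm; the minimal sketch below instead reduces each component's covariance by a simple PPCA-style eigenvalue truncation (keep the top-d eigenpairs, spread the residual variance isotropically). It illustrates the idea of per-component dimensionality reduction inside EM, not the authors' method; all function names are illustrative.

```python
import numpy as np

def mvn_logpdf(X, mu, cov):
    """Log-density of each row of X under N(mu, cov)."""
    D = len(mu)
    L = np.linalg.cholesky(cov)
    z = np.linalg.solve(L, (X - mu).T)
    return -0.5 * (D * np.log(2 * np.pi)
                   + 2 * np.log(np.diag(L)).sum()
                   + (z ** 2).sum(axis=0))

def low_rank_cov(cov, d, eps=1e-6):
    """PPCA-style reduction: keep top-d eigenpairs, isotropic residual."""
    w, V = np.linalg.eigh(cov)
    order = np.argsort(w)[::-1]
    top, rest = order[:d], order[d:]
    sigma2 = max(w[rest].mean(), eps) if rest.size else eps
    return V[:, top] @ np.diag(w[top] - sigma2) @ V[:, top].T \
        + sigma2 * np.eye(len(w))

def em_pca_gmm(X, K=2, d=1, n_iter=30):
    n, D = X.shape
    # deterministic init: spread initial means along the all-ones direction
    idx = np.argsort(X @ np.ones(D))[np.linspace(0, n - 1, K).astype(int)]
    mu = X[idx].astype(float)
    cov = np.stack([np.cov(X.T) + 1e-3 * np.eye(D)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities under the current Gaussians
        log_r = np.stack([np.log(pi[k]) + mvn_logpdf(X, mu[k], cov[k])
                          for k in range(K)], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted moments, then per-component reduction to d dims
        Nk = r.sum(axis=0) + 1e-12
        pi = Nk / n
        for k in range(K):
            mu[k] = (r[:, k] @ X) / Nk[k]
            diff = X - mu[k]
            cov[k] = low_rank_cov((r[:, k] * diff.T) @ diff / Nk[k], d)
    return pi, mu, cov
```

Note that, as in the paper, the reduced parameters keep the M-step independent of the (usually large) sample count: only the K weighted means and covariances enter the reduction step.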
Related papers
- Matrix Completion with Convex Optimization and Column Subset Selection [0.0]
We present two algorithms that implement our Columns Selected Matrix Completion (CSMC) method.
To study the influence of the matrix size, rank computation, and the proportion of missing elements on the quality of the solution and the time, we performed experiments on synthetic data.
Our thorough analysis shows that CSMC provides solutions of comparable quality to matrix completion algorithms based on convex optimization.
arXiv Detail & Related papers (2024-03-04T10:36:06Z) - Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve a model pruning of 80% parameters while retaining 93.43% of the original performance without any calibration data.
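The summary above describes compressing parameter matrices by a rank-k approximation. A minimal sketch of the underlying primitive is a truncated SVD, which by the Eckart–Young theorem gives the best rank-k approximation in the Frobenius and spectral norms; the joint, data-free scheme of the paper is more involved, and the function name here is illustrative.

```python
import numpy as np

def rank_k_approx(W, k):
    """Best rank-k approximation of a weight matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # keep only the k largest singular values/vectors
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
```

Storing the factors `U[:, :k]`, `s[:k]`, and `Vt[:k]` instead of `W` reduces an m-by-n matrix to k(m + n + 1) parameters.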
arXiv Detail & Related papers (2024-02-26T05:51:47Z) - Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block
Decomposition [15.322124183968635]
We show that the entire problem can be equivalently optimized through several smaller-scaled sub-problems.
From a practical standpoint, this simple and provable discipline can be applied to break a large problem down into small, tractable ones.
arXiv Detail & Related papers (2023-09-23T15:30:34Z) - Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
In addition to the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
arXiv Detail & Related papers (2023-08-26T06:12:33Z) - SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z) - Efficient Algorithms for Sparse Moment Problems without Separation [6.732901486505047]
We consider the sparse moment problem of learning a $k$-spike mixture in high-dimensional space from noisy moment information.
Previous algorithms either assume certain separation assumptions, use more recovery moments, or run in (super) exponential time.
Our algorithm for the high-dimensional case determines the coordinates of each spike by aligning a 1d projection of the mixture to a random vector and a set of 2d projections of the mixture.
arXiv Detail & Related papers (2022-07-26T16:17:32Z) - Robust Multi-view Registration of Point Sets with Laplacian Mixture
Model [25.865100974015412]
We propose a novel probabilistic generative method to align multiple point sets based on the heavy-tailed Laplacian distribution.
We demonstrate the advantages of our method by comparing it with representative state-of-the-art approaches on benchmark challenging data sets.
arXiv Detail & Related papers (2021-10-26T14:49:09Z) - Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector
Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
arXiv Detail & Related papers (2021-03-24T23:11:32Z) - Follow the bisector: a simple method for multi-objective optimization [65.83318707752385]
We consider optimization problems, where multiple differentiable losses have to be minimized.
The presented method computes a descent direction in every iteration that guarantees an equal relative decrease of the objective functions.
arXiv Detail & Related papers (2020-07-14T09:50:33Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
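The sketch-and-solve idea in the summary above can be illustrated with the Gaussian embedding: compress the tall data matrix with a random projection, then solve the much smaller ridge problem. This is a generic sketched ridge regression, not the paper's adaptive-dimension algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def sketched_ridge(A, b, lam, m, seed=0):
    """Approximate argmin_x ||Ax - b||^2 + lam ||x||^2 via a Gaussian sketch.

    A is (n, d) with n >> d; the sketch S is (m, n) with d < m << n.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # Gaussian embedding
    SA, Sb = S @ A, S @ b
    # normal equations of the sketched problem: only d-by-d to solve
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```

The cost of forming and solving the sketched normal equations scales with m rather than n, which is the point of embedding-based solvers.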
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Unsupervised Discretization by Two-dimensional MDL-based Histogram [0.0]
Unsupervised discretization is a crucial step in many knowledge discovery tasks.
We propose an expressive model class that allows for far more flexible partitions of two-dimensional data.
We introduce an algorithm, named PALM, which Partitions each dimension ALternately and then Merges neighboring regions.
arXiv Detail & Related papers (2020-06-02T19:19:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.