Bauer's Spectral Factorization Method for Low Order Multiwavelet Filter
Design
- URL: http://arxiv.org/abs/2312.05418v1
- Date: Sat, 9 Dec 2023 00:26:52 GMT
- Title: Bauer's Spectral Factorization Method for Low Order Multiwavelet Filter
Design
- Authors: Vasil Kolev, Todor Cooklev, Fritz Keinert
- Abstract summary: We introduce a fast method for matrix spectral factorization based on Bauer's method.
We convert Bauer's method into a nonlinear matrix equation (NME)
The NME is solved by two different numerical algorithms.
- Score: 0.6138671548064355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Para-Hermitian polynomial matrices obtained by matrix spectral factorization
lead to functions useful in control theory systems, basis functions in
numerical methods or multiscaling functions used in signal processing. We
introduce a fast algorithm for matrix spectral factorization based on Bauer's
method. We convert Bauer's method into a nonlinear matrix equation (NME). The
NME is solved by two different numerical algorithms (Fixed Point Iteration and
Newton's Method), which produce approximate scalar or matrix factors, as well
as by a symbolic algorithm, which produces exact factors in closed form for
some low-order scalar or matrix polynomials. Convergence rates
of the two numerical algorithms are investigated for a number of singular and
nonsingular scalar and matrix polynomials taken from different areas. In
particular, one of the singular examples leads to new orthogonal multiscaling
and multiwavelet filters. Since the NME can also be solved as a Generalized
Discrete Time Algebraic Riccati Equation (GDARE), numerical results using
built-in routines in Maple 17.0 and 6 Matlab versions are presented.
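For the scalar case, the classical form of Bauer's method is easy to sketch: build a large banded Toeplitz matrix from the Laurent polynomial's coefficients, Cholesky-factor it, and read the spectral factor off the last row of the Cholesky factor (the row recursion is closely related to the fixed-point iteration on the NME). The snippet below is a minimal illustration of that classical formulation, not the paper's NME or GDARE algorithms; the function name and the test polynomial z^{-1} + 3 + z are chosen for illustration only.

```python
import numpy as np

def bauer_spectral_factor(c0, c1, N=50):
    """Scalar Bauer's method for c(z) = c1*z^{-1} + c0 + c1*z:
    Cholesky-factor an N x N banded Toeplitz matrix with symbol c;
    the last row of the Cholesky factor converges to the coefficients
    of the minimum-phase spectral factor q(z) = a + b*z."""
    T = c0 * np.eye(N) + c1 * (np.eye(N, k=1) + np.eye(N, k=-1))
    L = np.linalg.cholesky(T)          # lower-triangular factor
    a, b = L[-1, -1], L[-1, -2]        # approximate factor coefficients
    return a, b

# Example: z^{-1} + 3 + z = (a + b*z)(a + b*z^{-1}) requires
# a^2 + b^2 = 3 and a*b = 1, so a is the golden ratio and b = 1/a.
a, b = bauer_spectral_factor(3.0, 1.0)
```

Because this example polynomial is nonsingular (no zeros on the unit circle), the last row converges geometrically; for singular examples like those studied in the paper, Bauer's method is known to converge much more slowly.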
Related papers
- Automatic Solver Generator for Systems of Laurent Polynomial Equations [1.7188280334580197]
Given a family of triangulation (Laurent) systems with the same monomial structure but varying coefficients, the task is to find a solver that computes solutions for any system in the family as fast as possible.
We propose an automatic solver generator for systems of Laurent polynomial equations.
arXiv Detail & Related papers (2023-07-01T12:12:52Z) - Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other method is to use Matrix Padé Approximants (MPA)
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
arXiv Detail & Related papers (2022-01-29T10:00:35Z) - Fast Differentiable Matrix Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP)
The other method is to use Matrix Padé Approximants (MPA)
arXiv Detail & Related papers (2022-01-21T12:18:06Z) - Weighted Low Rank Matrix Approximation and Acceleration [0.5177947445379687]
Low-rank matrix approximation is one of the central concepts in machine learning.
Low-rank matrix completion (LRMC) solves the LRMA problem when some observations are missing.
We propose an algorithm for solving the weighted problem, as well as two acceleration techniques.
arXiv Detail & Related papers (2021-09-22T22:03:48Z) - Sparse Factorization of Large Square Matrices [10.94053598642913]
In this paper, we propose to approximate a large square matrix with a product of sparse full-rank matrices.
In the approximation, our method needs only $N(\log N)^2$ non-zero numbers for an $N\times N$ full matrix.
We show that our method gives a better approximation when the approximated matrix is sparse and high-rank.
arXiv Detail & Related papers (2021-09-16T18:42:21Z) - Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(kr/\epsilon^2)$ samples.
We propose an algorithm that uses $\mathcal{O}((kr/\epsilon^2)\log^2(1/\epsilon))$ samples, a number linear in the high dimension $k$ and nearly linear in the typically low rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - A Block Coordinate Descent-based Projected Gradient Algorithm for Orthogonal Non-negative Matrix Factorization [0.0]
This article utilizes the projected gradient method (PG) for a non-negative matrix factorization problem (NMF)
We penalise the orthonormality constraints and apply the PG method via a block coordinate descent approach.
arXiv Detail & Related papers (2020-03-23T13:24:43Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
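To make the sketching idea in the last entry above concrete, here is a minimal sketch-and-solve least-squares example. For brevity it uses a plain Gaussian sketch instead of the Subsampled Randomized Hadamard Transform, and a single sketch rather than the iterative scheme the paper analyzes; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 20, 200              # tall system; sketch size m << n

x_true = rng.standard_normal(d)
A = rng.standard_normal((n, d))
b = A @ x_true + 0.01 * rng.standard_normal(n)   # small noise

# Compress the rows with a random sketch, then solve the small m x d
# problem. (A Gaussian sketch stands in for the SRHT here for brevity.)
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With a well-conditioned tall problem and small noise, the sketched solution stays close to the exact least-squares solution while only a small compressed system has to be solved.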
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.