A Two Stage Generalized Block Orthogonal Matching Pursuit (TSGBOMP)
Algorithm
- URL: http://arxiv.org/abs/2008.08031v1
- Date: Tue, 18 Aug 2020 17:00:55 GMT
- Title: A Two Stage Generalized Block Orthogonal Matching Pursuit (TSGBOMP)
Algorithm
- Authors: Samrat Mukhopadhyay, and Mrityunjoy Chakraborty
- Abstract summary: Recovery of an unknown sparse signal from a few of its projections is the key objective of compressed sensing.
Existing block sparse recovery algorithms like BOMP make the assumption of uniform block size and known block boundaries.
This paper proposes a two-stage procedure, whose first stage performs coarse block location identification.
The second stage carries out finer localization of a non-zero cluster within the window selected in the first stage.
- Score: 0.3867363075280543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovery of an unknown sparse signal from a few of its projections is the key
objective of compressed sensing. Often one comes across signals that are not
ordinarily sparse but are sparse blockwise. Existing block sparse recovery
algorithms like BOMP make the assumption of uniform block size and known block
boundaries, which are, however, not very practical in many applications. This
paper addresses this problem and proposes a two-stage procedure, where the first
stage performs coarse block location identification, while the second stage
carries out finer localization of a non-zero cluster within the window selected
in the first stage. A detailed convergence analysis of the proposed algorithm
is carried out by first defining the so-called pseudoblock-interleaved block
RIP of the given generalized block sparse signal and then imposing upper bounds
on the corresponding RIC (restricted isometry constant). We also extend the
analysis to complex vector as well as matrix entries, where it turns out that
the extension is non-trivial and
requires special care. Furthermore, assuming real Gaussian sensing matrix
entries, we find a lower bound on the probability that the derived recovery
bounds are satisfied. The lower bound suggests that there are sets of
parameters such that the derived bound is satisfied with high probability.
Simulation results confirm significantly improved performance of the proposed
algorithm as compared to BOMP.
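To make the two-stage idea concrete, below is a minimal NumPy sketch of a two-stage block-greedy pursuit in the spirit of TSGBOMP. It is illustrative only, not the authors' exact algorithm: the disjoint window partition, the cluster-selection rule, and the stopping criterion (`window_len`, `cluster_len`, `tol`) are simplifying assumptions; in the paper the block boundaries are unknown a priori.
```python
import numpy as np

def tsgbomp_sketch(A, y, window_len, cluster_len, max_iter=10, tol=1e-6):
    """Illustrative two-stage block-greedy pursuit (NOT the exact TSGBOMP
    of the paper). Stage 1 coarsely selects a window by correlation energy;
    stage 2 localizes a non-zero cluster inside that window.

    A : (m, n) sensing matrix, y : (m,) measurement vector.
    """
    m, n = A.shape
    support = set()
    r = y.astype(float).copy()
    x_hat = np.zeros(0)
    for _ in range(max_iter):
        c = A.T @ r  # correlations of the residual with the columns of A
        # Stage 1: coarse window identification (largest correlation energy).
        win = max(range(0, n, window_len),
                  key=lambda s: np.linalg.norm(c[s:s + window_len]))
        # Stage 2: finer localization of a cluster inside the chosen window.
        lo, hi = win, min(win + window_len, n)
        best = max(range(lo, max(lo + 1, hi - cluster_len + 1)),
                   key=lambda s: np.linalg.norm(c[s:s + cluster_len]))
        support.update(range(best, min(best + cluster_len, n)))
        # Re-estimate coefficients on the enlarged support by least squares.
        idx = sorted(support)
        x_hat, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ x_hat
        if np.linalg.norm(r) < tol:  # assumed stopping rule
            break
    x = np.zeros(n)
    x[sorted(support)] = x_hat
    return x
```
The coarse-then-fine search is what distinguishes this template from plain BOMP, which selects fixed-size blocks with known boundaries in a single pass.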
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study the robust phase retrieval problem, where the task is to recover an unknown signal from magnitude-only measurements.
Our proposed algorithm avoids the need for computationally expensive spectral initialization, instead using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Group Projected Subspace Pursuit for Block Sparse Signal Reconstruction: Convergence Analysis and Applications [4.297249011611168]
We present a convergence analysis of the Group Projected Subspace Pursuit (GPSP) algorithm.
GPSP recovers true block-sparse signals even when the observations are noisy.
We find that GPSP outperforms other algorithms in most cases for various levels of block sparsity and block sizes.
arXiv Detail & Related papers (2024-06-01T08:48:06Z) - Convergence and complexity of block majorization-minimization for constrained block-Riemannian optimization [18.425648833592312]
Block majorization-minimization (BMM) is a simple iterative algorithm that minimizes a majorizing surrogate of the objective in each block.
Our analysis makes explicit use of the Euclidean structure of the constraint sets.
arXiv Detail & Related papers (2023-12-16T05:40:19Z) - An Optimal Algorithm for the Real-Valued Combinatorial Pure Exploration
of Multi-Armed Bandit [65.268245109828]
We study the real-valued combinatorial pure exploration problem in the multi-armed bandit setting (R-CPE-MAB).
Existing methods in the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits.
We propose an algorithm named the gap-based exploration (CombGapE) algorithm, whose sample complexity matches the lower bound.
arXiv Detail & Related papers (2023-06-15T15:37:31Z) - Complexity of Block Coordinate Descent with Proximal Regularization and
Applications to Wasserstein CP-dictionary Learning [1.4010916616909743]
We consider the block coordinate descent of Gauss-Seidel type with proximal regularization (BCD-PR).
We establish a worst-case iteration complexity bound for this algorithm.
arXiv Detail & Related papers (2023-06-04T17:52:49Z) - Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for
Multi-Block Bilevel Optimization [43.74656748515853]
Nonconvex multi-block bilevel optimization problems involve $m \gg 1$ lower-level problems and have important applications in machine learning.
We aim to achieve three properties for our algorithm: (a) matching the state-of-the-art complexity of standard BO problems with a single block; (b) achieving parallel speedup by sampling $I$ samples for each sampled block per iteration; and (c) avoiding the computation of the inverse of a high-dimensional Hessian matrix estimator.
arXiv Detail & Related papers (2023-05-30T04:10:11Z) - Mean-based Best Arm Identification in Stochastic Bandits under Reward
Contamination [80.53485617514707]
This paper proposes two algorithms, a gap-based algorithm and one based on successive elimination, for best arm identification in sub-Gaussian bandits.
Specifically, the sample complexity of the gap-based algorithm is optimal up to constant factors, while that of successive elimination is optimal up to logarithmic factors.
arXiv Detail & Related papers (2021-11-14T21:49:58Z) - Block majorization-minimization with diminishing radius for constrained
nonconvex optimization [9.907540661545328]
Block majorization-minimization (BMM) is a simple iterative algorithm for constrained nonconvex optimization that minimizes a majorizing surrogate of the objective in each block (a generic block-MM step is sketched after this list).
We show that BMM can produce an $\epsilon$-approximate stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations when convex surrogates are used.
arXiv Detail & Related papers (2020-12-07T07:53:09Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best known complexity for solving least-squares problems with no condition-number dependence (a generic sketch-and-solve example follows this list).
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.