Adaptive Block Sparse Regularization under Arbitrary Linear Transform
- URL: http://arxiv.org/abs/2401.15292v4
- Date: Tue, 13 Feb 2024 04:58:26 GMT
- Title: Adaptive Block Sparse Regularization under Arbitrary Linear Transform
- Authors: Takanobu Furuhashi, Hidekata Hontani, Tatsuya Yokota
- Abstract summary: We propose a convex and fast signal reconstruction method for block sparsity under an arbitrary linear transform with an unknown block structure.
Our work broadens the scope of block sparse regularization, enabling more versatile and powerful applications across various signal processing domains.
- Score: 7.0326155922512275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a convex and fast signal reconstruction method for block sparsity
under an arbitrary linear transform with an unknown block structure. The proposed
method generalizes a similar existing method and, unlike that method, can
reconstruct signals with block sparsity under non-invertible transforms. Our work
broadens the scope of block sparse regularization, enabling more versatile and
powerful applications across various signal processing domains. We derive an
iterative algorithm for solving the proposed method and provide conditions for
its convergence to the optimal solution.
Numerical experiments demonstrate the effectiveness of the proposed method.
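To make the setting concrete, here is a minimal ADMM sketch for block sparse reconstruction under a linear transform. This is not the paper's adaptive method: the unknown-block regularizer is replaced by a fixed-block l2/l1 penalty, and the operators Phi and A, the block size, and all parameter values are illustrative assumptions.

```python
# ADMM sketch for min_x 0.5||y - Phi x||^2 + lam * sum_g ||(A x)_g||_2,
# a simplified fixed-block stand-in for the paper's adaptive regularizer.
import numpy as np

def block_soft_threshold(z, tau, block):
    """Shrink each length-`block` group of z toward zero in l2 norm."""
    out = z.copy()
    for s in range(0, len(z), block):
        g = z[s:s + block]
        nrm = np.linalg.norm(g)
        out[s:s + block] = max(0.0, 1.0 - tau / nrm) * g if nrm > 0 else 0.0
    return out

def admm_block_sparse(y, Phi, A, lam=0.1, rho=1.0, block=4, iters=200):
    n = Phi.shape[1]
    x = np.zeros(n)
    z = np.zeros(A.shape[0])
    u = np.zeros(A.shape[0])
    H = Phi.T @ Phi + rho * A.T @ A   # x-update normal equations, formed once
    for _ in range(iters):
        x = np.linalg.solve(H, Phi.T @ y + rho * A.T @ (z - u))
        z = block_soft_threshold(A @ x + u, lam / rho, block)
        u += A @ x - z
    return x

rng = np.random.default_rng(0)
Phi, A = rng.standard_normal((30, 40)), np.eye(40)   # A may be non-invertible in general
x_true = np.zeros(40); x_true[8:12] = rng.standard_normal(4)  # one active block
print(np.round(admm_block_sparse(Phi @ x_true, Phi, A)[6:14], 2))
```

The same iteration applies when A is a non-invertible analysis operator, provided the x-update system remains solvable; the paper's contribution is the adaptive, unknown-block regularizer that replaces the fixed grouping used here.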
Related papers
- High-dimensional Clustering and Signal Recovery under Block Signals [0.0]
We propose two sets of methods, cross-block feature aggregation PCA (CFA-PCA) and moving average PCA (MA-PCA).
Both methods adaptively utilize block signal structures and are applicable to non-Gaussian data with heterogeneous covariance matrices.
We show that the proposed methods can achieve the minimax lower bounds (CMLB) for the sparse and dense block signal regimes.
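As a rough illustration of the moving-average idea only (a hedged sketch, not the paper's MA-PCA estimator; the window size and data model are assumptions): averaging adjacent features amplifies block-structured mean signals relative to noise before PCA.

```python
# Moving-average smoothing across ordered features, then PCA scores.
import numpy as np

def ma_pca(X, window=5, k=1):
    kernel = np.ones(window) / window
    # Smooth each sample (row) across the ordered feature axis.
    Xs = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, X)
    Xc = Xs - Xs.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T          # scores along the top-k principal axes

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 200))
X[:50, 80:100] += 0.6             # a weak block signal separating two clusters
scores = ma_pca(X)
print(scores[:50].mean(), scores[50:].mean())  # clusters tend to separate on PC1
```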
arXiv Detail & Related papers (2025-04-11T07:54:55Z)
- Combinatorial Amplitude Patterns via Nested Quantum Affine Transformations [0.24578723416255746]
This paper introduces a robust and scalable framework for implementing nested affine transformations in quantum circuits.
The proposed method systematically applies sequential affine transformations while preserving state normalization.
The utility of the framework is exemplified through two key applications: financial risk assessment, and discrete signal processing.
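A hedged classical sketch of the core operation named above (plain NumPy, not quantum-circuit code; the affine coefficients are arbitrary assumptions): each stage applies a*x + b to the amplitude vector and restores unit norm.

```python
# Nested affine maps on an amplitude vector with renormalization.
import numpy as np

def nested_affine_amplitudes(state, transforms):
    """Apply (a, b) maps in sequence, renormalizing to keep a valid state."""
    for a, b in transforms:
        state = a * state + b
        state = state / np.linalg.norm(state)   # preserve normalization
    return state

psi = np.full(8, 1 / np.sqrt(8))                # uniform 3-qubit amplitudes
pattern = nested_affine_amplitudes(psi, [(2.0, 0.1), (0.5, -0.05)])
print(np.round(pattern, 3), np.linalg.norm(pattern))
```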
arXiv Detail & Related papers (2024-12-12T20:35:56Z)
- Verification of Geometric Robustness of Neural Networks via Piecewise Linear Approximation and Lipschitz Optimisation [57.10353686244835]
We address the problem of verifying neural networks against geometric transformations of the input image, including rotation, scaling, shearing, and translation.
The proposed method computes provably sound piecewise linear constraints for the pixel values by using sampling and linear approximations in combination with branch-and-bound Lipschitz optimisation.
We show that our implementation resolves up to 32% more verification cases than existing approaches.
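A toy sketch of the sampling-plus-Lipschitz ingredient (the function, interval, and Lipschitz constant are assumptions, and the real verifier derives piecewise linear constraints rather than scalar bounds): samples of an L-Lipschitz function padded by L*h/2 give provably sound bounds over the interval.

```python
# Sound bounds on an L-Lipschitz function from finitely many samples.
import numpy as np

def sound_bounds(f, lo, hi, L, n=64):
    """The true range of f over [lo, hi] lies inside the returned interval."""
    thetas = np.linspace(lo, hi, n)
    vals = f(thetas)
    h = (hi - lo) / (n - 1)        # between samples f can move at most L*h/2
    return vals.min() - L * h / 2, vals.max() + L * h / 2

# Toy pixel intensity as the input image is rotated by angle theta.
pixel = lambda t: np.cos(3 * t)    # |d/dt| <= 3, so L = 3 is valid
print(sound_bounds(pixel, 0.0, 0.2, L=3.0))
```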
arXiv Detail & Related papers (2024-08-23T15:02:09Z)
- Block Sparse Bayesian Learning: A Diversified Scheme [16.61484758008309]
We introduce a novel prior called Diversified Block Sparse Prior to characterize the widespread block sparsity phenomenon in real-world data.
By allowing diversification on intra-block variance and inter-block correlation matrices, we effectively address the sensitivity issue of existing block sparse learning methods to pre-defined block information.
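A generative sketch of what "diversified" means here (block layout and hyperparameters are illustrative assumptions, and inter-block correlation is omitted for brevity): each active block draws its own variance and its own intra-block correlation, rather than sharing one pattern.

```python
# Sample a block-sparse signal whose active blocks have diverse statistics.
import numpy as np

def sample_diversified_block_sparse(n_blocks=8, block=4, active=2, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_blocks * block)
    for b in rng.choice(n_blocks, size=active, replace=False):
        var = rng.gamma(2.0, 1.0)        # per-block variance
        r = rng.uniform(-0.8, 0.8)       # per-block AR(1) correlation
        lags = np.abs(np.subtract.outer(np.arange(block), np.arange(block)))
        C = var * r ** lags              # intra-block covariance matrix
        x[b * block:(b + 1) * block] = rng.multivariate_normal(np.zeros(block), C)
    return x

print(np.round(sample_diversified_block_sparse(), 2))
```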
arXiv Detail & Related papers (2024-02-07T08:18:06Z)
- An Accelerated Block Proximal Framework with Adaptive Momentum for Nonconvex and Nonsmooth Optimization [2.323238724742687]
We propose an accelerated block proximal linear framework with adaptive momentum (ABPL$+$) for nonconvex and nonsmooth optimization.
We analyze the potential causes of the extrapolation step failing in some algorithms, and resolve this issue by enhancing the comparison process.
We extend our algorithm to any scenario involving updating the gradient step and the linear extrapolation step.
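A hedged single-block sketch of the monitored-extrapolation idea (problem data, step size, and the acceptance test are illustrative; this is not ABPL$+$ itself): the extrapolated point is compared against the plain iterate and rejected when it increases the objective.

```python
# Accelerated proximal gradient with a comparison step on the momentum.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accel_prox_with_comparison(Q, b, lam=0.1, iters=100):
    """min 0.5 x'Qx - b'x + lam||x||_1 with monitored extrapolation."""
    step = 1.0 / np.linalg.eigvalsh(Q)[-1]
    F = lambda x: 0.5 * x @ Q @ x - b @ x + lam * np.abs(x).sum()
    x_prev = x = np.zeros_like(b); t = 1.0
    for _ in range(iters):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x + ((t - 1) / t_next) * (x - x_prev)   # extrapolation step
        if F(y) > F(x):                             # comparison: reject bad momentum
            y = x
        x_prev, x = x, soft(y - step * (Q @ y - b), step * lam)
        t = t_next
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((20, 10)); Q = M.T @ M + 0.1 * np.eye(10)
print(np.round(accel_prox_with_comparison(Q, rng.standard_normal(10)), 2))
```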
arXiv Detail & Related papers (2023-08-23T13:32:31Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
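A hedged sketch of that idea in a generalized conditional gradient step over the simplex (the objective, the term h, and the step rule are illustrative assumptions): only the smooth term f is linearized, while the other term enters the subproblem unchanged.

```python
# Generalized Frank-Wolfe: linearize f, keep h exact in the subproblem.
import numpy as np

def generalized_frank_wolfe(grad_f, c, n, iters=200):
    """min_{x in simplex} f(x) + <c, x>, linearizing only f each step."""
    x = np.full(n, 1.0 / n)
    for k in range(iters):
        # Subproblem: argmin_{s in simplex} <grad f(x), s> + h(s).
        # Here h(s) = <c, s> is a simple (already linear) choice for
        # demonstration; a general nonsmooth h would stay in the subproblem.
        s = np.zeros(n); s[np.argmin(grad_f(x) + c)] = 1.0
        x += 2.0 / (k + 2) * (s - x)            # standard FW step size
    return x

# f(x) = 0.5||x - p||^2 plus the extra cost <c, x>; optimum is about [0.2, 0.3, 0.5].
p = np.array([0.1, 0.5, 0.4]); c = np.array([0.0, 0.3, 0.0])
print(np.round(generalized_frank_wolfe(lambda x: x - p, c, 3), 3))
```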
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Cyclic Block Coordinate Descent With Variance Reduction for Composite Nonconvex Optimization [26.218670461973705]
We propose cyclic block coordinate methods for solving such problems with non-asymptotic gradient norm guarantees.
Our results demonstrate the efficacy of the proposed cyclic variance-reduced scheme in training deep neural nets.
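A minimal cyclic block coordinate descent sketch on least squares (block size and data are assumptions); the paper's variance reduction for finite sums is omitted here for brevity.

```python
# Cyclic block coordinate descent: sweep blocks in a fixed order.
import numpy as np

def cyclic_bcd(A, y, block=5, sweeps=100):
    """min_x 0.5||Ax - y||^2, updating one coordinate block at a time."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(sweeps):
        for s in range(0, n, block):            # cyclic order, not random
            idx = slice(s, min(s + block, n))
            Ab = A[:, idx]
            g = Ab.T @ (A @ x - y)              # block gradient
            L = np.linalg.norm(Ab, 2) ** 2      # block Lipschitz constant
            x[idx] -= g / L
        # A variance-reduced variant would replace g with an SVRG-style
        # estimate built from a stored full-gradient snapshot.
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 15)); x_true = rng.standard_normal(15)
print(np.round(cyclic_bcd(A, A @ x_true) - x_true, 3))  # small residual
```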
arXiv Detail & Related papers (2022-12-09T19:17:39Z)
- The rate of convergence of Bregman proximal methods: Local geometry vs. regularity vs. sharpness [33.48987613928269]
We show that the convergence rate of a given method depends sharply on its associated Legendre exponent.
In particular, we show that boundary solutions exhibit a stark separation between methods with a zero and non-zero Legendre exponent.
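For context, a minimal Bregman proximal (mirror descent) sketch with the entropic divergence on the simplex, the classic setup this analysis concerns; the objective and step size are illustrative assumptions, and the example's optimum sits on the boundary, the regime where the Legendre exponent matters.

```python
# Mirror descent with D(x,y) = sum x log(x/y) on the probability simplex.
import numpy as np

def entropic_mirror_descent(grad, n, eta=0.5, iters=300):
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x = x * np.exp(-eta * grad(x))   # closed-form Bregman prox step
        x /= x.sum()
    return x

# min 0.5||x - p||^2 over the simplex; the solution is the boundary
# point [0.8, 0.2, 0.0].
p = np.array([0.9, 0.3, -0.2])
print(np.round(entropic_mirror_descent(lambda x: x - p, 3), 3))
```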
arXiv Detail & Related papers (2022-11-15T10:49:04Z)
- Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies [115.86431674214282]
We consider infinite-horizon discounted Markov decision processes and study the convergence rates of the natural policy gradient (NPG) and the Q-NPG methods with the log-linear policy class.
We show that both methods attain linear convergence rates and $\mathcal{O}(1/\epsilon^2)$ sample complexities using a simple, non-adaptive geometrically increasing step size.
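A tabular sketch on a one-state MDP (a bandit), where the softmax policy is the simplest log-linear case and the NPG direction reduces to the advantage; the rewards and growth factor are illustrative assumptions.

```python
# NPG for a softmax policy on a bandit, with geometrically increasing steps.
import numpy as np

def npg_softmax_bandit(r, eta0=0.1, growth=1.2, iters=60):
    theta = np.zeros_like(r)
    eta = eta0
    for _ in range(iters):
        pi = np.exp(theta - theta.max()); pi /= pi.sum()
        adv = r - pi @ r               # advantage = NPG direction here
        theta += eta * adv
        eta *= growth                  # geometrically increasing step size
    return pi

print(np.round(npg_softmax_bandit(np.array([1.0, 0.5, 0.2])), 3))  # -> best arm
```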
arXiv Detail & Related papers (2022-10-04T06:17:52Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
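A minimal NumPy sketch of the underlying trick: differentiate through a fixed point z* = tanh(W z* + x) by solving one linear system from the implicit function theorem instead of backpropagating through the iterations. The map, loss, and sizes are illustrative assumptions.

```python
# Implicit differentiation through a fixed point, checked numerically.
import numpy as np

rng = np.random.default_rng(4)
n = 5
W = 0.3 * rng.standard_normal((n, n))   # small spectral norm -> contraction
x = rng.standard_normal(n)

# Forward: iterate to the fixed point (no trajectory storage needed).
z = np.zeros(n)
for _ in range(100):
    z = np.tanh(W @ z + x)

# Backward: for L = 0.5||z*||^2, the implicit function theorem gives
# dz*/dx = (I - D W)^{-1} D with D = diag(1 - tanh^2) at the fixed point.
D = np.diag(1.0 - z ** 2)
grad_x = D @ np.linalg.solve((np.eye(n) - D @ W).T, z)

# Finite-difference check of one coordinate of the implicit gradient.
eps, i = 1e-6, 0
zp = np.zeros(n)
for _ in range(100):
    zp = np.tanh(W @ zp + x + eps * np.eye(n)[i])
print(grad_x[i], (0.5 * zp @ zp - 0.5 * z @ z) / eps)
```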
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
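A toy discretization illustrating those two features (the problem, penalty weight, and step size are assumptions; the paper's non-smooth dynamics are more refined): the objective gradient is combined with an exact-penalty subgradient of the constraint, so no projection is ever computed.

```python
# Euler steps of x' = -grad f(x) - c * subgrad max(g(x), 0).
import numpy as np

def penalty_dynamics(grad_f, g, grad_g, x0, c=5.0, h=0.01, iters=2000):
    x = x0.astype(float)
    for _ in range(iters):
        v = grad_f(x)
        if g(x) > 0:                   # only a local constraint check,
            v = v + c * grad_g(x)      # never a projection
        x -= h * v
    return x

# min ||x - p||^2 subject to the nonlinear constraint ||x||^2 <= 1.
p = np.array([2.0, 0.0])
x_star = penalty_dynamics(lambda x: 2 * (x - p),
                          lambda x: x @ x - 1.0,
                          lambda x: 2 * x,
                          np.zeros(2))
print(np.round(x_star, 3))             # approaches the boundary point [1, 0]
```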
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
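A hedged sketch of the key feature (problem data, penalty weight, and smoothing are assumptions, not the paper's exact scheme): each conditional gradient step processes only one sampled constraint, folded in through a smoothed penalty.

```python
# Conditional gradient with one randomly sampled, smoothed constraint per step.
import numpy as np

def cg_sampled_constraints(p, Acon, bcon, beta=10.0, mu=0.05, iters=2000, seed=5):
    """min ||x - p||^2 over [0,1]^n s.t. Acon x <= bcon."""
    rng = np.random.default_rng(seed)
    n, m = len(p), len(bcon)
    x = np.full(n, 0.5)
    for k in range(iters):
        i = rng.integers(m)                        # sample ONE constraint
        viol = Acon[i] @ x - bcon[i]
        # Gradient of the smoothed hinge penalty for the sampled constraint.
        g = 2 * (x - p) + beta * (1 / (1 + np.exp(-viol / mu))) * Acon[i]
        s = np.where(g < 0, 1.0, 0.0)              # LMO over the unit box
        x += 2.0 / (k + 2) * (s - x)
    return x

p = np.array([1.0, 1.0])
Acon = np.array([[1.0, 1.0], [1.0, 0.0]]); bcon = np.array([1.0, 0.75])
# Moves toward the constrained optimum [0.5, 0.5] (up to smoothing bias).
print(np.round(cg_sampled_constraints(p, Acon, bcon), 2))
```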
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Randomized Block-Diagonal Preconditioning for Parallel Learning [0.0]
We study preconditioned gradient-based optimization methods where the preconditioning matrix has block-diagonal form.
Our main contribution is to demonstrate that the convergence of these methods can be significantly improved by a randomization technique.
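A sketch of the randomization idea on a quadratic (problem data, block size, and the damping factor are illustrative assumptions): coordinates are re-permuted each epoch, so the block-diagonal Hessian approximation being inverted keeps changing.

```python
# Damped gradient descent with a re-randomized block-diagonal preconditioner.
import numpy as np

def randomized_block_precond_gd(Q, b, block=5, epochs=20, inner=10, seed=6):
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(epochs):
        perm = rng.permutation(n)                  # randomize the blocking
        P = np.zeros((n, n))
        for s in range(0, n, block):               # invert diagonal blocks only
            idx = perm[s:s + block]
            P[np.ix_(idx, idx)] = np.linalg.inv(Q[np.ix_(idx, idx)])
        for _ in range(inner):
            x -= 0.4 * P @ (Q @ x - b)             # damped preconditioned step
    return x

rng = np.random.default_rng(7)
M = rng.standard_normal((30, 20))
Q = M.T @ M / 30 + np.eye(20)                      # well-conditioned SPD test
b = rng.standard_normal(20)
print(np.round(Q @ randomized_block_precond_gd(Q, b) - b, 4))  # small residual
```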
arXiv Detail & Related papers (2020-06-24T10:12:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.