Semi-Sparsity for Smoothing Filters
- URL: http://arxiv.org/abs/2107.00627v1
- Date: Thu, 1 Jul 2021 17:31:42 GMT
- Title: Semi-Sparsity for Smoothing Filters
- Authors: Junqing Huang, Haihui Wang, Xuechao Wang, Michael Ruzhansky
- Abstract summary: We propose a new semi-sparsity smoothing algorithm based on a novel sparsity-inducing optimization framework.
We demonstrate its benefits across a range of signal/image processing and computer vision applications.
- Score: 1.1404527665142667
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose a semi-sparsity smoothing algorithm based on a novel
sparsity-inducing optimization framework. The method is motivated by the observation that
semi-sparsity prior knowledge is more universally applicable, especially in regions where
sparsity does not fully hold, such as polynomial-smoothing surfaces. We show that this
semi-sparsity prior can be cast as a generalized $L_0$-norm minimization in higher-order
gradient domains, giving rise to a new "feature-aware" filtering method that simultaneously
fits both sparse features (singularities and sharp edges) and non-sparse regions
(polynomial-smoothing surfaces). A direct solver is unavailable due to the non-convexity
and combinatorial nature of $L_0$-norm minimization; instead, we solve the model with an
efficient half-quadratic splitting minimization, using fast Fourier transforms (FFTs) for
acceleration. We finally demonstrate its versatility and benefits on a range of
signal/image processing and computer vision applications.
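
The abstract describes a solver that alternates a closed-form sparsification step with a quadratic subproblem diagonalized by FFTs. For orientation only, the sketch below implements that half-quadratic splitting (HQS) mechanic for the classical first-order $L_0$-gradient case (in the spirit of Xu et al.'s $L_0$ smoothing), not the paper's higher-order semi-sparsity model; NumPy is assumed, and names such as `l0_gradient_smooth`, `psf2otf`, `lam`, and `kappa` are illustrative rather than the authors' API.

```python
# Minimal HQS sketch for first-order L0-gradient smoothing (illustrative only;
# the paper generalizes the penalty to higher-order gradient domains).
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small filter, circularly shift its center to the origin,
    and return its 2-D FFT (the optical transfer function)."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        otf = np.roll(otf, -(size // 2), axis=axis)
    return np.fft.fft2(otf)

def l0_gradient_smooth(img, lam=0.02, beta_max=1e5, kappa=2.0):
    """Smooth a 2-D array (e.g., a grayscale image in [0, 1]) by alternating
    a hard-thresholding step on the gradients with an FFT-diagonalized
    quadratic subproblem, while increasing the penalty weight beta."""
    img = img.astype(np.float64)
    S = img.copy()
    otf_x = psf2otf(np.array([[1.0, -1.0]]), img.shape)    # forward diff in x
    otf_y = psf2otf(np.array([[1.0], [-1.0]]), img.shape)  # forward diff in y
    denom_grad = np.abs(otf_x) ** 2 + np.abs(otf_y) ** 2
    F_img = np.fft.fft2(img)
    beta = 2.0 * lam
    while beta < beta_max:
        # (h, v) subproblem: closed-form hard thresholding of the gradients.
        h = np.diff(S, axis=1, append=S[:, :1])  # circular forward diff in x
        v = np.diff(S, axis=0, append=S[:1, :])  # circular forward diff in y
        mask = (h ** 2 + v ** 2) < lam / beta
        h[mask] = 0.0
        v[mask] = 0.0
        # S subproblem: quadratic normal equations, solved exactly via FFTs.
        rhs = F_img + beta * (np.conj(otf_x) * np.fft.fft2(h)
                              + np.conj(otf_y) * np.fft.fft2(v))
        S = np.real(np.fft.ifft2(rhs / (1.0 + beta * denom_grad)))
        beta *= kappa  # continuation on the splitting weight
    return S
```

The geometric growth of `beta` is the usual continuation strategy for approaching the non-convex $L_0$ objective; a semi-sparsity variant in the sense of the paper would swap the first-order difference operators for higher-order ones while keeping the same alternating structure.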
Related papers
- Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Neural Fields with Thermal Activations for Arbitrary-Scale Super-Resolution [56.089473862929886]
We present a novel way to design neural fields such that points can be queried with an adaptive Gaussian PSF.
With its theoretically guaranteed anti-aliasing, our method sets a new state of the art for arbitrary-scale single image super-resolution.
arXiv Detail & Related papers (2023-11-29T14:01:28Z) - Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the methods under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z) - Distributed Extra-gradient with Optimal Complexity and Communication Guarantees [60.571030754252824]
We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, which is a de facto algorithm for monotone VI problems, has not been designed to be communication-efficient.
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
arXiv Detail & Related papers (2023-08-17T21:15:04Z) - Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z) - Faster Projection-Free Augmented Lagrangian Methods via Weak Proximal Oracle [16.290192687098383]
This paper considers a convex composite optimization problem with affine constraints.
Motivated by high-dimensional applications in which exact projection/proximal computations are not tractable, we propose a projection-free augmented Lagrangian-based method.
arXiv Detail & Related papers (2022-10-25T12:51:43Z) - Smooth over-parameterized solvers for non-smooth structured optimization [3.756550107432323]
Non-smoothness encodes structural constraints on the solutions, such as sparsity, group sparsity, low rank, and sharp edges.
We operate a non-convex but smooth over-parametrization of the underlying non-smooth optimization problems.
Our main contribution is to apply the Variable Projection (VarPro) method, which defines a new formulation by explicitly minimizing over part of the variables.
arXiv Detail & Related papers (2022-05-03T09:23:07Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Smooth Bilevel Programming for Sparse Regularization [5.177947445379688]
Iteratively reweighted least squares (IRLS) is a popular approach to solving sparsity-enforcing regression problems in machine learning.
We show how a surprisingly simple reparametrization of IRLS, coupled with a bilevel scheme, achieves top performance across a wide range of sparsity-enforcing regularizers (a generic IRLS sketch follows this list).
arXiv Detail & Related papers (2021-06-02T19:18:22Z)
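
Following up on the IRLS entry above, here is a minimal sketch of plain IRLS for the Lasso problem $\frac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$, assuming NumPy. It does not reproduce the cited paper's bilevel reparametrization, and the ridge warm start, `eps` smoothing, and stopping rule are illustrative assumptions.

```python
# Plain IRLS sketch for the Lasso (generic illustration, not the cited
# paper's bilevel/reparametrized scheme).
import numpy as np

def irls_lasso(A, b, lam=0.1, n_iter=50, eps=1e-6):
    """Each iteration majorizes |x_i| by a quadratic around the current
    iterate and solves the resulting ridge-like linear system."""
    n = A.shape[1]
    AtA = A.T @ A
    Atb = A.T @ b
    # Ridge warm start to avoid starting exactly at zero (an assumption).
    x = np.linalg.solve(AtA + lam * np.eye(n), Atb)
    for _ in range(n_iter):
        # lam*|x_i| <= lam*x_i^2 / (2*(|x_i| + eps)) + const, so the
        # surrogate adds the diagonal weights lam / (|x_i| + eps).
        w = lam / (np.abs(x) + eps)
        x_new = np.linalg.solve(AtA + np.diag(w), Atb)
        if np.linalg.norm(x_new - x) <= 1e-10 * (1.0 + np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x

# Tiny usage example on synthetic data (shapes and values are arbitrary).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20)
    x_true[:3] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    print(np.round(irls_lasso(A, b, lam=0.5), 3))
```

Each pass majorizes the $\ell_1$ penalty by a weighted quadratic around the current iterate, so every iteration reduces to a ridge-like linear solve.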
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.