Learned Block Iterative Shrinkage Thresholding Algorithm for
Photothermal Super Resolution Imaging
- URL: http://arxiv.org/abs/2012.03547v2
- Date: Thu, 10 Dec 2020 14:15:57 GMT
- Title: Learned Block Iterative Shrinkage Thresholding Algorithm for
Photothermal Super Resolution Imaging
- Authors: Samim Ahmadi, Jan Christian Hauffen, Linh Kästner, Peter Jung,
Giuseppe Caire, Mathias Ziegler
- Abstract summary: We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
- Score: 52.42007686600479
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Block-sparse regularization is already well-known in active thermal imaging
and is used for multiple measurement based inverse problems. The main
bottleneck of this method is the choice of regularization parameters which
differs for each experiment. To avoid time-consuming manual selection of
regularization parameters, we propose a learned block-sparse optimization
approach using an iterative algorithm unfolded into a deep neural network. More
precisely, we show the benefits of using a learned block iterative shrinkage
thresholding algorithm that is able to learn the choice of regularization
parameters. In addition, this algorithm enables the determination of a suitable
weight matrix to solve the underlying inverse problem. Therefore, in this paper
we present the algorithm and compare it with the state-of-the-art block iterative
shrinkage thresholding using synthetically generated test data and experimental
test data from active thermography for defect reconstruction. Our results show
that the use of the learned block-sparse optimization approach provides smaller
normalized mean square errors for a small fixed number of iterations than
without learning. Thus, this new approach improves the convergence
speed and needs only a few iterations to generate accurate defect
reconstructions in photothermal super resolution imaging.
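The approach above unrolls the block iterative shrinkage thresholding algorithm (Block-ISTA), in which a gradient step on the data-fidelity term is followed by block-wise soft thresholding of the rows of the unknown matrix; the learned variant makes the regularization parameter (and the weight matrix) trainable per unfolded layer. A minimal NumPy sketch of the non-learned baseline (function names and parameter choices are illustrative, not the authors' implementation):

```python
import numpy as np

def block_soft_threshold(X, tau):
    # Shrink each row (block) of X toward zero by its l2 norm.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def block_ista(A, Y, lam, n_iter=100):
    # Classical Block-ISTA for min_X 0.5*||A X - Y||_F^2 + lam * sum_i ||X[i, :]||_2.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = A.T @ (A @ X - Y)           # gradient of the data-fidelity term
        X = block_soft_threshold(X - grad / L, lam / L)
    return X
```

In the learned version described in the abstract, the fixed threshold `lam / L` would be replaced by a trainable value per iteration, and the fixed backprojection `A.T` by a learned weight matrix, with both trained end-to-end through the unfolded iterations.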
Related papers
- Dense Visual Odometry Using Genetic Algorithm [0.0]
In this paper, a new algorithm is developed for visual odometry using a sequence of RGB-D images.
The proposed iterative genetic algorithm uses a population of particles to estimate the optimal motion.
We demonstrate the efficiency of our algorithm on a large set of images.
arXiv Detail & Related papers (2023-11-10T16:09:01Z)
- Fast Screening Rules for Optimal Design via Quadratic Lasso Reformulation [0.135975510645475]
In this work, we derive safe screening rules that can be used to discard inessential samples.
The new tests are much faster to compute, especially for problems involving a parameter space of high dimension.
We show how an existing homotopy algorithm for computing the regularization path of the lasso method can be reparametrized with respect to the squared $\ell_1$-penalty.
arXiv Detail & Related papers (2023-10-13T08:10:46Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
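The Deep Equilibrium framework mentioned above treats the solver's output as the fixed point of an update map rather than the result of a fixed number of unrolled steps. A minimal sketch of the underlying fixed-point computation, using plain iteration (real deep equilibrium models use accelerated root-finders and implicit differentiation; the function name is illustrative):

```python
import numpy as np

def fixed_point_solve(f, x0, tol=1e-8, max_iter=500):
    # Iterate x <- f(x) until the update is smaller than tol.
    # Converges when f is a contraction mapping.
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```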
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Optimizing CT Scan Geometries With and Without Gradients [7.788823739816626]
We show that gradient-based optimization algorithms are a possible alternative to gradient-free algorithms.
Gradient-based algorithms converge substantially faster while being comparable to gradient-free algorithms in terms of capture range and robustness to the number of free parameters.
arXiv Detail & Related papers (2023-02-13T10:44:41Z)
- Fast Multi-grid Methods for Minimizing Curvature Energy [6.882141405929301]
We propose fast multi-grid algorithms for minimizing mean curvature and Gaussian curvature energy functionals.
No artificial parameters are introduced in our formulation, which guarantees the robustness of the proposed algorithm.
Numerical experiments are presented on both image denoising and CT reconstruction problem to demonstrate the ability to recover image texture.
arXiv Detail & Related papers (2022-04-17T04:34:38Z)
- An Improved Frequent Directions Algorithm for Low-Rank Approximation via Block Krylov Iteration [11.62834880315581]
This paper presents a fast and accurate Frequent Directions algorithm named as r-BKIFD.
The proposed r-BKIFD has a comparable error bound with original Frequent Directions, and the approximation error can be arbitrarily small when the number of iterations is chosen appropriately.
arXiv Detail & Related papers (2021-09-24T01:36:42Z)
- End-to-end Interpretable Learning of Non-blind Image Deblurring [102.75982704671029]
Non-blind image deblurring is typically formulated as a linear least-squares problem regularized by natural priors on the corresponding sharp picture's gradients.
We propose to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural image prior kernels.
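The preconditioned Richardson iteration referenced above updates x ← x + M (b − A x), where M approximates the inverse of A; a good preconditioner shrinks the spectral radius of I − M A and speeds convergence. A minimal sketch with a hypothetical Jacobi-style (diagonal) preconditioner, not the paper's approximate inverse filters:

```python
import numpy as np

def preconditioned_richardson(A, b, M, n_iter=100):
    # Richardson iteration x <- x + M (b - A x); converges when
    # the spectral radius of I - M A is below one.
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + M @ (b - A @ x)
    return x
```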
arXiv Detail & Related papers (2020-07-03T15:45:01Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
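Sketch-and-solve methods like the one summarized above compress the tall data matrix with a random embedding before solving the regularized least-squares problem. A minimal sketch using a Gaussian embedding (the paper also analyzes the SRHT; the function below is an illustrative simplification, not the proposed algorithm):

```python
import numpy as np

def sketched_ridge(A, b, lam, m, rng):
    # Approximately solve min_x ||A x - b||^2 + lam * ||x||^2 by
    # replacing (A, b) with a Gaussian sketch (S A, S b), S of size m x n.
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    SA, Sb = S @ A, S @ b
    # Normal equations of the sketched ridge problem.
    return np.linalg.solve(SA.T @ SA + lam * np.eye(A.shape[1]), SA.T @ Sb)
```

The sketch dimension m trades accuracy for speed: with m well above the (effective) dimension of the problem, the sketched solution is close to the exact ridge solution at a fraction of the cost.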
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.