Sparse Dictionary Learning for Image Recovery by Iterative Shrinkage
- URL: http://arxiv.org/abs/2503.10732v2
- Date: Wed, 02 Apr 2025 08:08:10 GMT
- Title: Sparse Dictionary Learning for Image Recovery by Iterative Shrinkage
- Authors: Shima Shabani, Mohammadsadegh Khoshghiaferezaee, Michael Breuß,
- Abstract summary: We study the sparse coding problem in the context of sparse dictionary learning for image recovery. We consider and compare several state-of-the-art sparse optimization methods constructed using the shrinkage operation.
- Score: 0.1433758865948252
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we study the sparse coding problem in the context of sparse dictionary learning for image recovery. To this end, we consider and compare several state-of-the-art sparse optimization methods constructed using the shrinkage operation. As the mathematical setting, we adopt an online approach as the algorithmic basis, together with the basis pursuit denoising problem that arises from the convex-optimization approach to dictionary learning. By a dedicated construction of datasets and corresponding dictionaries, we study the effect of enlarging the underlying learning database on reconstruction quality, using several error measures. Our study shows that the choice of optimization method can be practically important depending on the availability of training data. Across the different training-data settings considered in our study, we also assess the computational efficiency of the optimization methods.
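For context on the shrinkage operation central to the compared methods, below is a minimal NumPy sketch of the classical iterative shrinkage-thresholding algorithm (ISTA) applied to the basis pursuit denoising problem min_x 0.5*||Dx - y||_2^2 + lam*||x||_1. It illustrates the general technique only, not the authors' implementation; the dictionary size, lam, and iteration count are arbitrary choices for the example.

```python
import numpy as np

def soft_threshold(x, tau):
    # Shrinkage operator: proximal map of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(D, y, lam, n_iter=200):
    # Solves min_x 0.5*||D x - y||_2^2 + lam*||x||_1 (basis pursuit denoising)
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy demo: recover a sparse code under a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = 1.0
y = D @ x_true + 0.01 * rng.standard_normal(64)
print("nonzeros recovered:", np.count_nonzero(np.abs(ista(D, y, 0.1)) > 1e-3))
```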
Related papers
- Equation discovery framework EPDE: Towards a better equation discovery [50.79602839359522]
We enhance the EPDE algorithm -- an evolutionary optimization-based discovery framework.
Our approach generates terms using fundamental building blocks such as elementary functions and individual differentials.
We validate our algorithm's noise resilience and overall performance by comparing its results with those from the state-of-the-art equation discovery framework SINDy.
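As a rough, hypothetical illustration of generating candidate terms from elementary building blocks (the block set and the pairwise-product rule below are invented for the example and are not the actual EPDE code):

```python
import itertools
import numpy as np

def build_term_library(u, u_x, u_xx):
    # Elementary building blocks: the field, its derivatives, a simple function
    blocks = {"u": u, "u_x": u_x, "u_xx": u_xx, "sin(u)": np.sin(u)}
    library = dict(blocks)
    # Pairwise products serve as higher-order candidate terms
    for (na, a), (nb, b) in itertools.combinations(blocks.items(), 2):
        library[na + "*" + nb] = a * b
    return library

x = np.linspace(0, 2 * np.pi, 200)
u = np.sin(x)
u_x = np.gradient(u, x)
terms = build_term_library(u, u_x, np.gradient(u_x, x))
print(sorted(terms))  # candidates an evolutionary search would combine and weight
```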
arXiv Detail & Related papers (2024-12-28T15:58:44Z)
- Conditional and Residual Methods in Scalable Coding for Humans and Machines [26.32381277880991]
We present methods for conditional and residual coding in the context of scalable coding for humans and machines.
Our focus is on optimizing the rate-distortion performance of the reconstruction task using the information available in the computer vision task.
arXiv Detail & Related papers (2023-05-04T05:32:44Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
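A minimal, hypothetical sketch of what test-time gradient re-scaling can look like (a single scalar factor applied to inner-loop steps is an assumption for illustration, not necessarily the paper's rule):

```python
import numpy as np

def adapt(params, grad_fn, steps=16, lr=1e-2, rescale=4.0):
    # rescale > 1 takes larger effective steps at meta-test time
    # than were used during meta-training
    for _ in range(steps):
        params = params - lr * rescale * grad_fn(params)
    return params

# Toy quadratic "signal fitting" objective ||params - target||^2
target = np.array([1.0, -2.0])
print(adapt(np.zeros(2), lambda p: 2.0 * (p - target)))
```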
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- An Efficient Approximate Method for Online Convolutional Dictionary Learning [32.90534837348151]
We present a novel approximate OCDL method that incorporates sparse decomposition of the training samples.
The proposed method substantially reduces computational costs while preserving the effectiveness of the state-of-the-art OCDL algorithms.
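For orientation, here is a minimal sketch of the generic online dictionary learning recipe (accumulate sufficient statistics from sparse codes, then update the dictionary column by column, in the spirit of Mairal et al.'s classical method); the paper's approximate convolutional variant is more involved:

```python
import numpy as np

def online_dict_update(D, y, x, A, B, eps=1e-8):
    # y: training sample, x: its sparse code under the current dictionary D
    A += np.outer(x, x)                    # code second moments
    B += np.outer(y, x)                    # data/code cross moments
    for j in range(D.shape[1]):            # block-coordinate column updates
        if A[j, j] > eps:
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D, A, B

rng = np.random.default_rng(0)
m, k = 16, 32
D = rng.standard_normal((m, k))
D /= np.linalg.norm(D, axis=0)
A, B = np.zeros((k, k)), np.zeros((m, k))
x = np.zeros(k)
x[:3] = 1.0                                # stand-in sparse code for the demo
D, A, B = online_dict_update(D, rng.standard_normal(m), x, A, B)
```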
arXiv Detail & Related papers (2023-01-25T13:40:18Z)
- Batch Active Learning from the Perspective of Sparse Approximation [12.51958241746014]
Active learning enables efficient model training by leveraging interactions between machine learning agents and human annotators.
We study and propose a novel framework that formulates batch active learning from the perspective of sparse approximation.
Our active learning method aims to find an informative subset from the unlabeled data pool such that the corresponding training loss function approximates its full data pool counterpart.
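A hypothetical greedy sketch of this sparse-approximation view (OMP-style selection on per-sample embeddings; the embeddings and the matching-pursuit rule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def select_batch(embeddings, batch_size):
    target = embeddings.sum(axis=0)       # full-pool aggregate to approximate
    residual, chosen = target.copy(), []
    for _ in range(batch_size):
        scores = embeddings @ residual    # correlation with current residual
        scores[chosen] = -np.inf          # do not pick a point twice
        i = int(np.argmax(scores))
        chosen.append(i)
        residual = residual - embeddings[i]
    return chosen

rng = np.random.default_rng(1)
pool = rng.standard_normal((100, 16))     # proxy embeddings of unlabeled data
print(select_batch(pool, 5))
```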
arXiv Detail & Related papers (2022-11-01T03:20:28Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
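The underlying trick is the implicit function theorem at a fixed point z* = f(z*, theta): dz*/dtheta = (I - df/dz)^{-1} df/dtheta, so gradients never flow through the refinement iterations. A minimal NumPy sketch on a toy affine map (not the paper's object-centric model):

```python
import numpy as np

def fixed_point(f, z0, n_iter=100):
    # Iterate the map to (approximate) convergence
    z = z0
    for _ in range(n_iter):
        z = f(z)
    return z

# Toy map f(z) = W z + theta with spectral radius of W below 1
W = np.array([[0.3, 0.1], [0.0, 0.4]])
theta = np.array([1.0, -1.0])
z_star = fixed_point(lambda z: W @ z + theta, np.zeros(2))

# Implicit differentiation: df/dz = W and df/dtheta = I here
J = np.linalg.solve(np.eye(2) - W, np.eye(2))  # dz*/dtheta
print(z_star, "\n", J)
```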
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Neural Improvement Heuristics for Graph Combinatorial Optimization Problems [49.85111302670361]
We introduce a novel Neural Improvement (NI) model capable of handling graph-based problems where information is encoded in the nodes, edges, or both.
The presented model serves as a fundamental component for hill-climbing-based algorithms that guide the selection of neighborhood operations at each iteration.
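A bare-bones, hypothetical skeleton of such improvement-style hill climbing, with the learned move-scoring model replaced by a stub (the paper's graph encoder is not reproduced):

```python
def hill_climb(solution, neighbors_fn, score_move, objective, max_steps=50):
    # score_move stands in for a learned model ranking neighborhood operations
    for _ in range(max_steps):
        best = max(neighbors_fn(solution), key=score_move)
        if objective(best) <= objective(solution):
            break                          # local optimum reached
        solution = best
    return solution

# Toy maximization demo: the objective itself serves as the move scorer
obj = lambda x: -(x - 7) ** 2
print(hill_climb(0, lambda x: [x - 1, x + 1], obj, obj))
```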
arXiv Detail & Related papers (2022-06-01T10:35:29Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as a structure prior and reveal the underlying signal interdependencies.
Deep unrolling and deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
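For illustration, one unrolled-ISTA (LISTA-style) forward pass of the kind deep unrolling produces; here the learnable weights are simply initialized from a dictionary D rather than trained, and the block-sparsity structure of the paper is omitted:

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_sparse_coder(D, y, n_layers=5, lam=0.1):
    L = np.linalg.norm(D, 2) ** 2
    W1 = D.T / L                           # input weight (learnable in LISTA)
    W2 = np.eye(D.shape[1]) - D.T @ D / L  # recurrent weight (learnable)
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):              # each "layer" is one ISTA step
        x = soft_threshold(W1 @ y + W2 @ x, lam / L)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
print(np.count_nonzero(unrolled_sparse_coder(D, rng.standard_normal(32))))
```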
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Unsupervised feature selection via self-paced learning and low-redundant regularization [6.083524716031565]
An unsupervised feature selection method is proposed by integrating the frameworks of self-paced learning and subspace learning.
The convergence of the method is proved theoretically and experimentally.
The experimental results show that the proposed method can improve the performance of clustering methods and outperform other compared algorithms.
arXiv Detail & Related papers (2021-12-14T08:28:19Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- Dictionary and prior learning with unrolled algorithms for unsupervised inverse problems [12.54744464424354]
We study Dictionary and Prior learning from degraded measurements as a bi-level problem.
We take advantage of unrolled algorithms to solve approximate formulations of Synthesis and Analysis.
arXiv Detail & Related papers (2021-06-11T12:21:26Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
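A minimal Monte Carlo sketch of the perturbation idea: averaging argmax(theta + eps*Z) over Gaussian noise Z yields a smooth function of theta where the hard argmax is piecewise constant (the sample count and eps are arbitrary choices for the example):

```python
import numpy as np

def perturbed_argmax(theta, eps=0.5, n_samples=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    Z = rng.standard_normal((n_samples, theta.size))
    idx = np.argmax(theta + eps * Z, axis=1)      # perturbed hard argmax
    onehot = np.zeros((n_samples, theta.size))
    onehot[np.arange(n_samples), idx] = 1.0
    return onehot.mean(axis=0)                    # smooth relaxation of argmax

print(perturbed_argmax(np.array([1.0, 1.1, 0.2])))
```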
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.