Dictionary and prior learning with unrolled algorithms for unsupervised
inverse problems
- URL: http://arxiv.org/abs/2106.06338v1
- Date: Fri, 11 Jun 2021 12:21:26 GMT
- Title: Dictionary and prior learning with unrolled algorithms for unsupervised
inverse problems
- Authors: Benoît Malézieux, Thomas Moreau, Matthieu Kowalski
- Abstract summary: We study Dictionary and Prior learning from degraded measurements as a bi-level problem.
We take advantage of unrolled algorithms to solve approximate formulations of Synthesis and Analysis.
- Score: 12.54744464424354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse problems consist in recovering a signal given noisy observations. One
classical resolution approach is to leverage sparsity and integrate prior
knowledge of the signal into the reconstruction algorithm to get a plausible
solution. Still, this prior might not be sufficiently adapted to the data. In
this work, we study Dictionary and Prior learning from degraded measurements as
a bi-level problem, and we take advantage of unrolled algorithms to solve
approximate formulations of Synthesis and Analysis. We provide an empirical and
theoretical analysis of automatic differentiation for Dictionary Learning to
better understand the pros and cons of unrolling in this context. We find that
unrolled algorithms speed up the recovery process for a small number of
iterations by improving the gradient estimation. We then compare Analysis and
Synthesis by evaluating the performance of unrolled algorithms on inverse
problems without access to any ground-truth data, for several classes of
dictionaries and priors. While Analysis can achieve good results, Synthesis is
more robust and performs better. Finally, we illustrate our method on pattern
and structure learning tasks from degraded measurements.
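As a concrete illustration of the unrolling idea, below is a minimal sketch of synthesis sparse coding with a fixed number of unrolled ISTA iterations, where the gradient with respect to the dictionary is obtained by automatic differentiation through the unrolled steps. This is not the authors' implementation; PyTorch, the toy dimensions, and the loss weights are assumptions made for the example.

import torch

def soft_threshold(x, thresh):
    # Proximal operator of the l1 norm (soft-thresholding).
    return torch.sign(x) * torch.clamp(torch.abs(x) - thresh, min=0.0)

def unrolled_ista(y, D, lam, n_iter=20):
    # y: (n_signals, n_features) observations; D: (n_atoms, n_features) dictionary.
    step = 1.0 / torch.linalg.matrix_norm(D, ord=2) ** 2  # 1/L with L = ||D||_2^2
    z = torch.zeros(y.shape[0], D.shape[0])
    for _ in range(n_iter):  # small, fixed number of unrolled iterations
        grad = (z @ D - y) @ D.T  # gradient of 0.5 * ||z D - y||^2 w.r.t. z
        z = soft_threshold(z - step * grad, step * lam)
    return z

torch.manual_seed(0)
y = torch.randn(32, 64)                      # toy degraded measurements
D = torch.randn(16, 64, requires_grad=True)  # dictionary to be learned
optimizer = torch.optim.Adam([D], lr=1e-2)
for _ in range(50):
    z = unrolled_ista(y, D, lam=0.1)
    loss = 0.5 * ((z @ D - y) ** 2).sum() + 0.1 * z.abs().sum()
    optimizer.zero_grad()
    loss.backward()   # gradient w.r.t. D flows through the unrolled iterations
    optimizer.step()

Because only a handful of iterations are unrolled, backpropagation through the solver stays cheap, which matches the regime in which the abstract reports improved gradient estimates.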
Related papers
- A Guide to Stochastic Optimisation for Large-Scale Inverse Problems [4.926711494319977]
Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data.
We provide a comprehensive account of the state-of-the-art in optimisation from the viewpoint of inverse problems.
We focus on the challenges for optimisation that are unique and are not commonly encountered in machine learning.
arXiv Detail & Related papers (2024-06-10T15:02:30Z) - Performance Evaluation and Comparison of a New Regression Algorithm [4.125187280299247]
We compare the performance of a newly proposed regression algorithm against four conventional machine learning algorithms.
The reader is free to replicate our results since we have provided the source code in a GitHub repository.
arXiv Detail & Related papers (2023-06-15T13:01:16Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient (a toy sketch of this idea appears after this list).
By using this framework, our analysis shows that these algorithms match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Performance Analysis of Fractional Learning Algorithms [32.21539962359158]
It is unclear whether the proclaimed superiority of fractional learning algorithms over conventional ones is well-grounded or a myth, as their performance has never been extensively analyzed.
In this article, a rigorous analysis of fractional variants of the least mean squares and steepest descent algorithms is performed.
Their origins and consequences on the performance of the learning algorithms are discussed and swift ready-witted remedies are proposed.
arXiv Detail & Related papers (2021-10-11T12:06:44Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms.
arXiv Detail & Related papers (2021-05-23T19:21:55Z) - Learning to solve TV regularized problems with unrolled algorithms [18.241062505073234]
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals.
We develop and characterize two approaches to unrolling TV solvers, describe their benefits and limitations, and discuss the regime in which they can actually improve over iterative procedures.
arXiv Detail & Related papers (2020-10-19T14:19:02Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z) - Stacked Generalizations in Imbalanced Fraud Data Sets using Resampling
Methods [2.741266294612776]
This study uses stacked generalization, which is a two-step process of combining machine learning methods, called meta or super learners, for improving the performance of algorithms.
Building a test harness that accounts for all permutations of algorithm and sample-set pairs demonstrates that the complex, intrinsic data structures are thoroughly tested.
arXiv Detail & Related papers (2020-04-03T20:38:22Z)
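To make the warm-start idea from the amortized implicit differentiation entry above concrete, the following is a toy sketch (a hypothetical ridge-regression example, not that paper's algorithm; PyTorch and all dimensions are assumptions): the inner solver restarts from its previous solution, and a validation loss is backpropagated through a few unrolled inner steps to obtain an approximate hypergradient.

import torch

torch.manual_seed(0)
X, y = torch.randn(50, 10), torch.randn(50)          # toy training data
X_val, y_val = torch.randn(20, 10), torch.randn(20)  # toy validation data

log_lam = torch.zeros(1, requires_grad=True)  # outer variable: log of ridge strength
outer_opt = torch.optim.SGD([log_lam], lr=0.1)
w = torch.zeros(10)                           # inner iterate, kept across outer steps

for _ in range(100):
    lam = log_lam.exp()
    w_k = w.clone()                            # warm start from the previous solution
    for _ in range(5):                         # a few unrolled inner gradient steps
        inner_grad = X.T @ (X @ w_k - y) / len(y) + lam * w_k
        w_k = w_k - 0.1 * inner_grad
    val_loss = ((X_val @ w_k - y_val) ** 2).mean()
    outer_opt.zero_grad()
    val_loss.backward()                        # hypergradient through the short unroll
    outer_opt.step()
    w = w_k.detach()                           # carry the iterate to the next round

Because each outer update restarts close to the previous inner solution, only a few inner steps are needed per round, which is the amortization effect described in that entry.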
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.