Theoretical Perspectives on Deep Learning Methods in Inverse Problems
- URL: http://arxiv.org/abs/2206.14373v1
- Date: Wed, 29 Jun 2022 02:37:50 GMT
- Title: Theoretical Perspectives on Deep Learning Methods in Inverse Problems
- Authors: Jonathan Scarlett, Reinhard Heckel, Miguel R. D. Rodrigues, Paul Hand, and Yonina C. Eldar
- Abstract summary: We focus on generative priors, untrained neural network priors, and unfolding algorithms.
In addition to summarizing existing results in these topics, we highlight several ongoing challenges and open problems.
- Score: 115.93934028666845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there have been significant advances in the use of deep
learning methods in inverse problems such as denoising, compressive sensing,
inpainting, and super-resolution. While this line of works has predominantly
been driven by practical algorithms and experiments, it has also given rise to
a variety of intriguing theoretical problems. In this paper, we survey some of
the prominent theoretical developments in this line of works, focusing in
particular on generative priors, untrained neural network priors, and unfolding
algorithms. In addition to summarizing existing results in these topics, we
highlight several ongoing challenges and open problems.
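Before the related work, a concrete illustration of one of the three topics above may help. The following is a minimal sketch of an untrained neural network prior in the spirit of the deep image prior: a randomly initialized CNN is fitted to a single noisy observation, with early stopping acting as the implicit regularizer. The architecture, image size, learning rate, and iteration count are illustrative assumptions, not choices made in the survey.

```python
# Minimal untrained-prior (deep image prior style) denoising sketch.
# Assumption: a toy 64x64 single-channel "image" and a small CNN.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Noisy observation y = x* + noise; here a stand-in random tensor.
y = torch.randn(1, 1, 64, 64)

# Small randomly initialized CNN g_theta; no training data is used
# beyond the single observation y itself.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

z = torch.randn(1, 1, 64, 64)  # fixed random input code
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Early stopping (a few hundred steps rather than convergence) is what
# keeps the network from fitting the noise component of y.
for step in range(300):
    opt.zero_grad()
    loss = ((net(z) - y) ** 2).mean()
    loss.backward()
    opt.step()

x_hat = net(z).detach()  # denoised estimate
```

The theoretical question the survey takes up is why such an over-parameterized network fits natural signal structure faster than noise, so that stopping early yields a good estimate.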
Related papers
- Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
arXiv Detail & Related papers (2024-08-12T15:02:26Z)
- Open Problem: Order Optimal Regret Bounds for Kernel-Based Reinforcement Learning [10.358743901458615]
Reinforcement Learning (RL) has shown great empirical success in various application domains.
We will highlight this open problem, overview existing partial results, and discuss related challenges.
arXiv Detail & Related papers (2024-06-21T15:43:02Z)
- An Over Complete Deep Learning Method for Inverse Problems [15.919986945096182]
We show that machine learning techniques can face challenges when applied to some exemplary problems.
We show that similar to previous works on over-complete dictionaries, it is possible to overcome these shortcomings by embedding the solution into higher dimensions.
We demonstrate the merit of this approach on several exemplary and common inverse problems.
arXiv Detail & Related papers (2024-02-07T08:38:12Z)
- Deep Causal Learning: Representation, Discovery and Inference [2.696435860368848]
Causal learning reveals the essential relationships that underpin phenomena and delineates the mechanisms by which the world evolves.
Traditional causal learning methods face numerous challenges and limitations, including high-dimensional variables, unstructured variables, optimization problems, unobserved confounders, selection biases, and estimation inaccuracies.
Deep causal learning, which leverages deep neural networks, offers innovative insights and solutions for addressing these challenges.
arXiv Detail & Related papers (2022-11-07T09:00:33Z)
- The Modern Mathematics of Deep Learning [8.939008609565368]
We describe the new field of mathematical analysis of deep learning.
This field emerged around a list of research questions that were not answered within the classical framework of learning theory.
For selected approaches, we describe the main ideas in more detail.
arXiv Detail & Related papers (2021-05-09T21:30:42Z)
- A Survey on Deep Semi-supervised Learning [51.26862262550445]
We first present a taxonomy for deep semi-supervised learning that categorizes existing methods.
We then offer a detailed comparison of these methods in terms of the type of losses, contributions, and architecture differences.
arXiv Detail & Related papers (2021-02-28T16:22:58Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints, which also extend to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Deep Learning Methods for Solving Linear Inverse Problems: Research Directions and Paradigms [15.996292006766089]
The rapid development of deep learning provides a fresh perspective for solving the linear inverse problem.
We review how deep learning methods are used in solving different linear inverse problems.
We explore structured neural network architectures that incorporate knowledge used in traditional methods; a minimal unrolling sketch follows this entry.
arXiv Detail & Related papers (2020-07-27T03:10:58Z)
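As a pointer to what "incorporating knowledge used in traditional methods" can look like in code, here is a minimal sketch of algorithm unrolling in the LISTA style: each layer imitates one ISTA iteration x <- soft(W y + S x, theta), with W, S, and the thresholds learned from data. All dimensions, initializations, and the training loop are illustrative assumptions.

```python
# Minimal LISTA-style unrolled network for sparse recovery from y = A x.
import torch
import torch.nn as nn

class LISTA(nn.Module):
    def __init__(self, m: int, n: int, num_layers: int = 5):
        super().__init__()
        # W plays the role of (1/L) A^T and S of I - (1/L) A^T A in ISTA,
        # but both are learned rather than fixed by the forward model A.
        self.W = nn.Parameter(0.1 * torch.randn(n, m))
        self.S = nn.Parameter(0.1 * torch.randn(n, n))
        self.theta = nn.Parameter(0.05 * torch.ones(num_layers))
        self.num_layers = num_layers

    @staticmethod
    def soft(x, t):  # soft-thresholding, the proximal map of the l1 norm
        return torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)

    def forward(self, y):  # y: (batch, m) measurements
        x = self.soft(y @ self.W.t(), self.theta[0])
        for k in range(1, self.num_layers):
            x = self.soft(y @ self.W.t() + x @ self.S.t(), self.theta[k])
        return x  # (batch, n) sparse estimate

# Toy usage: recover 5-sparse signals (n=100) from m=30 measurements.
torch.manual_seed(0)
m, n = 30, 100
A = torch.randn(m, n) / m ** 0.5
x_true = torch.zeros(256, n)
x_true[:, :5] = torch.randn(256, 5)  # toy fixed support for simplicity
y = x_true @ A.t()

model = LISTA(m, n)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(y) - x_true) ** 2).mean()
    loss.backward()
    opt.step()
```

The design point is that the iteration structure of a classical solver fixes the architecture, while the matrices the solver would have computed from A become trainable parameters; a handful of unrolled layers can then stand in for many ISTA iterations.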
- Deep Learning Techniques for Inverse Problems in Imaging [102.30524824234264]
Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems.
We present a taxonomy that can be used to categorize different problems and reconstruction methods.
arXiv Detail & Related papers (2020-05-12T18:35:55Z)
- Generalization in Deep Learning [103.91623583928852]
This paper provides theoretical insights into why and how deep learning can generalize well, despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima.
We also discuss approaches to provide non-vacuous generalization guarantees for deep learning.
arXiv Detail & Related papers (2017-10-16T02:21:24Z)