Learned reconstruction methods for inverse problems: sample error estimates
- URL: http://arxiv.org/abs/2312.14078v1
- Date: Thu, 21 Dec 2023 17:56:19 GMT
- Title: Learned reconstruction methods for inverse problems: sample error estimates
- Authors: Luca Ratti
- Abstract summary: This dissertation addresses the generalization properties of learned reconstruction methods and, specifically, performs their sample error analysis.
A rather general strategy is proposed, whose assumptions are met for a large class of inverse problems and learned methods.
- Score: 0.8702432681310401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based and data-driven techniques have recently become a subject of
primary interest in the field of reconstruction and regularization of inverse
problems. Besides the development of novel methods, yielding excellent results
in several applications, their theoretical investigation has attracted growing
interest, e.g., on the topics of reliability, stability, and interpretability.
In this work, a general framework is described, allowing us to interpret many
of these techniques in the context of statistical learning. This is not
intended to provide a complete survey of existing methods, but rather to put
them in a working perspective, which naturally allows their theoretical
treatment. The main goal of this dissertation is therefore to address the
generalization properties of learned reconstruction methods and, specifically,
to perform their sample error analysis. This task, well developed in
statistical learning, consists in estimating how the learned operators depend
on the data employed for their training. A rather
general strategy is proposed, whose assumptions are met for a large class of
inverse problems and learned methods, as depicted via a selection of examples.
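As a point of reference for the abstract above, the following is a minimal sketch of what a sample error estimate quantifies, written in standard statistical-learning notation; the symbols (the reconstruction operator R_theta, the data distribution rho, the rate epsilon(n)) are illustrative choices and are not taken from the dissertation itself.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative statistical-learning notation (not the dissertation's own):
% R_theta is a parametric reconstruction operator and (y_i, x_i) ~ rho are
% training pairs of measurements and ground-truth images.
\begin{align*}
  \widehat{L}_n(\theta) &= \frac{1}{n}\sum_{i=1}^{n}\bigl\|R_\theta(y_i) - x_i\bigr\|^2
    && \text{(empirical risk)}\\
  L(\theta) &= \mathbb{E}_{(y,x)\sim\rho}\bigl\|R_\theta(y) - x\bigr\|^2
    && \text{(expected risk)}\\
  \widehat{\theta}_n &\in \arg\min_{\theta\in\Theta}\widehat{L}_n(\theta),
    \qquad \theta^\ast \in \arg\min_{\theta\in\Theta} L(\theta) && \\
  L(\widehat{\theta}_n) - L(\theta^\ast) &\le \varepsilon(n)
    && \text{(sample error, with } \varepsilon(n)\to 0 \text{ as } n\to\infty\text{)}
\end{align*}
\end{document}
```

A sample error analysis, in this sense, supplies the rate epsilon(n) under assumptions on the class of reconstruction operators and on the inverse problem at hand.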
Related papers
- Active learning for regression in engineering populations: A risk-informed approach [0.0]
  Regression is a fundamental prediction task common in data-centric engineering applications.
  Active learning is an approach for preferentially acquiring feature-label pairs in a resource-efficient manner.
  The proposed approach is shown to have superior performance in terms of expected cost, maintaining predictive performance while reducing the number of inspections required.
  arXiv Detail & Related papers (2024-09-06T15:03:42Z)
- Learned Regularization for Inverse Problems: Insights from a Spectral Model [1.4963011898406866]
  This chapter provides a theoretically founded investigation of state-of-the-art learning approaches for inverse problems.
  We give an extended definition of regularization methods and their convergence in terms of the underlying data distributions.
  arXiv Detail & Related papers (2023-12-15T14:50:14Z)
- A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning [129.63326990812234]
  We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
  On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps reveal the mystery of re-weighting and logit-adjustment.
  arXiv Detail & Related papers (2023-10-07T09:15:08Z)
- Rethinking Generative Methods for Image Restoration in Physics-based Vision: A Theoretical Analysis from the Perspective of Information [19.530052941884996]
  End-to-end generative methods are considered a more promising solution for image restoration in physics-based vision.
  However, existing generative methods still have plenty of room for improvement in quantitative performance.
  In this study, we re-interpret these generative methods for image restoration tasks using information theory.
  arXiv Detail & Related papers (2022-12-05T12:16:27Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
  Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
  Despite a growing literature showcasing the promise of these models, their theoretical underpinning remains underexplored.
  We present a formal treatment of retrieval-based models to characterize their generalization ability.
  arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Latent Properties of Lifelong Learning Systems [59.50307752165016]
  We introduce an algorithm-agnostic, explainable surrogate-modeling approach to estimate latent properties of lifelong learning algorithms.
  We validate the approach for estimating these properties via experiments on synthetic data.
  arXiv Detail & Related papers (2022-07-28T20:58:13Z)
- Learned reconstruction with convergence guarantees [3.9402707512848787]
  We specify relevant notions of convergence for data-driven image reconstruction.
  A highlighted example is the role of ICNNs, which offer the possibility to combine the power of deep learning with classical convex regularization theory.
  arXiv Detail & Related papers (2022-06-11T06:08:25Z)
- A Survey on Deep Semi-supervised Learning [51.26862262550445]
  We first present a taxonomy for deep semi-supervised learning that categorizes existing methods.
  We then offer a detailed comparison of these methods in terms of the type of losses, contributions, and architecture differences.
  arXiv Detail & Related papers (2021-02-28T16:22:58Z)
- Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms [91.3755431537592]
  We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
  We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
  arXiv Detail & Related papers (2021-01-26T17:11:40Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
  PRoFILE is a novel feature importance estimation method.
  We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
  arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Joint learning of variational representations and solvers for inverse problems with partially-observed data [13.984814587222811]
  In this paper, we design an end-to-end framework that allows us to learn actual variational frameworks for inverse problems in a supervised setting.
  The variational cost and the gradient-based solver are both stated as neural networks, using automatic differentiation for the latter (a minimal illustrative sketch follows this list).
  This leads to a data-driven discovery of variational models.
  arXiv Detail & Related papers (2020-06-05T19:53:34Z)
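The following is a minimal, self-contained sketch of the idea summarized in the last entry above: a variational cost learned as a neural network, with a gradient-based solver unrolled via automatic differentiation and trained end-to-end on supervised pairs. The toy denoising setup, network sizes, names, and training loop are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LearnedCost(nn.Module):
    """Parametric variational cost U_phi(x, y), here a small MLP (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.Softplus(), nn.Linear(64, 1)
        )

    def forward(self, x, y):
        # Scalar cost, summed over the batch.
        return self.net(torch.cat([x, y], dim=-1)).sum()

def unrolled_solver(cost, y, x0, n_steps=10, step=0.1):
    """Plain gradient descent on the learned cost; create_graph=True keeps the
    computation graph so training can backpropagate through the iterations."""
    x = x0
    for _ in range(n_steps):
        grad = torch.autograd.grad(cost(x, y), x, create_graph=True)[0]
        x = x - step * grad
    return x

# Supervised training on synthetic (observation, ground-truth) pairs.
dim = 16
cost = LearnedCost(dim)
opt = torch.optim.Adam(cost.parameters(), lr=1e-3)
for _ in range(5):                               # a few toy training iterations
    x_true = torch.randn(8, dim)                 # synthetic ground truth
    y = x_true + 0.1 * torch.randn(8, dim)       # toy noisy "observation"
    x0 = y.clone().requires_grad_(True)          # initial reconstruction
    x_rec = unrolled_solver(cost, y, x0)
    loss = ((x_rec - x_true) ** 2).mean()        # reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the solver's gradient steps are built with create_graph=True, the reconstruction loss can be backpropagated through the unrolled iterations to the parameters of the learned variational cost.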