Inverse Problems with Learned Forward Operators
- URL: http://arxiv.org/abs/2311.12528v2
- Date: Mon, 18 Mar 2024 15:28:00 GMT
- Title: Inverse Problems with Learned Forward Operators
- Authors: Simon Arridge, Andreas Hauptmann, Yury Korolev
- Abstract summary: This chapter reviews reconstruction methods in inverse problems with learned forward operators that follow two different paradigms.
The first paradigm learns the operator's restriction to the subspace spanned by the training data; the framework of regularisation by projection is then used to find a reconstruction.
A common theme emerges: both methods require, or at least benefit from, training data not only for the forward operator, but also for its adjoint.
- Score: 2.162017337541015
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Solving inverse problems requires the knowledge of the forward operator, but accurate models can be computationally expensive and hence cheaper variants that do not compromise the reconstruction quality are desired. This chapter reviews reconstruction methods in inverse problems with learned forward operators that follow two different paradigms. The first one is completely agnostic to the forward operator and learns its restriction to the subspace spanned by the training data. The framework of regularisation by projection is then used to find a reconstruction. The second one uses a simplified model of the physics of the measurement process and only relies on the training data to learn a model correction. We present the theory of these two approaches and compare them numerically. A common theme emerges: both methods require, or at least benefit from, training data not only for the forward operator, but also for its adjoint.
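To make the two paradigms concrete, here is a minimal NumPy sketch under illustrative assumptions (the operators, dimensions, noise level, and the linear least-squares fits are mine, not the chapter's): paradigm one learns the restriction of the forward operator to the span of the training signals from input-output pairs and reconstructs via regularisation by projection; paradigm two keeps a cheap approximate operator and learns a linear model correction from the same pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n = 60, 100, 20                 # measurements, signal dimension, training pairs

# Hypothetical ground truth, only used to simulate data.
A = rng.standard_normal((m, d)) / np.sqrt(m)     # accurate forward operator (expensive/unknown)
X = rng.standard_normal((d, n))                  # training signals x_1..x_n as columns
Y = A @ X                                        # their measurements A x_i

x_true = X @ rng.standard_normal(n)              # unknown signal, taken in span(X)
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurement

# Paradigm 1: operator-agnostic. Writing x = X c, only the restriction of A
# to span(X) is needed, and that is exactly what the pairs (X, Y) encode.
# Regularisation by projection: least squares for the coefficients c.
c, *_ = np.linalg.lstsq(Y, y, rcond=None)
x_proj = X @ c

# Paradigm 2: simplified physics A0 plus a learned linear model correction C,
# fitted so that C @ (A0 x_i) matches the measured A x_i on the training data.
A0 = A + 0.2 * rng.standard_normal((m, d)) / np.sqrt(m)   # cheap, inaccurate model (assumed)
C = Y @ np.linalg.pinv(A0 @ X)
A_corr = C @ A0

# Tikhonov-regularised reconstruction with the corrected operator.
alpha = 1e-3
x_corr = np.linalg.solve(A_corr.T @ A_corr + alpha * np.eye(d), A_corr.T @ y)

for name, x_hat in [("projection", x_proj), ("corrected model", x_corr)]:
    print(name, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this linear toy model the adjoint of the learned restriction comes for free; as the abstract notes, in the general settings studied in the chapter both approaches require, or at least benefit from, training data for the adjoint as well.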
Related papers
- TAEN: A Model-Constrained Tikhonov Autoencoder Network for Forward and Inverse Problems [0.6144680854063939]
Real-time solvers for forward and inverse problems are essential in engineering and science applications.
Machine learning surrogate models have emerged as promising alternatives to traditional methods, offering substantially reduced computational time.
These models typically demand extensive training datasets to achieve robust generalization across diverse scenarios.
We propose a novel model-constrained Tikhonov autoencoder framework, called TAEN, capable of learning both forward and inverse surrogate models from a single arbitrary observation sample.
arXiv Detail & Related papers (2024-12-09T21:36:42Z)
- Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach [87.8330887605381]
We show how to adapt a pre-trained Vision Transformer to downstream recognition tasks with only a few learnable parameters.
We synthesize a task-specific query with a learnable and lightweight module, which is independent of the pre-trained model.
Our method achieves state-of-the-art performance under memory constraints, showcasing its applicability in real-world situations.
arXiv Detail & Related papers (2024-07-09T15:45:04Z)
- In-Context Convergence of Transformers [63.04956160537308]
We study the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent.
For data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process.
arXiv Detail & Related papers (2023-10-08T17:55:33Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Toward Theoretical Guidance for Two Common Questions in Practical Cross-Validation based Hyperparameter Selection [72.76113104079678]
We present the first theoretical treatments of two common questions in cross-validation based hyperparameter selection.
We show that the proposed generalizations can, respectively, always perform at least as well as the standard practices of always retraining or never retraining.
arXiv Detail & Related papers (2023-01-12T16:37:12Z)
- What learning algorithm is in-context learning? Investigations with linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We provide preliminary evidence that in-context learners share algorithmic features with these predictors.
arXiv Detail & Related papers (2022-11-28T18:59:51Z)
- Mixture Manifold Networks: A Computationally Efficient Baseline for Inverse Modeling [7.891408798179181]
We propose and show the efficacy of a new method to address generic inverse problems.
Recent work has shown impressive results using deep learning, but we note that there is a trade-off between model performance and computational time.
arXiv Detail & Related papers (2022-11-25T20:18:07Z)
- Transformer Meets Boundary Value Inverse Problems [4.165221477234755]
A Transformer-based deep direct sampling method is proposed for solving a class of boundary value inverse problems.
A real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and reconstructed images.
arXiv Detail & Related papers (2022-09-29T17:45:25Z)
- Analyzing Transformers in Embedding Space [59.434807802802105]
We present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space.
We show that parameters of both pretrained and fine-tuned models can be interpreted in embedding space.
Our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only.
arXiv Detail & Related papers (2022-09-06T14:36:57Z)
- Sampling Theorems for Unsupervised Learning in Linear Inverse Problems [11.54982866872911]
This paper presents necessary and sufficient sampling conditions for learning the signal model from partial measurements.
As our results are agnostic of the learning algorithm, they shed light on the fundamental limitations of learning from incomplete data.
arXiv Detail & Related papers (2022-03-23T16:17:22Z)
- Deep learning for inverse problems with unknown operator [0.0]
In inverse problems where the forward operator $T$ is unknown, we have access to training data consisting of functions $f_i$ and their noisy images $Tf_i$.
We propose a new method that requires minimal assumptions on the data, and prove reconstruction rates that depend on the number of training points and the noise level; a generic sketch of this setting appears after this entry.
arXiv Detail & Related papers (2021-08-05T17:21:12Z)
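Connecting the entry above to the main chapter: when only pairs $(f_i, Tf_i + \text{noise})$ are available, the simplest thing one can learn is the restriction of $T$ to the span of the training functions. The following sketch is a generic least-squares baseline under assumed dimensions and noise level, not the estimator or the reconstruction rates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, n, sigma = 50, 80, 30, 0.05                  # assumed problem sizes and noise level

T = rng.standard_normal((m, d)) / np.sqrt(m)       # unknown operator, used only to simulate data
F = rng.standard_normal((d, n))                    # training functions f_1..f_n as columns
G = T @ F + sigma * rng.standard_normal((m, n))    # noisy images T f_i + noise

# Least-squares estimate of T restricted to span(F): T_hat = G F^+.
T_hat = G @ np.linalg.pinv(F)

# The estimate is only trustworthy on span(F), so test it there.
f_new = F @ rng.standard_normal(n)
rel_err = np.linalg.norm(T_hat @ f_new - T @ f_new) / np.linalg.norm(T @ f_new)
print(f"relative error on span(F): {rel_err:.3f}")
```

More training pairs and lower noise shrink this error, which is the qualitative behaviour that reconstruction rates of the kind proved in the paper quantify.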
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.