Joint learning of variational representations and solvers for inverse
problems with partially-observed data
- URL: http://arxiv.org/abs/2006.03653v1
- Date: Fri, 5 Jun 2020 19:53:34 GMT
- Title: Joint learning of variational representations and solvers for inverse
problems with partially-observed data
- Authors: Ronan Fablet, Lucas Drumetz, Francois Rousseau
- Abstract summary: In this paper, we design an end-to-end framework that allows learning actual variational frameworks for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both stated as neural networks, with automatic differentiation used for the latter.
This leads to a data-driven discovery of variational models.
- Score: 13.984814587222811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing appropriate variational regularization schemes is a crucial part of
solving inverse problems, making them better-posed and guaranteeing that the
solution of the associated optimization problem satisfies desirable properties.
Recently, learning-based strategies have proven very efficient for solving
inverse problems, by learning direct inversion schemes or plug-and-play
regularizers from available pairs of true states and observations. In this
paper, we go a step further and design an end-to-end framework that allows
learning actual variational frameworks for inverse problems in such a
supervised setting. The variational cost and the gradient-based solver are both
stated as neural networks, with automatic differentiation used for the latter. We can jointly
learn both components to minimize the data reconstruction error on the true
states. This leads to a data-driven discovery of variational models. We
consider an application to inverse problems with incomplete datasets (image
inpainting and multivariate time series interpolation). We experimentally
illustrate that this framework can lead to a significant gain in
reconstruction performance, including with respect to the direct minimization of the
variational formulation derived from the known generative model.
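The core idea of the abstract, stating a variational cost and a fixed-depth gradient-based solver, then tuning the model to minimize reconstruction error against known true states, can be illustrated with a deliberately simplified sketch. Here the learned neural-network cost is replaced by a quadratic smoothness prior with a single weight `lam` (a hypothetical stand-in; the paper learns full neural parameterizations via automatic differentiation), and "learning" is reduced to selecting `lam` by supervised reconstruction error on a 1-D interpolation task with partial observations:

```python
import numpy as np

def grad_cost(x, y, mask, lam):
    """Gradient of U(x) = sum(mask * (x - y)^2) + lam * sum(diff(x)^2)."""
    g = 2.0 * mask * (x - y)          # data-fidelity term (observed entries only)
    d = np.diff(x)
    g_smooth = np.zeros_like(x)
    g_smooth[:-1] -= 2.0 * d          # quadratic smoothness prior
    g_smooth[1:] += 2.0 * d
    return g + lam * g_smooth

def unrolled_solver(y, mask, lam, n_iter=300):
    """Fixed number of gradient steps on the variational cost (the 'solver')."""
    step = 1.0 / (2.0 + 8.0 * lam)    # below 2/L for the quadratic cost: stable
    x = y * mask                      # initialize from the observations
    for _ in range(n_iter):
        x = x - step * grad_cost(x, y, mask, lam)
    return x

def fit_lam(y, mask, x_true, candidates=(0.01, 0.1, 1.0, 10.0)):
    """Supervised 'learning' of the variational model: pick the prior weight
    that minimizes reconstruction error against the true state."""
    errs = {lam: np.mean((unrolled_solver(y, mask, lam) - x_true) ** 2)
            for lam in candidates}
    return min(errs, key=errs.get)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 64)
x_true = np.sin(t)                                   # true state
mask = (rng.random(t.size) < 0.5).astype(float)      # ~50% of entries observed
y = x_true * mask                                    # partially-observed data

best_lam = fit_lam(y, mask, x_true)
x_hat = unrolled_solver(y, mask, best_lam)
```

In the paper both the cost and the solver are neural networks trained jointly end to end; the grid search over `lam` above merely mimics that outer supervised objective in a form that fits a few lines.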
Related papers
- Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Transformer Meets Boundary Value Inverse Problems [4.165221477234755]
A Transformer-based deep direct sampling method is proposed for solving a class of boundary value inverse problems.
A real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and reconstructed images.
arXiv Detail & Related papers (2022-09-29T17:45:25Z)
- A Novel Plug-and-Play Approach for Adversarially Robust Generalization [38.72514422694518]
We propose a robust framework that employs adversarially robust training to safeguard the ML models against perturbed testing data.
Our contributions can be seen from both computational and statistical perspectives.
arXiv Detail & Related papers (2022-08-19T17:02:55Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- A variational inference framework for inverse problems [0.39373541926236766]
A framework is presented for fitting inverse problem models via variational Bayes approximations.
This methodology guarantees flexibility to statistical model specification for a broad range of applications.
An image processing application and a simulation exercise motivated by biomedical problems reveal the computational advantage offered by variational Bayes.
arXiv Detail & Related papers (2021-03-10T07:37:20Z)
- Learning Variational Data Assimilation Models and Solvers [34.22350850350653]
We introduce end-to-end neural network architectures for data assimilation.
A key feature of the proposed end-to-end learning architecture is that we may train the NN models using both supervised and unsupervised strategies.
arXiv Detail & Related papers (2020-07-25T14:28:48Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
- Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.