Deep Learning and Bayesian inference for Inverse Problems
- URL: http://arxiv.org/abs/2308.15492v1
- Date: Mon, 28 Aug 2023 04:27:45 GMT
- Title: Deep Learning and Bayesian inference for Inverse Problems
- Authors: Ali Mohammad-Djafari, Ning Chu, Li Wang, Liang Yu
- Abstract summary: We focus on neural networks (NN), deep learning (DL) and, more specifically, Bayesian DL adapted to inverse problems.
We consider two cases: first, the case where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods.
- Score: 8.315530799440554
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Inverse problems arise wherever we have indirect measurements. As they are, in general, ill-posed, obtaining satisfactory solutions requires prior knowledge. Classically, different regularization methods and Bayesian inference based methods have been proposed. Because these methods require a great number of forward and backward computations, they become computationally costly, in particular when the forward or generative models are complex and evaluating the likelihood is itself expensive. Deep Neural Network surrogate models and approximate computation can then be very helpful. However, to account for uncertainties, we first need to understand Bayesian Deep Learning; then we can see how to use it for inverse problems. In this work, we focus on neural networks (NN), deep learning (DL) and, more specifically, Bayesian DL adapted to inverse problems. We first detail Bayesian DL approximate computations with exponential families, and then show how they can be used for inverse problems. We consider two cases: first, the case where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods.
Keywords: Neural Networks, Variational Bayesian inference, Bayesian Deep Learning (DL), Inverse problems, Physics-based DL.
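To make the first case concrete, here is a minimal, hypothetical sketch (not code from the paper) of a neural inverse map trained with a known forward operator acting as a physics constraint: the training loss combines the supervised reconstruction error with the data-fidelity term ||A x_hat - y||^2 enforced by the known operator. The operator A, the network, and the weight lam below are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): physics-constrained
# training of a neural inverse map when the forward operator A is known.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, m = 32, 24                        # signal and measurement dimensions
A = torch.randn(m, n) / m ** 0.5     # known (here random) linear forward operator

# Synthetic training pairs: x ~ prior, y = A x + noise
x_train = torch.randn(512, n)
y_train = x_train @ A.T + 0.01 * torch.randn(512, m)

net = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 0.1                            # weight of the physics (data-fidelity) term

for epoch in range(200):
    opt.zero_grad()
    x_hat = net(y_train)
    # Supervised reconstruction error + physics constraint ||A x_hat - y||^2
    loss = ((x_hat - x_train) ** 2).mean() + lam * ((x_hat @ A.T - y_train) ** 2).mean()
    loss.backward()
    opt.step()
```

In the second, fully data-driven case, the physics term is simply dropped and the map is learned from (x, y) pairs alone; a Bayesian DL variant would further replace the point estimate of the network weights with an approximate (for instance exponential-family) posterior to quantify uncertainty.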
Related papers
- Bayesian Physics Informed Neural Networks for Linear Inverse problems [1.30536490219656]
Inverse problems arise in science and engineering where we need to infer a quantity from indirect observations.
The BPINN concept integrates physical laws with deep learning techniques to enhance speed, accuracy and efficiency.
We consider two cases, supervised and unsupervised training, obtain expressions for the posterior probability of the unknown variables, and deduce the posterior laws of the NN's parameters.
arXiv Detail & Related papers (2025-02-18T14:52:57Z)
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Deep Backtracking Counterfactuals for Causally Compliant Explanations [57.94160431716524]
We introduce a practical method called deep backtracking counterfactuals (DeepBC) for computing backtracking counterfactuals in structural causal models.
As a special case, our formulation reduces to methods in the field of counterfactual explanations.
arXiv Detail & Related papers (2023-10-11T17:11:10Z)
- Deep Learning and Inverse Problems [8.315530799440554]
In computer vision and in image and video processing, these methods are mainly based on Neural Networks (NN) and in particular Convolutional NNs (CNN).
arXiv Detail & Related papers (2023-09-02T02:53:54Z)
- DRIP: Deep Regularizers for Inverse Problems [15.919986945096182]
We introduce a new family of neural regularizers for the solution of inverse problems.
These regularizers are based on a variational formulation and are guaranteed to fit the data.
We demonstrate their use on a number of highly ill-posed problems, from image deblurring to limited angle tomography.
arXiv Detail & Related papers (2023-03-30T10:35:00Z)
- Scaling Laws Beyond Backpropagation [64.0476282000118]
We study the ability of Direct Feedback Alignment to train causal decoder-only Transformers efficiently.
We find that DFA fails to offer more efficient scaling than backpropagation.
arXiv Detail & Related papers (2022-10-26T10:09:14Z)
- Semi-supervised Invertible DeepONets for Bayesian Inverse Problems [8.594140167290098]
DeepONets offer a powerful, data-driven tool for solving parametric PDEs by learning operators.
In this work, we employ physics-informed DeepONets in the context of high-dimensional, Bayesian inverse problems; a minimal DeepONet sketch appears after this list.
arXiv Detail & Related papers (2022-09-06T18:55:06Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores, for the first time, Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference; a minimal Monte Carlo Dropout sketch appears after this list.
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- Deep Feedback Inverse Problem Solver [141.26041463617963]
We present an efficient, effective, and generic approach towards solving inverse problems.
We leverage the feedback signal provided by the forward process and learn an iterative update model.
Our approach places no restrictions on the forward process, nor does it require any prior knowledge.
arXiv Detail & Related papers (2021-01-19T16:49:06Z)
- Scaling Up Bayesian Uncertainty Quantification for Inverse Problems using Deep Neural Networks [2.455468619225742]
We propose a novel CES approach for Bayesian inference based on deep neural network (DNN) models for the emulation phase.
The resulting algorithm is not only computationally more efficient, but also less sensitive to the training set.
Overall, our method, henceforth called the Dimension-Reduced Emulative Autoencoder Monte Carlo (DREAM) algorithm, is able to scale Bayesian UQ up to thousands of dimensions in physics-constrained inverse problems.
arXiv Detail & Related papers (2021-01-11T14:18:38Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
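For the semi-supervised invertible DeepONets entry above, the following is a minimal, generic DeepONet sketch, not code from that paper: a branch network encodes the input function sampled at fixed sensor points, a trunk network encodes a query coordinate, and the operator value is their inner product. The sensor count, widths and layers are illustrative assumptions.

```python
# Minimal generic DeepONet sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, n_sensors=100, width=64):
        super().__init__()
        # Branch net: input function u sampled at n_sensors fixed locations
        self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.ReLU(),
                                    nn.Linear(width, width))
        # Trunk net: query coordinate y where the output function is evaluated
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.ReLU(),
                                   nn.Linear(width, width))

    def forward(self, u_sensors, y_query):
        b = self.branch(u_sensors)            # (batch, width)
        t = self.trunk(y_query)               # (batch, width)
        return (b * t).sum(-1, keepdim=True)  # G(u)(y), shape (batch, 1)

model = DeepONet()
out = model(torch.randn(8, 100), torch.rand(8, 1))   # 8 operator evaluations
```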
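For the MOOC-forum entry above, this is a minimal sketch of Monte Carlo Dropout, the first of the two uncertainty methods it names; the classifier and the 128-dimensional input encoding are illustrative assumptions, not that paper's model. Dropout is kept active at test time and several stochastic forward passes are averaged to obtain a predictive mean plus a spread that serves as an uncertainty estimate.

```python
# Minimal Monte Carlo Dropout sketch (illustrative classifier, not the paper's).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Average stochastic forward passes with dropout left on at test time."""
    model.train()                          # .train() keeps dropout stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)     # predictive mean and spread

x = torch.randn(4, 128)                    # 4 hypothetical encoded forum posts
mean, std = mc_dropout_predict(model, x)
```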