Deep Learning and Inverse Problems
- URL: http://arxiv.org/abs/2309.00802v1
- Date: Sat, 2 Sep 2023 02:53:54 GMT
- Title: Deep Learning and Inverse Problems
- Authors: Ali Mohammad-Djafari, Ning Chu, Li Wang, Liang Yu
- Abstract summary: In computer vision, image and video processing, these methods are mainly based on Neural Networks (NN) and in particular Convolutional NN (CNN).
- Score: 8.315530799440554
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine Learning (ML) methods and tools have gained great success in many
data, signal, image and video processing tasks, such as classification,
clustering, object detection, semantic segmentation, language processing,
Human-Machine interface, etc. In computer vision, image and video processing,
these methods are mainly based on Neural Networks (NN) and in particular
Convolutional NN (CNN), and more generally Deep NN. Inverse problems arise
anywhere we have indirect measurements. As these inverse problems are, in
general, ill-posed, obtaining satisfactory solutions for them requires prior
information. Different regularization methods have been proposed, where the
problem becomes the optimization of a criterion with a likelihood term and a
regularization term. In high-dimensional real applications, however, the main
difficulty remains the computational cost. Here, NN, and in particular Deep
Learning (DL) surrogate models and approximate computation, can be very
helpful. In this work, we focus on NN and DL particularly adapted for inverse
problems. We consider two cases: first, the case where the forward operator is
known and used as a physics constraint; second, more general data-driven DL
methods.
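The criterion described in the abstract, a likelihood (data-fidelity) term plus a regularization term, can be made concrete in the linear Gaussian case as J(x) = ||y - Hx||^2 + lambda ||x||^2. Below is a minimal NumPy sketch of this setting; the forward operator H, the noise level, and lambda are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-posed linear inverse problem: y = H x + noise,
# with a cumulative-averaging operator H as a stand-in forward model.
n = 50
H = np.tril(np.ones((n, n))) / n
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = H @ x_true + 0.01 * rng.standard_normal(n)

# Tikhonov regularization: minimize ||y - H x||^2 + lam * ||x||^2.
# The closed-form minimizer is x = (H^T H + lam I)^{-1} H^T y.
lam = 1e-3
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Replacing the closed-form solve with a learned network, or keeping H inside the loss of a network-based solver, corresponds to the two flavors the paper distinguishes: physics-constrained versus data-driven DL.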
Related papers
- What to Do When Your Discrete Optimization Is the Size of a Neural Network? [24.546550334179486]
Machine learning applications using neural networks involve solving discrete optimization problems.
Classical approaches used in discrete settings do not scale well to large neural networks.
We take continuation path (CP) methods as representative of purely continuous approaches and Monte Carlo (MC) methods as representative of purely discrete ones.
arXiv Detail & Related papers (2024-02-15T21:57:43Z)
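As a rough sketch of the continuation idea above (illustrative assumptions throughout, not this paper's algorithm): relax binary variables through a temperature-controlled sigmoid, optimize the relaxed objective, and anneal the temperature so the relaxation tightens toward the discrete problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete problem: choose s in {0,1}^n to minimize ||A s - b||^2.
n = 20
A = rng.standard_normal((30, n))
s_true = (rng.random(n) < 0.3).astype(float)
b = A @ s_true

# Continuation path: parametrize s = sigmoid(theta / tau) and anneal tau,
# so the smooth relaxation gradually approaches the discrete problem.
theta = np.zeros(n)
for tau in [2.0, 1.0, 0.5, 0.25]:
    for _ in range(500):
        s = 1.0 / (1.0 + np.exp(-theta / tau))
        grad_s = 2 * A.T @ (A @ s - b)               # d/ds of the quadratic loss
        theta -= 0.005 * grad_s * s * (1 - s) / tau  # chain rule through the sigmoid
s_hat = (theta > 0).astype(float)                    # round the final relaxation
print("discrete loss:", np.sum((A @ s_hat - b) ** 2))
```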
- Differentiable Visual Computing for Inverse Problems and Machine Learning [27.45555082573493]
Visual computing methods are used to analyze geometry, physically simulate solids, fluids, and other media, and render the world via optical techniques.
Deep learning (DL) allows for the construction of general algorithmic models, sidestepping the need for a purely first-principles-based approach to problem solving.
DL is powered by highly parameterized neural network architectures -- universal function approximators -- and gradient-based search algorithms.
arXiv Detail & Related papers (2023-11-21T23:02:58Z)
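A tiny sketch of the "differentiable pipeline plus gradient-based search" idea from the entry above (a made-up free-fall model and synthetic observations, not the paper's system): simulate, compare with observations, and follow the gradient with respect to a physical parameter.

```python
import numpy as np

# Differentiable toy simulator: projectile height y(t) = v0*t - 0.5*g*t^2.
t = np.linspace(0.0, 2.0, 50)
g_true, v0 = 9.81, 20.0
y_obs = v0 * t - 0.5 * g_true * t**2    # synthetic "observed" trajectory

g = 5.0                                  # initial guess for the unknown parameter
for _ in range(200):
    residual = (v0 * t - 0.5 * g * t**2) - y_obs
    grad_g = np.sum(residual * (-0.5 * t**2))  # gradient of 0.5*||residual||^2 w.r.t. g
    g -= 0.01 * grad_g                         # gradient-based search
print(f"estimated g: {g:.4f}")                 # converges to ~9.81
```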
- Deep Learning and Bayesian inference for Inverse Problems [8.315530799440554]
We focus on NN, DL and more specifically the Bayesian DL particularly adapted for inverse problems.
We consider two cases: first, the case where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods.
arXiv Detail & Related papers (2023-08-28T04:27:45Z)
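One standard recipe for approximate Bayesian uncertainty in deep networks is Monte Carlo dropout; the PyTorch sketch below is generic (not necessarily the method used in the paper above): keep dropout active at prediction time and read uncertainty off the spread of repeated stochastic passes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Untrained toy regressor with dropout; this shows the mechanism only.
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

x = torch.linspace(-1.0, 1.0, 5).unsqueeze(1)
net.train()                        # .train() keeps Dropout stochastic at inference
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(100)])  # 100 stochastic passes
mean, std = samples.mean(dim=0), samples.std(dim=0)
print("predictive mean:", mean.squeeze())
print("predictive std: ", std.squeeze())  # spread ~ approximate uncertainty
```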
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
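A minimal illustration of the OOD difficulty (a made-up task, not the paper's benchmarks): a network fit on a narrow input range typically extrapolates poorly outside it, which is exactly what algorithmic reasoning on larger instances demands.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fit f(x) = x^2 on x in [0, 1], then query far outside the training range.
net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x_train = torch.rand(256, 1)
for _ in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_train), x_train ** 2)
    loss.backward()
    opt.step()

print("in-distribution  f(0.5):", net(torch.tensor([[0.5]])).item())  # close to 0.25
print("out-of-distribution f(3):", net(torch.tensor([[3.0]])).item(), "(true: 9.0)")
```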
- Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
It combines this representation with a dueling double deep Q-network to create an end-to-end algorithm that solves CNP for the first time.
arXiv Detail & Related papers (2021-12-03T14:23:05Z)
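The objective itself is easy to state: pairwise connectivity counts the node pairs that remain connected. The sketch below is a plain greedy baseline using networkx (an assumed dependency), not the paper's learned attention/Q-network agent.

```python
import networkx as nx

def pairwise_connectivity(G):
    # Number of connected node pairs: sum over components of |C| choose 2.
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def greedy_cnp(G, k):
    # Greedily delete the node whose removal most degrades connectivity.
    G = G.copy()
    removed = []
    for _ in range(k):
        best = min(G.nodes,
                   key=lambda v: pairwise_connectivity(nx.restricted_view(G, [v], [])))
        removed.append(best)
        G.remove_node(best)
    return removed, pairwise_connectivity(G)

G = nx.barbell_graph(5, 1)   # two 5-cliques joined through one path node
print(greedy_cnp(G, 1))      # deleting the bridge node splits the graph
```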
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
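For readers unfamiliar with the setup, a PINN minimizes a residual loss built from the differential equation itself. Here is a minimal sketch for the ODE u' = -u with u(0) = 1 (an illustrative toy, far simpler than the failure cases the paper studies).

```python
import torch

torch.manual_seed(0)

# PINN for u'(x) = -u(x), u(0) = 1 on (0, 1); the exact solution is exp(-x).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.rand(64, 1, requires_grad=True)                 # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    physics = ((du + u) ** 2).mean()                          # ODE residual term
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # u(0) = 1 term
    loss = physics + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()

print("u(1) =", net(torch.ones(1, 1)).item(), "(exact ~0.3679)")
```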
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
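The naive baseline that motivates this line of work is easy to state: fit a model to the fixed dataset and ascend its predictions, which can exploit model error away from the data. The sketch below shows that baseline on synthetic data (not the paper's NML estimator).

```python
import torch

torch.manual_seed(0)

# Offline dataset: queries of f(x) = -x^2 restricted to x in [-1, 1].
x_data = torch.rand(200, 1) * 2 - 1
y_data = -x_data ** 2 + 0.05 * torch.randn_like(x_data)

model = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    torch.nn.functional.mse_loss(model(x_data), y_data).backward()
    opt.step()

# Naive design step: gradient ascent on the learned model's prediction.
x = torch.zeros(1, 1, requires_grad=True)
x_opt = torch.optim.Adam([x], lr=0.05)
for _ in range(300):
    x_opt.zero_grad()
    (-model(x)).sum().backward()   # maximize model(x)
    x_opt.step()
# On this easy toy the ascent stays near the true optimum; with scarcer data
# or richer models, it can wander to regions where the model is simply wrong.
print("proposed design x =", x.item())
```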
- On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation [0.0]
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization.
Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case.
arXiv Detail & Related papers (2020-12-14T13:00:50Z)
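In the linear case the homotopy picture is the classical lasso regularization path; the short scikit-learn sketch below (synthetic data, and only the linear setting this paper generalizes) sweeps the L1 weight and tracks how the sparsity pattern changes.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)

# Sparse ground truth: 3 of 20 coefficients are nonzero.
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)

# Sweep the L1 penalty from strong to weak; homotopy methods follow this
# path, watching coefficients enter (or leave) the active set.
alphas, coefs, _ = lasso_path(X, y, alphas=np.logspace(0, -3, 10))
for a, c in zip(alphas, coefs.T):
    print(f"alpha = {a:7.4f}   nonzero coefficients = {np.count_nonzero(np.abs(c) > 1e-8)}")
```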
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
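One concrete way to inject such symmetry knowledge (a simple group-averaging sketch, not a specific method from the survey) is to average a network's output over all four 90-degree rotations of its input, which makes the result exactly rotation-invariant by construction.

```python
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(8 * 8, 10))

def invariant(net, x):
    # Average over the rotation group {0, 90, 180, 270 degrees}.
    outs = [net(torch.rot90(x, k, dims=(-2, -1))) for k in range(4)]
    return torch.stack(outs).mean(dim=0)

x = torch.randn(1, 1, 8, 8)
x_rot = torch.rot90(x, 1, dims=(-2, -1))
print(torch.allclose(invariant(net, x), invariant(net, x_rot), atol=1e-5))  # True
```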
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.