Deep neural networks for inverse problems with pseudodifferential
operators: an application to limited-angle tomography
- URL: http://arxiv.org/abs/2006.01620v1
- Date: Tue, 2 Jun 2020 14:03:41 GMT
- Title: Deep neural networks for inverse problems with pseudodifferential
operators: an application to limited-angle tomography
- Authors: Tatiana A. Bubba, Mathilde Galinier, Matti Lassas, Marco Prato, Luca
Ratti, Samuli Siltanen
- Abstract summary: We propose a novel convolutional neural network (CNN) designed for learning pseudodifferential operators ($\Psi$DOs) in the context of linear inverse problems.
We show that, under rather general assumptions on the forward operator, the unfolded iterations of ISTA can be interpreted as the successive layers of a CNN.
In particular, we prove that, in the case of LA-CT, the operations of upscaling, downscaling and convolution, can be exactly determined by combining the convolutional nature of the limited angle X-ray transform and basic properties defining a wavelet system.
- Score: 0.4110409960377149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel convolutional neural network (CNN), called $\Psi$DONet,
designed for learning pseudodifferential operators ($\Psi$DOs) in the context
of linear inverse problems. Our starting point is the Iterative Soft
Thresholding Algorithm (ISTA), a well-known algorithm to solve
sparsity-promoting minimization problems. We show that, under rather general
assumptions on the forward operator, the unfolded iterations of ISTA can be
interpreted as the successive layers of a CNN, which in turn provides fairly
general network architectures that, for a specific choice of the parameters
involved, allow one to reproduce ISTA, or a perturbation of ISTA for which we can
bound the coefficients of the filters. Our case study is the limited-angle
X-ray transform and its application to limited-angle computed tomography
(LA-CT). In particular, we prove that, in the case of LA-CT, the operations of
upscaling, downscaling and convolution, which characterize our $\Psi$DONet and
most deep learning schemes, can be exactly determined by combining the
convolutional nature of the limited angle X-ray transform and basic properties
defining an orthogonal wavelet system. We test two different implementations of
$\Psi$DONet on simulated data from limited-angle geometry, generated from the
ellipse data set. Both implementations provide equally good and noteworthy
preliminary results, showing the potential of the approach we propose and
paving the way to applying the same idea to other convolutional operators which
are $\Psi$DOs or Fourier integral operators.
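To make the unrolling idea concrete, here is a minimal LISTA-style sketch of the construction the abstract describes: each layer performs one ISTA step, x_{k+1} = S_{tau*lam}(x_k + tau * A^T(y - A x_k)), plus a small learned convolutional perturbation. The matrix forward operator A, the fixed step size tau, the threshold lam, and the generic 1D convolutions are illustrative placeholders, not the exact $\Psi$DONet parameterization (which works in the wavelet domain and bounds the filter coefficients).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledISTA(nn.Module):
    """Sketch of LISTA-style unrolling: each layer mimics one ISTA step,
    x_{k+1} = soft_threshold(x_k + tau * A^T (y - A x_k), tau * lam),
    followed by a small learned convolution standing in for the learned
    pseudodifferential correction that PsiDONet parameterizes."""

    def __init__(self, A, n_layers=10, tau=1e-3, lam=1e-2):
        super().__init__()
        self.register_buffer("A", A)   # forward operator, assumed given as a matrix
        self.tau, self.lam = tau, lam
        # one learned 1D convolution per layer (illustrative placeholder)
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, 1, kernel_size=5, padding=2, bias=False)
             for _ in range(n_layers)]
        )

    def forward(self, y):
        x = torch.zeros(self.A.shape[1], device=y.device)
        for conv in self.convs:
            z = x + self.tau * (self.A.t() @ (y - self.A @ x))   # data-consistency step
            z = z + conv(z.view(1, 1, -1)).view(-1)              # learned perturbation
            x = F.softshrink(z, self.tau * self.lam)             # soft thresholding
        return x
```

Trained end-to-end on pairs (y, x), such a network reduces exactly to ISTA for one choice of the parameters and to a learned perturbation of ISTA otherwise, which is the sense in which the abstract speaks of reproducing ISTA.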
Related papers
- Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation [8.35644084613785]
We introduce the maximal update parameterization ($\mu$P) in the infinite-width limit for two representative designs of local targets.
By analyzing deep linear networks, we found that PC's gradients interpolate between first-order and Gauss-Newton-like gradients.
We demonstrate that, in specific standard settings, PC in the infinite-width limit behaves more similarly to the first-order gradient.
arXiv Detail & Related papers (2024-11-04T11:38:27Z)
- GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent [12.409030267572243]
We make a batch of neural network outputs satisfy bounded and general linear constraints.
This is the first general linear satisfiability layer in which all the operations are differentiable and matrix-factorization-free.
arXiv Detail & Related papers (2024-09-26T03:12:53Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are used for a wide range of learning tasks.
In this paper we examine the use of convex neural recovery models.
We show that all stationary points of the nonconvex training objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
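As a rough, generic illustration of the averaging idea behind such stabilization (not the paper's exact scheme), the snippet below interpolates linearly between the current parameters and the point proposed by an inner optimizer; `inner_step` and `alpha` are hypothetical placeholders.

```python
def interpolated_step(params, inner_step, alpha=0.5):
    """Move only part of the way toward the inner optimizer's proposal.
    When the update operator is nonexpansive, such averaged
    (Krasnosel'skii-Mann style) iterations damp the oscillations that
    plain iterations can exhibit on nonmonotone problems."""
    proposed = inner_step(params)  # e.g. one or several base-optimizer steps
    return [(1 - alpha) * p + alpha * q for p, q in zip(params, proposed)]
```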
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- A max-affine spline approximation of neural networks using the Legendre
transform of a convex-concave representation [0.3007949058551534]
This work presents a novel algorithm for transforming a neural network into a spline representation.
The only constraint is that the function be bounded and possess a well-defined second derivative.
It can also be performed over the whole network rather than on each layer independently.
arXiv Detail & Related papers (2023-07-16T17:01:20Z)
- On the Global Convergence of Natural Actor-Critic with Two-layer Neural
Network Parametrization [38.32265770020665]
We study a natural actor-critic algorithm that utilizes neural networks to represent the critic.
Our aim is to establish sample complexity guarantees for this algorithm, achieving a deeper understanding of its performance characteristics.
arXiv Detail & Related papers (2023-06-18T06:22:04Z)
- A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers.
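The SDP machinery itself does not fit in a snippet, but as a baseline for what a Lipschitz-constrained layer looks like in code, the generic spectral-normalization construction below also yields (approximately) 1-Lipschitz layers; it uses a standard PyTorch utility and is not the SLL layer proposed in the paper.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Generic (approximately) 1-Lipschitz block: spectral normalization keeps each
# weight matrix's operator norm near 1, and ReLU is itself 1-Lipschitz, so the
# composition is 1-Lipschitz. SLL layers are a more expressive construction
# derived from the SDP condition mentioned above.
lipschitz_block = nn.Sequential(
    spectral_norm(nn.Linear(128, 128)),
    nn.ReLU(),
    spectral_norm(nn.Linear(128, 128)),
)
```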
arXiv Detail & Related papers (2023-03-06T14:31:09Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Convolutional Proximal Neural Networks and Plug-and-Play Algorithms [0.225596179391365]
In this paper, we introduce convolutional proximal neural networks (cPNNs).
For filters of full length, we propose training cPNNs by optimization over submanifolds of the Stiefel manifold.
Then, we investigate how scaled cPNNs with a prescribed Lipschitz constant can be used for denoising signals and images.
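To illustrate the plug-and-play side in code, here is a generic forward-backward splitting loop in which a learned denoiser replaces the proximal operator of the regularizer; `A`, `step`, and `denoiser` are placeholders, and this is not the specific cPNN construction (whose point is precisely that the denoiser is built to be an averaged operator so that the iteration provably converges).

```python
import torch

def pnp_forward_backward(y, A, denoiser, n_iter=50, step=1e-2):
    """Generic plug-and-play forward-backward splitting:
    a gradient step on the data fidelity 0.5 * ||A x - y||^2, followed by a
    learned denoiser used in place of a proximal operator."""
    x = torch.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.t() @ (A @ x - y))  # data-consistency (gradient) step
        x = denoiser(x)                       # learned denoising / proximal step
    return x
```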
arXiv Detail & Related papers (2020-11-04T13:32:46Z)
- Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-level graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.