Deep synthesis regularization of inverse problems
- URL: http://arxiv.org/abs/2002.00155v1
- Date: Sat, 1 Feb 2020 06:50:42 GMT
- Title: Deep synthesis regularization of inverse problems
- Authors: Daniel Obmann, Johannes Schwab and Markus Haltmeier
- Abstract summary: In this paper, we introduce deep synthesis regularization (DESYRE), which uses a neural network as a nonlinear synthesis operator.
The proposed method exploits a key benefit of deep learning: it is well adjustable to the available training data.
We present a strategy for constructing a synthesis network as part of an analysis-synthesis sequence, together with an appropriate training strategy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, a large number of efficient deep learning methods for solving
inverse problems have been developed and show outstanding numerical
performance. For these deep learning methods, however, a solid theoretical
foundation in the form of reconstruction guarantees is missing. In contrast,
for classical reconstruction methods, such as convex variational and
frame-based regularization, theoretical convergence and convergence rate
results are well established. In this paper, we introduce deep synthesis
regularization (DESYRE), which uses a neural network as a nonlinear synthesis
operator and thereby bridges the gap between these two worlds. The proposed
method exploits the benefit of deep learning, namely its adaptability to the
available training data, while at the same time resting on a solid
mathematical foundation.
We present a complete convergence analysis with convergence rates for the
proposed deep synthesis regularization. We further present a strategy for constructing
a synthesis network as part of an analysis-synthesis sequence together with an
appropriate training strategy. Numerical results show the plausibility of our
approach.
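A minimal sketch of the synthesis-regularization idea: minimize $\|A D(z) - y\|^2 + \alpha \|z\|_1$ over the coefficients $z$ and reconstruct $x = D(z)$. The forward operator, decoder, and penalty below are made-up stand-ins (in DESYRE the synthesis network would be trained and the regularizer is the one covered by the convergence analysis), not the authors' exact construction.

```python
import torch

# Stand-in ingredients (NOT the paper's exact setup): a random linear
# forward operator A, and an untrained decoder D playing the role of the
# pretrained synthesis network.
torch.manual_seed(0)
n, m, k = 64, 32, 128                       # image / measurement / coefficient dims
A = torch.randn(m, n) / m ** 0.5
D = torch.nn.Sequential(
    torch.nn.Linear(k, 256), torch.nn.ReLU(), torch.nn.Linear(256, n))
x_true = D(torch.randn(k)).detach()         # a signal in the range of D
y = A @ x_true + 0.01 * torch.randn(m)      # noisy measurements

# Synthesis regularization: min_z ||A D(z) - y||^2 + alpha * ||z||_1,
# then reconstruct x = D(z). Solved here by plain (sub)gradient descent.
z = torch.zeros(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
alpha = 1e-3
for _ in range(2000):
    opt.zero_grad()
    loss = ((A @ D(z) - y) ** 2).sum() + alpha * z.abs().sum()
    loss.backward()
    opt.step()

x_rec = D(z).detach()
print(f"data residual: {(A @ x_rec - y).norm().item():.4f}")
```

Regularizing the coefficients $z$ rather than the image itself is what lets the classical frame-based convergence theory carry over to the learned setting.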
Related papers
- Data-Consistent Learning of Inverse Problems [3.160733772636691]
Inverse problems are inherently ill-posed, suffering from non-uniqueness and instability.
Data-consistent (DC) networks address this by enforcing the measurement model within the network architecture.
This approach preserves the theoretical reliability of classical schemes while leveraging the expressive power of data-driven learning.
arXiv Detail & Related papers (2026-01-19T08:41:12Z)
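One common reading of such a data-consistency layer, sketched in NumPy below (the paper's exact construction may differ): interleave a learned refinement with a gradient step on $\|Ax - y\|^2$, so the measurement model is hard-wired into the architecture.

```python
import numpy as np

# Generic data-consistency (DC) iteration: alternate a learned prior step
# with a gradient step on ||Ax - y||^2. The "denoiser" is a stand-in for
# a trained network.
rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.normal(size=(m, n)) / np.sqrt(m)    # known forward operator
x_true = rng.normal(size=n)
y = A @ x_true

def denoiser(x):
    # Stand-in for a trained refinement network (here: mild shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - 0.01, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step size
x = np.zeros(n)
for _ in range(200):
    x = denoiser(x)                         # learned prior step
    x = x - step * A.T @ (A @ x - y)        # data-consistency step
print("data fit:", np.linalg.norm(A @ x - y))
```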
- Iso-Riemannian Optimization on Learned Data Manifolds [6.345340156849189]
We introduce a principled framework for optimization on learned data manifolds using iso-Riemannian geometry.
We show that our approach yields interpretable barycentres, improved clustering, and provably efficient solutions to inverse problems.
These results establish that optimization under iso-Riemannian geometry can overcome distortions inherent to learned manifold mappings.
arXiv Detail & Related papers (2025-10-23T22:34:55Z)
- Relative Entropy Regularized Reinforcement Learning for Efficient Encrypted Policy Synthesis [0.6249768559720122]
We propose an efficient encrypted policy synthesis for privacy-preserving model-based reinforcement learning.
We first demonstrate that the relative-entropy-regularized reinforcement learning (RERL) framework offers a computationally convenient linear and "min-free" structure for value iteration.
Results demonstrate the effectiveness of the RERL framework in integrating fully homomorphic encryption (FHE) for encrypted policy synthesis.
arXiv Detail & Related papers (2025-06-14T05:41:03Z)
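For intuition, relative-entropy regularization (in the linearly solvable MDP sense) turns value iteration into the min-free linear recursion sketched below; the costs and passive dynamics are invented, and the paper's FHE-encrypted evaluation of such a recursion is not reproduced here.

```python
import numpy as np

# "Min-free" value iteration: with a relative-entropy (KL) penalty to a
# passive dynamics P, the desirability z = exp(-V) satisfies the linear
# fixed point z = diag(exp(-q)) P z -- only matrix-vector products, no
# min over actions. Solved by power iteration.
rng = np.random.default_rng(1)
S = 8                                        # number of states
q = rng.uniform(0.1, 1.0, size=S)            # state costs (made up)
P = rng.uniform(size=(S, S))
P /= P.sum(axis=1, keepdims=True)            # passive transition matrix

z = np.ones(S)
for _ in range(500):
    z = np.exp(-q) * (P @ z)
    z /= z.max()                             # z is defined up to scale
V = -np.log(z)
print("value function:", np.round(V, 3))
```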
- FBSJNN: A Theoretically Interpretable and Efficiently Deep Learning method for Solving Partial Integro-Differential Equations [0.0]
We propose a novel framework for solving a class of Partial Integro-Differential Equations (PIDEs) through a deep learning-based approach.
This method, termed the Forward-Backward Stochastic Jump Neural Network (FBSJNN), is both theoretically interpretable and numerically effective.
Numerical experiments indicate that the FBSJNN scheme can obtain numerical solutions with a relative error on the scale of $10^{-3}$.
arXiv Detail & Related papers (2024-12-15T01:37:48Z)
- Component-based Sketching for Deep ReLU Nets [55.404661149594375]
We develop a sketching scheme based on deep net components for various tasks.
We transform deep net training into a linear empirical risk minimization problem.
We show that the proposed component-based sketching provides almost optimal rates in approximating saturated functions.
arXiv Detail & Related papers (2024-09-21T15:30:43Z)
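A loose illustration of the reduction to linear empirical risk minimization: fix the inner (sketched) components and fit only the outer linear layer by least squares. The random ReLU components below are a stand-in, not the paper's construction scheme.

```python
import numpy as np

# Fix inner ReLU components, then deep-net training collapses to
# ordinary least squares in the outer weights.
rng = np.random.default_rng(2)
N, d, width = 500, 4, 200
X = rng.normal(size=(N, d))
y = np.tanh(X @ rng.normal(size=d))          # a saturated target function

W = rng.normal(size=(d, width))              # fixed (sketched) components
b = rng.normal(size=width)
features = np.maximum(X @ W + b, 0.0)        # ReLU feature map

# Linear ERM: min_c ||features @ c - y||^2, solved in closed form.
c, *_ = np.linalg.lstsq(features, y, rcond=None)
print("train RMSE:", np.sqrt(np.mean((features @ c - y) ** 2)))
```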
- Low-resolution Prior Equilibrium Network for CT Reconstruction [3.5639148953570836]
We present a novel deep learning-based CT reconstruction model in which a low-resolution image is introduced to obtain an effective regularization term for improving the network's robustness.
Experimental results on both sparse-view and limited-angle reconstruction problems are provided, demonstrating that our end-to-end low-resolution prior equilibrium model outperforms other state-of-the-art methods in terms of noise reduction, contrast-to-noise ratio, and preservation of edge details.
arXiv Detail & Related papers (2024-01-28T13:59:58Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
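The basic move these methods build on is the conditional-gradient (Frank-Wolfe) step, which linearizes the smooth part and calls a linear minimization oracle over the compact set; the least-squares objective and probability simplex below are illustrative choices, not the paper's general setting.

```python
import numpy as np

# Frank-Wolfe over the probability simplex: linearize the smooth objective,
# minimize the linearization over the set, and move toward the minimizer.
rng = np.random.default_rng(3)
m, n = 30, 10
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

x = np.ones(n) / n                           # start inside the simplex
for k in range(200):
    grad = A.T @ (A @ x - b)                 # gradient of the smooth part
    s = np.zeros(n)
    s[np.argmin(grad)] = 1.0                 # linear minimization oracle
    x += 2.0 / (k + 2) * (s - x)             # standard FW step size
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```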
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Dictionary and prior learning with unrolled algorithms for unsupervised
inverse problems [12.54744464424354]
We study dictionary and prior learning from degraded measurements as a bi-level problem.
We take advantage of unrolled algorithms to solve approximate synthesis and analysis formulations.
arXiv Detail & Related papers (2021-06-11T12:21:26Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the "complexity" of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- End-to-end reconstruction meets data-driven regularization for inverse problems [2.800608984818919]
We propose an unsupervised approach for learning end-to-end reconstruction operators for ill-posed inverse problems.
The proposed method combines the classical variational framework with iterative unrolling.
We demonstrate with the example of X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-06-07T12:05:06Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Deep Equilibrium Architectures for Inverse Problems in Imaging [14.945209750917483]
Recent efforts on solving inverse problems in imaging via deep neural networks use architectures inspired by a fixed number of iterations of an optimization method.
This paper describes an alternative approach corresponding to an infinite number of iterations, yielding up to a 4 dB PSNR improvement in reconstruction accuracy.
arXiv Detail & Related papers (2021-02-16T03:49:58Z)
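The fixed-point view, in a minimal NumPy sketch: rather than unrolling a fixed number of iterations, iterate an update map f until it reaches its equilibrium x* = f(x*). The soft-thresholding "denoiser" stands in for a trained network.

```python
import numpy as np

# Deep-equilibrium-style reconstruction: iterate
#   f(x) = denoise(x - step * A^T (A x - y))
# until x stops changing, i.e. compute the equilibrium x* = f(x*).
rng = np.random.default_rng(4)
n, m = 64, 48
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = rng.normal(size=n)
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2

def f(x):
    x = x - step * A.T @ (A @ x - y)                        # data-fidelity step
    return np.sign(x) * np.maximum(np.abs(x) - 1e-4, 0.0)   # stand-in "denoiser"

x = np.zeros(n)
for it in range(10_000):
    x_next = f(x)
    if np.linalg.norm(x_next - x) < 1e-9:    # equilibrium reached
        break
    x = x_next
print(f"fixed-point residual after {it} steps: {np.linalg.norm(f(x) - x):.2e}")
```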
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, within a number of iterations logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
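For reference, the classical ISTA iteration that (N)LISTA-style networks unroll and train: in the learned version the step matrices and thresholds become trainable layer by layer, whereas the sketch below uses the fixed, hand-set values of the hand-designed baseline on a synthetic sparse problem.

```python
import numpy as np

# Classical ISTA for min_z 0.5*||A z - y||^2 + lam * ||z||_1:
#   z <- soft(z - (1/L) A^T (A z - y), lam / L)
rng = np.random.default_rng(5)
m, n, s = 50, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
z_true = np.zeros(n)
z_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = A @ z_true

L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
lam = 0.05
z = np.zeros(n)
for _ in range(500):
    g = z - (A.T @ (A @ z - y)) / L          # gradient step on the data fit
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
print("relative error:", np.linalg.norm(z - z_true) / np.linalg.norm(z_true))
```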
This list is automatically generated from the titles and abstracts of the papers on this site.