Data-Consistent Learning of Inverse Problems
- URL: http://arxiv.org/abs/2601.12831v1
- Date: Mon, 19 Jan 2026 08:41:12 GMT
- Title: Data-Consistent Learning of Inverse Problems
- Authors: Markus Haltmeier, Gyeongha Hwang
- Abstract summary: Inverse problems are inherently ill-posed, suffering from non-uniqueness and instability. Data-consistent (DC) networks address this gap by enforcing the measurement model within the network architecture. This approach preserves the theoretical reliability of classical schemes while leveraging the expressive power of data-driven learning.
- Score: 3.160733772636691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse problems are inherently ill-posed, suffering from non-uniqueness and instability. Classical regularization methods provide mathematically well-founded solutions, ensuring stability and convergence, but often at the cost of reduced flexibility or visual quality. Learned reconstruction methods, such as convolutional neural networks, can produce visually compelling results, yet they typically lack rigorous theoretical guarantees. Data-consistent (DC) networks address this gap by enforcing the measurement model within the network architecture. In particular, null-space networks combined with a classical regularization method as an initial reconstruction define a convergent regularization method. This approach preserves the theoretical reliability of classical schemes while leveraging the expressive power of data-driven learning, yielding reconstructions that are both accurate and visually appealing.
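The null-space construction described in the abstract can be sketched numerically: a learned correction is projected onto the null space of the forward operator, so the output always reproduces the measured data. The following is a minimal NumPy sketch, not the authors' implementation; the forward operator `A` and the placeholder `net` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # underdetermined linear forward operator (illustrative)
A_pinv = np.linalg.pinv(A)        # Moore-Penrose pseudoinverse

def null_space_projection(x):
    """Project x onto the null space of A: (I - A^+ A) x."""
    return x - A_pinv @ (A @ x)

def net(x):
    # Placeholder for a trained network; here just a fixed linear map.
    return 0.1 * x

def reconstruct(y):
    x_init = A_pinv @ y           # classical initial reconstruction
    # Add a learned correction that is invisible to the measurements.
    return x_init + null_space_projection(net(x_init))

y = rng.standard_normal(3)
x_rec = reconstruct(y)
# Data consistency: the reconstruction exactly explains the measurements.
assert np.allclose(A @ x_rec, y)
```

Because the correction lives entirely in the null space of `A`, the network can only change components the data do not determine, which is what preserves the convergence guarantees of the underlying classical scheme.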
Related papers
- Towards A Unified PAC-Bayesian Framework for Norm-based Generalization Bounds [63.47271262149291]
We propose a unified framework for PAC-Bayesian norm-based generalization. The key to our approach is a sensitivity matrix that quantifies the sensitivity of the network outputs with respect to structured weight perturbations. We derive a family of generalization bounds that recover several existing PAC-Bayesian results as special cases.
arXiv Detail & Related papers (2026-01-13T00:42:22Z) - Graph Neural Regularizers for PDE Inverse Problems [62.49743146797144]
We present a framework for solving a broad class of ill-posed inverse problems governed by partial differential equations (PDEs). The forward problem is numerically solved using the finite element method (FEM). We employ physics-inspired graph neural networks as learned regularizers, providing a robust, interpretable, and generalizable alternative to standard approaches.
arXiv Detail & Related papers (2025-10-23T21:43:25Z) - Out-of-distribution robustness for multivariate analysis via causal regularisation [4.487663958743944]
We propose a regularisation strategy rooted in causality that ensures robustness against distribution shifts. Building upon the anchor regression framework, we demonstrate how incorporating a straightforward regularisation term into the loss function of classical algorithms yields this robustness. Our framework allows users to efficiently verify the compatibility of a loss function with the regularisation strategy.
arXiv Detail & Related papers (2024-03-04T09:21:10Z) - Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
arXiv Detail & Related papers (2023-09-17T12:06:04Z) - Convergent Data-driven Regularizations for CT Reconstruction [41.791026380947685]
In this work, we investigate simple, but still provably convergent approaches to learning linear regularization methods from data.
We prove that such approaches become convergent regularization methods as well as the fact that the reconstructions they provide are typically much smoother than the training data they were trained on.
arXiv Detail & Related papers (2022-12-14T17:34:03Z) - Deep unfolding as iterative regularization for imaging inverse problems [6.485466095579992]
Deep unfolding methods guide the design of deep neural networks (DNNs) through iterative algorithms.
We prove that the unfolded DNN converges stably to a solution.
We demonstrate with an example of MRI reconstruction that the proposed method outperforms conventional unfolding methods.
arXiv Detail & Related papers (2022-11-24T07:38:47Z) - On the generalization of learning algorithms that do not converge [54.122745736433856]
Generalization analyses of deep learning typically assume that the training converges to a fixed point.
Recent results indicate that in practice, the weights of deep neural networks optimized with gradient descent often oscillate indefinitely.
arXiv Detail & Related papers (2022-08-16T21:22:34Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the phase space's properties.
Our approach is either as competitive as or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z) - On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z) - Deep synthesis regularization of inverse problems [0.0]
In this paper, we introduce deep synthesis regularization (DESYRE), which uses neural networks as a nonlinear synthesis operator.
The proposed method makes it possible to exploit a key benefit of deep learning: adaptability to the available training data.
We present a strategy for constructing a synthesis network as part of an analysis-synthesis sequence together with an appropriate training strategy.
arXiv Detail & Related papers (2020-02-01T06:50:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.