Measurement-Consistent Networks via a Deep Implicit Layer for Solving
Inverse Problems
- URL: http://arxiv.org/abs/2211.03177v1
- Date: Sun, 6 Nov 2022 17:05:04 GMT
- Title: Measurement-Consistent Networks via a Deep Implicit Layer for Solving
Inverse Problems
- Authors: Rahul Mourya and João F. C. Mota
- Abstract summary: End-to-end deep neural networks (DNNs) have become state-of-the-art (SOTA) for solving inverse problems.
These networks are sensitive to minor variations in the training pipeline and often fail to reconstruct small but important details.
We propose a framework that transforms any DNN for inverse problems into a measurement-consistent one.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: End-to-end deep neural networks (DNNs) have become state-of-the-art (SOTA)
for solving inverse problems. Despite their outstanding performance, during
deployment, such networks are sensitive to minor variations in the training
pipeline and often fail to reconstruct small but important details that are
critical in medical imaging, astronomy, or defence. Such instabilities in DNNs
can be explained by the fact that they ignore the forward measurement model
during deployment, and thus fail to enforce consistency between their output
and the input measurements. To overcome this, we propose a framework that
transforms any DNN for inverse problems into a measurement-consistent one. This
is done by appending to it an implicit layer (or deep equilibrium network)
designed to solve a model-based optimization problem. The implicit layer
consists of a shallow learnable network that can be integrated into the
end-to-end training. Experiments on single-image super-resolution show that the
proposed framework leads to significant improvements in reconstruction quality
and robustness over the SOTA DNNs.
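As a rough illustration of the proposed construction, the following is a minimal PyTorch sketch, not the authors' implementation: the forward operator A (average pooling for 2x super-resolution), the shallow proximal network, the step size, and the stopping tolerance are all illustrative assumptions. A true deep equilibrium layer would also backpropagate through the fixed point via implicit differentiation; here the layer simply iterates to approximate convergence.
```python
# Minimal sketch (assumptions noted above; not the paper's code) of
# appending a measurement-consistency layer to a reconstruction DNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeasurementConsistencyLayer(nn.Module):
    def __init__(self, scale=2, step=1.0, iters=50, tol=1e-4):
        super().__init__()
        self.scale, self.step, self.iters, self.tol = scale, step, iters, tol
        # shallow learnable network playing the role of a proximal operator
        self.prox = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def A(self, x):   # forward measurement model: downsampling by averaging
        return F.avg_pool2d(x, self.scale)

    def At(self, r):  # adjoint of A: replicate each pixel, rescale
        return F.interpolate(r, scale_factor=self.scale) / self.scale**2

    def forward(self, x0, y):
        x = x0
        for _ in range(self.iters):          # fixed-point (DEQ-style) iteration
            grad = self.At(self.A(x) - y)    # gradient of ||A x - y||^2 / 2
            x_new = self.prox(x - self.step * grad)  # learned proximal step
            if (x_new - x).norm() / (x.norm() + 1e-8) < self.tol:
                return x_new
            x = x_new
        return x

# usage: enforce consistency on the output of any reconstruction DNN
y = torch.rand(1, 1, 32, 32)            # low-resolution measurements
x0 = F.interpolate(y, scale_factor=2)   # stand-in for a backbone's output
layer = MeasurementConsistencyLayer(scale=2)
x = layer(x0, y)                        # measurement-consistent estimate
```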
Related papers
- Point-aware Interaction and CNN-induced Refinement Network for RGB-D Salient Object Detection [95.84616822805664]
We introduce a CNN-assisted Transformer architecture and propose a novel RGB-D SOD network with Point-aware Interaction and CNN-induced Refinement.
To alleviate the block effect and detail-destruction problems naturally introduced by the Transformer, we design a CNN-induced refinement (CNNR) unit for content refinement and supplementation.
arXiv Detail & Related papers (2023-08-17T11:57:49Z)
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs), restricted to binary values, as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z)
- Characteristics-Informed Neural Networks for Forward and Inverse Hyperbolic Problems [0.0]
We propose characteristics-informed neural networks (CINN) for solving forward and inverse problems involving hyperbolic PDEs.
CINN encodes the characteristics of the PDE in a general-purpose deep neural network trained with the usual MSE data-fitting regression loss.
Preliminary results indicate that CINN is able to improve on the accuracy of the baseline PINN, while being nearly twice as fast to train and avoiding non-physical solutions.
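As a toy illustration of the underlying idea (hypothetical; the paper's actual architecture is not reproduced here), consider the 1D advection equation u_t + c u_x = 0, whose solutions are constant along the characteristics x - ct = const. Feeding a network the characteristic variable hard-codes that structure while training uses a plain MSE loss:
```python
# Toy sketch: parameterize u(x, t) = net(x - c*t) so the network is
# exactly constant along characteristics; fit with the usual MSE loss.
import torch
import torch.nn as nn

c = 1.0                                  # assumed wave speed
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def u(x, t):
    return net((x - c * t).unsqueeze(-1)).squeeze(-1)

# fit initial data u(x, 0) = sin(x) with MSE data-fitting regression
x = torch.linspace(-3.0, 3.0, 256)
target = torch.sin(x)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((u(x, torch.zeros_like(x)) - target) ** 2).mean()
    loss.backward()
    opt.step()
# by construction u(x, t) = u(x - c*t, 0): the profile advects exactly
```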
arXiv Detail & Related papers (2022-12-28T18:38:53Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have achieved remarkable performance in single-image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Deep network series for large-scale high-dynamic range imaging [2.3759432635713895]
We propose a new approach for large-scale high-dynamic range computational imaging.
Deep Neural Networks (DNNs) trained end-to-end can solve linear inverse imaging problems almost instantaneously.
Alternative Plug-and-Play approaches have proven effective at addressing high-dynamic-range challenges, but rely on highly iterative algorithms, in the spirit of the sketch below.
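A generic Plug-and-Play iteration (illustrative; not this paper's algorithm) alternates a gradient step on the data fidelity ||A x - y||^2 with an off-the-shelf denoiser; the many iterations show why such schemes are slow compared to a single network pass. The operators and box-filter "denoiser" below are stand-in assumptions.
```python
# Generic Plug-and-Play sketch: gradient step on the data fidelity,
# then a denoiser acting as the prior. In practice the denoiser would
# be a pretrained network or e.g. BM3D.
import torch
import torch.nn.functional as F

def pnp_reconstruct(y, A, At, denoise, step=1.0, iters=200):
    x = At(y)                         # crude initialization
    for _ in range(iters):            # highly iterative, hence slow
        x = x - step * At(A(x) - y)   # data-fidelity gradient step
        x = denoise(x)                # plug in the denoiser as a prior
    return x

# stand-ins: 2x downsampling operator and a 3x3 box-filter "denoiser"
A = lambda x: F.avg_pool2d(x, 2)
At = lambda r: F.interpolate(r, scale_factor=2) / 4
kernel = torch.full((1, 1, 3, 3), 1.0 / 9)
denoise = lambda x: F.conv2d(x, kernel, padding=1)

y = torch.rand(1, 1, 32, 32)
x = pnp_reconstruct(y, A, At, denoise)   # (1, 1, 64, 64) estimate
```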
arXiv Detail & Related papers (2022-10-28T11:13:41Z)
- An Optimal Time Variable Learning Framework for Deep Neural Networks [0.0]
The proposed framework can be applied to any of the existing networks such as ResNet, DenseNet or Fractional-DNN.
The proposed approach is applied to an ill-posed 3D Maxwell's equation.
arXiv Detail & Related papers (2022-04-18T19:29:03Z)
- Learning Deep Interleaved Networks with Asymmetric Co-Attention for Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA) which is attached at each interleaved node to model the feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
arXiv Detail & Related papers (2020-10-29T15:32:00Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize (see the sketch after this entry).
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
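The optimization difficulty mentioned above is commonly handled with a straight-through estimator. The following is a minimal PyTorch sketch of that standard technique, not tied to any single surveyed paper:
```python
# Weight binarization with a straight-through estimator: sign() is
# applied in the forward pass, while the backward pass copies the
# gradient through for inputs in [-1, 1] (sign's true gradient is
# zero almost everywhere, so plain backprop would stall).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)          # severe information loss: 1 bit

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # straight-through: pass the gradient where |w| <= 1, clip elsewhere
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

w = torch.randn(4, requires_grad=True)
loss = (BinarizeSTE.apply(w) * torch.arange(4.0)).sum()
loss.backward()
print(w.grad)   # nonzero despite sign's flat gradient
```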
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.