Fast and Robust Cascade Model for Multiple Degradation Single Image
Super-Resolution
- URL: http://arxiv.org/abs/2011.07068v1
- Date: Mon, 16 Nov 2020 18:59:49 GMT
- Title: Fast and Robust Cascade Model for Multiple Degradation Single Image
Super-Resolution
- Authors: Santiago López-Tapia and Nicolás Pérez de la Blanca
- Abstract summary: Single Image Super-Resolution (SISR) is one of the low-level computer vision problems that has received increased attention in the last few years.
Here, we propose a new formulation of the Convolutional Neural Network (CNN) cascade model.
A new densely connected CNN architecture is proposed where the output of each sub-module is restricted using some external knowledge.
- Score: 2.1574781022415364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single Image Super-Resolution (SISR) is one of the low-level computer vision
problems that has received increased attention in the last few years. Current
approaches are primarily based on harnessing the power of deep learning models
and optimization techniques to reverse the degradation model. Owing to the
hardness of the problem, mainly isotropic blurring or Gaussian kernels with
small anisotropic deformations have been considered so far. Here, we widen
this scenario by including large
non-Gaussian blurs that arise in real camera movements. Our approach leverages
the degradation model and proposes a new formulation of the Convolutional
Neural Network (CNN) cascade model, where each network sub-module is
constrained to solve a specific degradation: deblurring or upsampling. A new
densely connected CNN architecture is proposed where the output of each
sub-module is restricted using some external knowledge to focus it on its
specific task. As far as we know, this use of domain knowledge at the module
level is a novelty in SISR. To refine the final estimate, a last sub-module
corrects the residual errors propagated by the previous sub-modules. We
evaluate our model on three state-of-the-art (SOTA) SISR datasets and compare
the results with those of the SOTA models. The results show that our model is
the only one able to handle our wider set of deformations. Furthermore, our
model outperforms all current SOTA methods on a standard set of deformations.
In terms of computational load, our model is also more efficient than its two
closest competitors.
Although the approach is non-blind and requires an estimate of the blur
kernel, it is robust to errors in that estimate, making it a good alternative
to blind models.
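For background, the degradation model the abstract refers to is usually written, in the multiple-degradation SISR literature, as the observation model below (standard notation, not quoted from the paper):

```latex
% y: observed low-resolution image, x: latent high-resolution image,
% k: blur kernel (here possibly a large non-Gaussian motion-blur kernel),
% \downarrow_{s}: decimation by scale factor s, n: additive noise.
\[
  \mathbf{y} = (\mathbf{x} \ast \mathbf{k})\downarrow_{s} + \mathbf{n}
\]
```

The cascade idea, one sub-module per degradation plus a final stage that absorbs residual errors, can be sketched as follows. This is a minimal, hypothetical PyTorch rendering: module names, layer sizes, and the 21x21 kernel size are assumptions, and the dense connections and the external-knowledge constraints on each sub-module's output (e.g. intermediate supervision) are omitted; it is not the authors' implementation.

```python
# Minimal sketch of a kernel-conditioned cascade for multiple-degradation SISR.
# Hypothetical module names and sizes; NOT the architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubModule(nn.Module):
    """Small CNN block conditioned on the estimated blur kernel."""
    def __init__(self, in_ch, out_ch, kernel_feat=64):
        super().__init__()
        self.kernel_proj = nn.Linear(21 * 21, kernel_feat)  # flattened 21x21 kernel (assumed size)
        self.body = nn.Sequential(
            nn.Conv2d(in_ch + kernel_feat, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x, kernel):
        k = self.kernel_proj(kernel.flatten(1))                 # (B, kernel_feat)
        k = k[:, :, None, None].expand(-1, -1, *x.shape[-2:])   # broadcast over spatial dims
        return self.body(torch.cat([x, k], dim=1))

class CascadeSR(nn.Module):
    """Deblur -> upsample -> residual-correction cascade (illustrative only)."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.deblur = SubModule(3, 3)    # constrained to remove blur at low resolution
        self.upsample = SubModule(3, 3)  # constrained to correct the bicubic upsampling
        self.refine = SubModule(6, 3)    # absorbs residual errors of the previous stages

    def forward(self, lr, kernel):
        deblurred = lr + self.deblur(lr, kernel)                 # residual deblurring
        up = F.interpolate(deblurred, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        sr = up + self.upsample(up, kernel)                      # residual upsampling correction
        return sr + self.refine(torch.cat([sr, up], dim=1), kernel)  # final residual fix

# Usage: CascadeSR(scale=4)(lr, k_est) with lr of shape (B, 3, H, W) and an
# estimated blur kernel k_est of shape (B, 21, 21); the abstract's point is
# that the cascade tolerates errors in k_est.
```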
Related papers
- Deep learning for model correction of dynamical systems with data scarcity [0.0]
We present a deep learning framework for correcting existing dynamical system models utilizing only a scarce high-fidelity data set.
We focus on the case when the amount of high-fidelity data is so small that most of the existing data driven modeling methods cannot be applied.
arXiv Detail & Related papers (2024-10-23T14:33:11Z) - SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z) - Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z) - A-SDM: Accelerating Stable Diffusion through Redundancy Removal and
Performance Optimization [54.113083217869516]
In this work, we first explore the computationally redundant parts of the network.
We then prune the redundant blocks of the model while maintaining network performance.
Thirdly, we propose a global-regional interactive (GRI) attention mechanism to speed up the computationally intensive attention part.
arXiv Detail & Related papers (2023-12-24T15:37:47Z) - A Deep Dive into the Connections Between the Renormalization Group and
Deep Learning in the Ising Model [0.0]
The renormalization group (RG) is an essential technique in statistical physics and quantum field theory.
We develop extensive renormalization techniques for the 1D and 2D Ising model to provide a baseline for comparison.
For the 2D Ising model, we successfully generated Ising model samples using the Wolff algorithm, and performed the group flow using a quasi-deterministic method.
arXiv Detail & Related papers (2023-08-21T22:50:54Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Git Re-Basin: Merging Models modulo Permutation Symmetries [3.5450828190071655]
We show how simple algorithms can be used to fit large networks in practice.
We present the first (to our knowledge) demonstration of zero-barrier mode connectivity between independently trained models.
We also discuss shortcomings in the linear mode connectivity hypothesis.
arXiv Detail & Related papers (2022-09-11T10:44:27Z) - FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z) - Accurate and Lightweight Image Super-Resolution with Model-Guided Deep
Unfolding Network [63.69237156340457]
We present and advocate an explainable approach toward SISR named model-guided deep unfolding network (MoG-DUN).
MoG-DUN is accurate (producing fewer aliasing artifacts), computationally efficient (with reduced model parameters), and versatile (capable of handling multiple degradations).
The superiority of the proposed MoG-DUN method over existing state-of-the-art image methods, including RCAN, SRDNF, and SRFBN, is substantiated by extensive experiments on several popular datasets and various degradation scenarios.
arXiv Detail & Related papers (2020-09-14T08:23:37Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)