Adaptive Loss Function for Super Resolution Neural Networks Using Convex Optimization Techniques
- URL: http://arxiv.org/abs/2001.07766v1
- Date: Tue, 21 Jan 2020 20:31:10 GMT
- Title: Adaptive Loss Function for Super Resolution Neural Networks Using Convex Optimization Techniques
- Authors: Seyed Mehdi Ayyoubzadeh, Xiaolin Wu
- Abstract summary: The Single Image Super-Resolution (SISR) task refers to learning a mapping from low-resolution images to the corresponding high-resolution ones.
CNNs are encouraged to learn high-frequency components of the images as well as low-frequency components.
We have shown that the proposed method can recover fine details of images and is stable during training.
- Score: 24.582559317893274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Single Image Super-Resolution (SISR) task refers to learning a
mapping from low-resolution images to the corresponding high-resolution ones.
This task is known to be extremely difficult since it is an ill-posed problem.
Recently, Convolutional Neural Networks (CNNs) have achieved state-of-the-art
performance on SISR. However, the images produced by CNNs tend to lack fine
details. Generative Adversarial Networks (GANs) aim to solve this issue and
recover sharp details. Nevertheless, GANs are notoriously difficult to train,
and they generate artifacts in the high-resolution images. In this paper, we
propose a method in which CNNs align images in spaces beyond the pixel space
alone. Such a space is designed using convex optimization techniques. CNNs are
thereby encouraged to learn the high-frequency components of the images as
well as the low-frequency components. We show that the proposed method can
recover fine details of the images and is stable during training.
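The paper's transform space is derived via convex optimization and is not reproduced here. As a rough illustration of the general idea only, the sketch below pairs a pixel-space MSE with an MSE computed in a fixed high-pass (Laplacian) space, so that errors in fine detail are penalized explicitly rather than averaged away:

```python
import numpy as np

def high_pass(img):
    # Discrete Laplacian as a simple, fixed high-frequency extractor
    # (a stand-in for the convex-optimization-designed transform).
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (4 * img[1:-1, 1:-1]
                       - img[:-2, 1:-1] - img[2:, 1:-1]
                       - img[1:-1, :-2] - img[1:-1, 2:])
    return out

def transform_space_loss(pred, target, alpha=0.5):
    # Pixel-space MSE plus MSE in the high-frequency space, so the
    # network is penalized for missing fine detail as well as for
    # errors in coarse structure.
    pixel = np.mean((pred - target) ** 2)
    freq = np.mean((high_pass(pred) - high_pass(target)) ** 2)
    return (1 - alpha) * pixel + alpha * freq
```

Here `alpha` (a hypothetical weighting, not from the paper) trades off fidelity in the pixel space against fidelity in the high-frequency space.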
Related papers
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- T-former: An Efficient Transformer for Image Inpainting [50.43302925662507]
A class of attention-based network architectures, called transformers, has shown strong performance in natural language processing.
In this paper, we design a novel attention linearly related to the resolution according to Taylor expansion, and based on this attention, a network called $T$-former is designed for image inpainting.
Experiments on several benchmark datasets demonstrate that our proposed method achieves state-of-the-art accuracy while maintaining a relatively low number of parameters and computational complexity.
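The T-former attention itself is not reproduced here; the sketch below only illustrates the underlying trick of replacing exp(q·k) with its first-order Taylor approximation 1 + q·k, which lets key/value statistics be pooled once and reused for every query, making the cost linear in sequence length:

```python
import numpy as np

def taylor_linear_attention(Q, K, V):
    # Approximate softmax attention via exp(q.k) ~ 1 + q.k.
    # Q and K are L2-normalized so 1 + q.k stays non-negative.
    Q = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    K = K / np.linalg.norm(K, axis=-1, keepdims=True)
    n = K.shape[0]
    kv = K.T @ V               # (d, d_v): shared across all queries
    k_sum = K.sum(axis=0)      # (d,)
    v_sum = V.sum(axis=0)      # (d_v,)
    num = v_sum + Q @ kv       # numerator of the weighted average
    den = n + Q @ k_sum        # normalizer (row sums of 1 + QK^T)
    return num / den[:, None]
```

Because `kv`, `k_sum`, and `v_sum` are computed once, the cost is O(n·d·d_v) instead of the O(n²) of explicit attention, which is what makes resolution-linear attention feasible for inpainting.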
arXiv Detail & Related papers (2023-05-12T04:10:42Z)
- Improved Super Resolution of MR Images Using CNNs and Vision Transformers [5.6512908295414]
Vision transformers (ViTs) learn global context that helps in generating superior-quality HR images.
We combine local information of CNNs and global information from ViTs for image super resolution and output super resolved images.
arXiv Detail & Related papers (2022-07-24T14:01:52Z)
- Image Super-resolution with An Enhanced Group Convolutional Neural Network [102.2483249598621]
CNNs with strong learning ability are widely used to address the super-resolution problem.
We present an enhanced super-resolution group CNN (ESRGCNN) with a shallow architecture.
Experiments report that our ESRGCNN surpasses state-of-the-art methods in SISR performance, complexity, execution speed, image quality evaluation, and visual effect.
arXiv Detail & Related papers (2022-05-29T00:34:25Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Deep Unrolled Network for Video Super-Resolution [0.45880283710344055]
Video super-resolution (VSR) aims to reconstruct a sequence of high-resolution (HR) images from their corresponding low-resolution (LR) versions.
Traditionally, solving a VSR problem has been based on iterative algorithms that exploit prior knowledge on image formation and assumptions on the motion.
Deep learning (DL) algorithms can efficiently learn spatial patterns from large collections of images.
We propose a new VSR neural network based on unrolled optimization techniques and discuss its performance.
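As a rough sketch of what "unrolled optimization" means here (the degradation model and step size below are assumptions, not the paper's design): each gradient-descent iteration on the data-fidelity term ||A(hr) - lr||² becomes one layer, and a trained unrolled network would learn the per-layer step sizes and a regularizer:

```python
import numpy as np

def blur(x):
    # 3x3 box blur with zero padding; a stand-in degradation kernel.
    # It is symmetric, so it serves as its own adjoint in the gradient.
    p = np.pad(x, 1)
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def downsample(x, s=2):
    return x[::s, ::s]

def upsample(x, s=2):
    # Zero-insertion upsampling: the adjoint of decimation.
    out = np.zeros((x.shape[0] * s, x.shape[1] * s))
    out[::s, ::s] = x
    return out

def forward(x, s=2):
    # Hypothetical degradation model: blur followed by decimation.
    return downsample(blur(x), s)

def unrolled_sr(lr, steps=8, step=0.5, s=2):
    # Each loop iteration is one gradient step on ||forward(hr) - lr||^2;
    # unrolling turns each iteration into a network layer with a
    # learnable step size and learned regularizer.
    hr = np.repeat(np.repeat(lr, s, axis=0), s, axis=1)  # naive init
    for _ in range(steps):
        residual = forward(hr, s) - lr
        hr = hr - step * blur(upsample(residual, s))
    return hr
```

The gradient uses the adjoint of the forward operator (blur followed by zero-insertion upsampling), mirroring how unrolled networks bake the imaging model into the architecture.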
arXiv Detail & Related papers (2021-02-23T14:35:09Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
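A minimal sketch of one way such a sparsity constraint can be enforced structurally: the soft-thresholding (shrinkage) operator from sparse coding, applied to hidden activations. The paper's specific mechanism may differ; the layer name and threshold below are illustrative:

```python
import numpy as np

def soft_threshold(a, lam):
    # Proximal operator of the L1 norm: shrinks small activations to
    # exactly zero, structurally enforcing sparsity on hidden neurons.
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_layer(x, W, lam=0.5):
    # Hypothetical hidden layer whose pre-activations pass through the
    # shrinkage operator, as in sparse-coding-inspired networks.
    return soft_threshold(W @ x, lam)
```

Unlike a ReLU, soft-thresholding zeros out small responses of either sign, so the fraction of inactive neurons grows with the threshold `lam`.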
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
- Self-supervised Fine-tuning for Correcting Super-Resolution Convolutional Neural Networks [17.922507191213494]
We show that one can avoid training and instead correct SR results with a fully self-supervised fine-tuning approach.
We apply our fine-tuning algorithm on multiple image and video SR CNNs and show that it can successfully correct for a sub-optimal SR solution.
arXiv Detail & Related papers (2019-12-30T11:02:58Z)