Deep Amended Gradient Descent for Efficient Spectral Reconstruction from
Single RGB Images
- URL: http://arxiv.org/abs/2108.05547v1
- Date: Thu, 12 Aug 2021 05:54:09 GMT
- Title: Deep Amended Gradient Descent for Efficient Spectral Reconstruction from
Single RGB Images
- Authors: Zhiyu Zhu, Hui Liu, Junhui Hou, Sen Jia, and Qingfu Zhang
- Abstract summary: We propose a compact, efficient, and end-to-end learning-based framework, namely AGD-Net.
We first formulate the problem explicitly based on the classic gradient descent algorithm.
AGD-Net can improve the reconstruction quality by more than 1.0 dB on average.
- Score: 42.26124628784883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the problem of recovering hyperspectral (HS) images
from single RGB images. To tackle such a severely ill-posed problem, we propose
a physically-interpretable, compact, efficient, and end-to-end learning-based
framework, namely AGD-Net. Precisely, by taking advantage of the imaging
process, we first formulate the problem explicitly based on the classic
gradient descent algorithm. Then, we design a lightweight neural network with a
multi-stage architecture to mimic the formed amended gradient descent process,
in which efficient convolution and novel spectral zero-mean normalization are
proposed to effectively extract spatial-spectral features for regressing an
initialization, a basic gradient, and an incremental gradient. Besides, based
on the approximate low-rank property of HS images, we propose a novel rank loss
to promote the similarity between the global structures of reconstructed and
ground-truth HS images, which is optimized with our singular value weighting
strategy during training. Moreover, after one-time training, a single AGD-Net
can flexibly handle reconstruction under various spectral response functions.
Extensive experiments over three commonly-used benchmark datasets demonstrate
that AGD-Net improves reconstruction quality by more than 1.0 dB on average
while using 67$\times$ fewer parameters and 32$\times$ fewer FLOPs than
state-of-the-art methods. The code will be publicly available at
https://github.com/zbzhzhy/GD-Net.
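The abstract describes two key ingredients: a multi-stage network that unrolls an amended gradient descent update (an initialization, a basic gradient, and a learned incremental gradient per stage), and a rank loss that compares weighted singular values of the reconstructed and ground-truth HS images. The sketch below illustrates both ideas in plain NumPy. It is a schematic, not the paper's implementation: the learned sub-networks are replaced with closed-form stand-ins, and the decaying singular-value weights are an illustrative assumption.

```python
import numpy as np

def amended_gd_reconstruct(y, Phi, num_stages=3, step=0.5):
    """Schematic unrolled amended gradient descent for y = Phi @ x.

    In AGD-Net the initialization, the basic gradient, and the
    incremental (amendment) gradient are each regressed by a small
    network; here closed-form stand-ins replace them so the update
    rule itself can be executed end to end.
    """
    # Stand-in for the learned initialization: back-project the RGB
    # measurement through the (transposed) spectral response matrix.
    x = Phi.T @ y
    for _ in range(num_stages):
        # Basic gradient of the data-fidelity term ||Phi x - y||^2.
        basic = Phi.T @ (Phi @ x - y)
        # Stand-in for the learned incremental gradient; in the paper
        # this correction is predicted by the stage's sub-network.
        incremental = 0.1 * basic
        x = x - step * (basic + incremental)
    return x

def weighted_rank_loss(X_rec, X_gt):
    """Schematic rank loss: weighted gap between singular values.

    Emphasizing the leading singular values via 1/(i+1) weights is an
    assumption for illustration, not the paper's exact weighting.
    """
    s_rec = np.linalg.svd(X_rec, compute_uv=False)
    s_gt = np.linalg.svd(X_gt, compute_uv=False)
    w = 1.0 / (np.arange(len(s_gt)) + 1.0)  # assumed decaying weights
    return float(np.sum(w * np.abs(s_rec - s_gt)))
```

With a row-normalized random 3x31 response matrix (3 RGB channels, 31 spectral bands), a few unrolled stages reduce the data-fidelity residual relative to the back-projected initialization, and the rank loss vanishes when reconstruction and ground truth coincide.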
Related papers
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Nest-DGIL: Nesterov-optimized Deep Geometric Incremental Learning for CS
Image Reconstruction [9.54126979075279]
We propose a deep geometric incremental learning framework based on the second Nesterov proximal gradient optimization.
Our reconstruction framework is decomposed into four modules including general linear reconstruction, cascade geometric incremental restoration, Nesterov acceleration, and post-processing.
arXiv Detail & Related papers (2023-08-06T15:47:03Z) - Deep Generalized Unfolding Networks for Image Restoration [16.943609020362395]
We propose a Deep Generalized Unfolding Network (DGUNet) for image restoration.
We integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm.
Our method is superior in terms of state-of-the-art performance, interpretability, and generalizability.
arXiv Detail & Related papers (2022-04-28T08:39:39Z)
- DeepRLS: A Recurrent Network Architecture with Least Squares Implicit
Layers for Non-blind Image Deconvolution [15.986942312624]
We study the problem of non-blind image deconvolution.
We propose a novel recurrent network architecture that leads to very competitive restoration results of high image quality.
arXiv Detail & Related papers (2021-12-10T13:16:51Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single
Image Super-Resolution and Beyond [75.37541439447314]
Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version.
This paper proposes a linearly-assembled pixel-adaptive regression network (LAPAR) to strike a sweet spot of deep model complexity and resulting SISR quality.
arXiv Detail & Related papers (2021-05-21T15:47:18Z)
- Progressively Guided Alternate Refinement Network for RGB-D Salient
Object Detection [63.18846475183332]
We aim to develop an efficient and compact deep network for RGB-D salient object detection.
We propose a progressively guided alternate refinement network to refine it.
Our model outperforms existing state-of-the-art approaches by a large margin.
arXiv Detail & Related papers (2020-08-17T02:55:06Z)
- A deep primal-dual proximal network for image restoration [8.797434238081372]
We design a deep network, named DeepPDNet, built from primal-dual iterations associated with the minimization of a standard penalized likelihood with an analysis prior.
Two learning strategies, "Full learning" and "Partial learning", are proposed; the first is the most numerically efficient.
Extensive results show that the proposed DeepPDNet demonstrates excellent performance on the MNIST and the more complex BSD68, BSD100, and SET14 datasets for image restoration and single image super-resolution tasks.
arXiv Detail & Related papers (2020-07-02T08:29:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.