Semantic-embedded Unsupervised Spectral Reconstruction from Single RGB
Images in the Wild
- URL: http://arxiv.org/abs/2108.06659v2
- Date: Tue, 17 Aug 2021 03:36:48 GMT
- Title: Semantic-embedded Unsupervised Spectral Reconstruction from Single RGB
Images in the Wild
- Authors: Zhiyu Zhu, Hui Liu, Junhui Hou, Huanqiang Zeng, Qingfu Zhang
- Abstract summary: We propose a new lightweight and end-to-end learning-based framework to tackle this challenge.
We progressively spread the differences between input RGB images and re-projected RGB images from recovered HS images via effective camera spectral response function estimation.
Our method significantly outperforms state-of-the-art unsupervised methods and even exceeds the latest supervised method under some settings.
- Score: 48.44194221801609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the problem of reconstructing hyperspectral (HS)
images from single RGB images captured by commercial cameras, \textbf{without}
using paired HS and RGB images during training. To tackle this challenge, we
propose a new lightweight and end-to-end learning-based framework.
Specifically, on the basis of the intrinsic imaging degradation model of RGB
images from HS images, we progressively spread the differences between input
RGB images and re-projected RGB images from recovered HS images via effective
unsupervised camera spectral response function estimation. To enable the
learning without paired ground-truth HS images as supervision, we adopt the
adversarial learning manner and boost it with a simple yet effective
$\mathcal{L}_1$ gradient clipping scheme. Besides, we embed the semantic
information of input RGB images to locally regularize the unsupervised
learning, which is expected to promote pixels with identical semantics to have
consistent spectral signatures. In addition to conducting quantitative
experiments over two widely-used datasets for HS image reconstruction from
synthetic RGB images, we also evaluate our method by applying recovered HS
images from real RGB images to HS-based visual tracking. Extensive results show
that our method significantly outperforms state-of-the-art unsupervised methods
and even exceeds the latest supervised method under some settings. The source
code is publicly available at
https://github.com/zbzhzhy/Unsupervised-Spectral-Reconstruction.
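For a concrete picture of the pieces named in the abstract, the PyTorch sketch below illustrates (i) re-projecting a recovered HS image back to RGB through an estimated camera spectral response function (CSRF) and penalising the difference to the input RGB, (ii) a semantic consistency term that pushes pixels sharing a semantic label toward similar spectral signatures, and (iii) one plausible reading of the "$\mathcal{L}_1$ gradient clipping" used to stabilise the adversarial training. All tensor shapes, parameterisations (e.g. the softmax-normalised CSRF), and function names are illustrative assumptions rather than the authors' implementation; the adversarial loss itself is a standard GAN objective and is not repeated here. The linked repository contains the actual code.

```python
# Hedged sketch of the three loss ingredients described in the abstract.
# Shapes, names and hyper-parameters are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def reprojection_loss(hs, rgb, csrf_logits):
    """RGB re-projection consistency via an estimated CSRF.

    hs:          recovered hyperspectral image, shape (N, B, H, W)
    rgb:         input RGB image,               shape (N, 3, H, W)
    csrf_logits: learnable CSRF parameters,     shape (3, B)
    """
    # One plausible parameterisation: each RGB channel's response curve is
    # non-negative and sums to one over the B spectral bands.
    csrf = torch.softmax(csrf_logits, dim=1)            # (3, B)
    rgb_hat = torch.einsum('cb,nbhw->nchw', csrf, hs)   # re-projected RGB
    return F.l1_loss(rgb_hat, rgb)

def semantic_consistency_loss(hs, labels, num_classes):
    """Encourage pixels with the same semantic label to share similar spectra
    by penalising deviation from the per-class mean spectrum.

    hs:     recovered hyperspectral image, shape (N, B, H, W)
    labels: integer (long) semantic map, shape (N, H, W), e.g. produced by an
            off-the-shelf segmentation network run on the input RGB image
    """
    n, b, h, w = hs.shape
    spectra = hs.permute(0, 2, 3, 1).reshape(-1, b)         # (N*H*W, B)
    ids = labels.reshape(-1)                                 # (N*H*W,)
    one_hot = F.one_hot(ids, num_classes).float()            # (N*H*W, C)
    counts = one_hot.sum(dim=0).clamp(min=1.0)               # pixels per class
    class_mean = (one_hot.t() @ spectra) / counts[:, None]   # (C, B)
    return F.l1_loss(spectra, class_mean[ids])

def clip_grad_l1_(parameters, max_l1_norm):
    """One plausible reading of the abstract's 'L1 gradient clipping':
    rescale gradients in place so their total L1 norm stays bounded."""
    grads = [p.grad for p in parameters if p.grad is not None]
    total = sum(g.abs().sum() for g in grads)
    if total > max_l1_norm:
        scale = max_l1_norm / (total + 1e-12)
        for g in grads:
            g.mul_(scale)
```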
Related papers
- A Learnable Color Correction Matrix for RAW Reconstruction [19.394856071610604]
We introduce a learnable color correction matrix (CCM) to approximate the complex inverse image signal processor (ISP).
Experimental results demonstrate that simulated RAW (simRAW) images generated by our method provide performance improvements equivalent to those produced by more complex inverse ISP methods (a minimal sketch of the CCM idea appears after this list).
arXiv Detail & Related papers (2024-09-04T07:46:42Z)
- Modular Anti-noise Deep Learning Network for Robotic Grasp Detection Based on RGB Images [2.759223695383734]
This paper introduces an interesting approach to detect grasping pose from a single RGB image.
We propose a modular learning network augmented with grasp detection and semantic segmentation.
We demonstrate the feasibility and accuracy of our proposed approach through practical experiments and evaluations.
arXiv Detail & Related papers (2023-10-30T02:01:49Z)
- Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z)
- Learning to Recover Spectral Reflectance from RGB Images [20.260831758913902]
Spectral reflectance recovery (SRR) from RGB images is challenging and costly.
Most existing approaches are trained on synthetic images and utilize the same parameters for all unseen testing images.
We propose a self-supervised meta-auxiliary learning (MAXL) strategy that fine-tunes the well-trained network parameters with each testing image to combine external with internal information.
arXiv Detail & Related papers (2023-04-04T23:27:02Z)
- Raw Image Reconstruction with Learned Compact Metadata [61.62454853089346]
We propose a novel framework to learn a compact representation in the latent space serving as the metadata in an end-to-end manner.
We show how the proposed raw image compression scheme can adaptively allocate more bits to image regions that are important from a global perspective.
arXiv Detail & Related papers (2023-02-25T05:29:45Z)
- Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images [89.81919625224103]
Training deep models for RGB-D salient object detection (SOD) often requires a large number of labeled RGB-D images.
We present a Dual-Semi RGB-D Salient Object Detection Network (DS-Net) to leverage unlabeled RGB images for boosting RGB-D saliency detection.
arXiv Detail & Related papers (2022-01-01T03:02:27Z)
- Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision [76.41657124981549]
This paper presents a joint learning model for image alignment and RAW-to-sRGB mapping.
Experiments show that our method performs favorably against state-of-the-arts on ZRR and SR-RAW datasets.
arXiv Detail & Related papers (2021-08-18T12:41:36Z)
- Hierarchical Regression Network for Spectral Reconstruction from RGB Images [21.551899202524904]
We propose a 4-level Hierarchical Regression Network (HRNet) with PixelShuffle layer as inter-level interaction.
We evaluate the proposed HRNet against other architectures and techniques by participating in the NTIRE 2020 Challenge on Spectral Reconstruction from RGB Images.
arXiv Detail & Related papers (2020-05-10T16:06:11Z)
- NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image [61.71186808848108]
This paper reviews the second challenge on spectral reconstruction from RGB images.
The task is to recover whole-scene hyperspectral (HS) information from a 3-channel RGB image.
A new, larger-than-ever, natural hyperspectral image data set is presented.
arXiv Detail & Related papers (2020-05-07T12:23:56Z)
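As referenced in the first related paper above, the learnable color correction matrix can be pictured as a single 3x3 matrix applied to every pixel's colour vector. The sketch below is an assumption-laden illustration of that idea only (class name, initialisation, and usage are hypothetical); it is not the paper's inverse-ISP pipeline.

```python
# Hedged sketch: a learnable 3x3 color correction matrix (CCM) applied
# per pixel, as a minimal stand-in for one stage of an inverse ISP when
# generating simRAW-style training data.
import torch
import torch.nn as nn

class LearnableCCM(nn.Module):
    def __init__(self):
        super().__init__()
        # Initialise as the identity so training starts from a no-op mapping.
        self.ccm = nn.Parameter(torch.eye(3))

    def forward(self, rgb):
        # rgb: (N, 3, H, W); multiply every pixel's colour vector by the CCM.
        return torch.einsum('ij,njhw->nihw', self.ccm, rgb)

# Example usage: map sRGB-like images toward a simulated RAW colour space;
# in practice the matrix would be trained end-to-end against real RAW targets.
ccm = LearnableCCM()
srgb = torch.rand(2, 3, 64, 64)
sim_raw_rgb = ccm(srgb)
```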
This list is automatically generated from the titles and abstracts of the papers on this site.