Boosting Image Super-Resolution Via Fusion of Complementary Information
Captured by Multi-Modal Sensors
- URL: http://arxiv.org/abs/2012.03417v1
- Date: Mon, 7 Dec 2020 02:15:28 GMT
- Authors: Fan Wang, Jiangxin Yang, Yanlong Cao, Yanpeng Cao, and Michael Ying
Yang
- Abstract summary: Image Super-Resolution (SR) provides a promising technique to enhance the image quality of low-resolution optical sensors.
In this paper, we attempt to leverage complementary information from a low-cost channel (visible/depth) to boost image quality of an expensive channel (thermal) using fewer parameters.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image Super-Resolution (SR) provides a promising technique to enhance the
image quality of low-resolution optical sensors, facilitating better-performing
target detection and autonomous navigation in a wide range of robotics
applications. Notably, state-of-the-art SR methods are typically trained and
tested on single-channel inputs, overlooking the fact that the
cost of capturing high-resolution images in different spectral domains varies
significantly. In this paper, we attempt to leverage complementary information
from a low-cost channel (visible/depth) to boost image quality of an expensive
channel (thermal) using fewer parameters. To this end, we first present an
effective method to virtually generate pixel-wise aligned visible and thermal
images based on real-time 3D reconstruction of multi-modal data captured at
various viewpoints. Then, we design a feature-level multispectral fusion
residual network model to perform high-accuracy SR of thermal images by
adaptively integrating co-occurrence features present in multispectral
images. Experimental results demonstrate that this new approach can effectively
alleviate the ill-posed inverse problem of image SR by taking into account
complementary information from an additional low-cost channel, significantly
outperforming state-of-the-art SR approaches in terms of both accuracy and
efficiency.
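The core idea of the abstract, refining a coarse thermal estimate with a residual correction weighted by detail from the aligned visible channel, can be illustrated with a minimal NumPy sketch. This is a hand-crafted toy under stated assumptions, not the paper's learned fusion residual network: `edge_magnitude`, the fusion weight, and `alpha` are illustrative stand-ins for features and weights that the actual method learns.

```python
import numpy as np

def upsample_nearest(img, scale):
    """Nearest-neighbor upsampling (stand-in for a learned upsampler)."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def edge_magnitude(img):
    """Central-difference gradient magnitude: a cheap stand-in for the
    co-occurrence features the paper extracts with a CNN."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

def fuse_thermal_with_visible(lr_thermal, hr_visible, scale=4, alpha=0.3):
    """Toy feature-level fusion with a residual connection: the coarse
    thermal estimate is corrected only where the aligned visible
    channel carries edge detail (adaptive integration)."""
    base = upsample_nearest(lr_thermal, scale)       # coarse SR estimate
    edges = edge_magnitude(hr_visible)               # complementary detail cue
    weight = edges / (edges.max() + 1e-8)            # adaptive fusion weight in [0, 1]
    residual = alpha * weight * (hr_visible - base)  # visible-guided correction
    return base + residual                           # residual fusion
```

Where the visible channel is flat (no edges), the correction vanishes and the output reduces to the plain upsampled thermal image, which mirrors the intent of adaptive fusion: borrow detail only where the cheap channel actually has some.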
Related papers
- Effective Diffusion Transformer Architecture for Image Super-Resolution [63.254644431016345]
We design an effective diffusion transformer for image super-resolution (DiT-SR).
In practice, DiT-SR leverages an overall U-shaped architecture, and adopts a uniform isotropic design for all the transformer blocks.
We analyze the limitation of the widely used AdaLN, and present a frequency-adaptive time-step conditioning module.
arXiv Detail & Related papers (2024-09-29T07:14:16Z)
- Semantic Guided Large Scale Factor Remote Sensing Image Super-resolution with Generative Diffusion Prior [13.148815217684277]
Large scale factor super-resolution (SR) algorithms are vital for maximizing the utilization of low-resolution (LR) satellite data captured from orbit.
Existing methods confront challenges in recovering SR images with clear textures and correct ground objects.
We introduce a novel framework, the Semantic Guided Diffusion Model (SGDM), designed for large scale factor remote sensing image super-resolution.
arXiv Detail & Related papers (2024-05-11T16:06:16Z)
- Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution [5.704360536038803]
Single image super-resolution (SISR) has experienced significant advancements, primarily driven by deep convolutional networks.
Traditional networks are limited to upscaling images by a fixed scale, motivating the use of implicit neural functions for generating arbitrarily scaled images.
We introduce a novel and efficient framework, the Mixture of Experts Implicit Super-Resolution (MoEISR), which enables super-resolution at arbitrary scales.
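The idea of replacing a fixed upscaling layer with a continuous image function queried at arbitrary coordinates can be sketched in a few lines. Here the "implicit function" is plain bilinear interpolation; MoEISR would instead route each query to a learned expert MLP. The function name and structure are illustrative assumptions, not the paper's code.

```python
import numpy as np

def implicit_sr(lr, scale):
    """Sample a continuous image function at an arbitrary output grid.
    Bilinear interpolation stands in for a learned implicit function."""
    h, w = lr.shape
    out_h, out_w = int(h * scale), int(w * scale)
    # continuous coordinates of output pixel centers in input space
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # gather the four neighbors of every query point
    tl = lr[y0][:, x0]
    tr = lr[y0][:, x0 + 1]
    bl = lr[y0 + 1][:, x0]
    br = lr[y0 + 1][:, x0 + 1]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy
```

Because the scale only changes the set of coordinates being queried, the same function serves any scale factor, including non-integer ones, which is the property arbitrary-scale SR methods exploit.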
arXiv Detail & Related papers (2023-11-20T05:34:36Z)
- Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
arXiv Detail & Related papers (2023-09-03T08:07:26Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Learning from Multi-Perception Features for Real-World Image Super-resolution [87.71135803794519]
We propose a novel SR method called MPF-Net that leverages multiple perceptual features of input images.
Our method incorporates a Multi-Perception Feature Extraction (MPFE) module to extract diverse perceptual information.
We also introduce a contrastive regularization term (CR) that improves the model's learning capability.
arXiv Detail & Related papers (2023-05-26T07:35:49Z)
- DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image Super-Resolution [15.694407977871341]
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation.
Existing methods rely on the heavy SR models to enhance low-resolution (LR) images of different degradation levels.
We propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR.
arXiv Detail & Related papers (2022-12-15T04:34:57Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks [1.3764085113103222]
The presented research proposes a novel residual attention model (RAMS) that efficiently tackles the multi-image super-resolution task.
We introduce the mechanism of visual feature attention with 3D convolutions in order to obtain an aware data fusion and information extraction.
Our representation learning network makes extensive use of nested residual connections to let redundant low-frequency signals flow through.
arXiv Detail & Related papers (2020-07-06T22:54:02Z)
- Hyperspectral Image Super-resolution via Deep Spatio-spectral Convolutional Neural Networks [32.10057746890683]
We propose a simple and efficient architecture for deep convolutional neural networks to fuse a low-resolution hyperspectral image and a high-resolution multispectral image.
The proposed network architecture achieves best performance compared with recent state-of-the-art hyperspectral image super-resolution approaches.
arXiv Detail & Related papers (2020-05-29T05:56:50Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.