Generation of the NIR spectral Band for Satellite Images with
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2106.07020v1
- Date: Sun, 13 Jun 2021 15:14:57 GMT
- Title: Generation of the NIR spectral Band for Satellite Images with
Convolutional Neural Networks
- Authors: Svetlana Illarionova, Dmitrii Shadrin, Alexey Trekin, Vladimir
Ignatiev, Ivan Oseledets
- Abstract summary: Deep neural networks allow generating artificial spectral information, such as for the image colorization problem.
We study the generative adversarial network (GAN) approach in the task of the NIR band generation using just RGB channels of high-resolution satellite imagery.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The near-infrared (NIR) spectral range (from 780 to 2500 nm) of the
multispectral remote sensing imagery provides vital information for the
landcover classification, especially concerning the vegetation assessment.
Despite the usefulness of NIR, common RGB is not always accompanied by it.
Modern achievements in image processing via deep neural networks allow
generating artificial spectral information, such as for the image colorization
problem. In this research, we aim to investigate whether this approach can
produce not only visually similar images but also an artificial spectral band
that can improve the performance of computer vision algorithms for solving
remote sensing tasks. We study the generative adversarial network (GAN)
approach in the task of the NIR band generation using just RGB channels of
high-resolution satellite imagery. We evaluate the impact of a generated
channel on the model performance for solving the forest segmentation task. Our
results show an increase in model accuracy when using the generated NIR band
compared to the baseline model that uses only RGB (F1-scores of 0.947 and
0.914, respectively). The conducted study demonstrates the advantages of
generating the extra band and applying it to practical problems, reducing the
required amount of labeled data.
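The abstract outlines the pipeline but ships no code. A minimal PyTorch sketch of the idea is given below: a convolutional generator predicts a synthetic NIR band from RGB, a PatchGAN-style discriminator scores (RGB, NIR) pairs, and the generated band is concatenated with RGB to form a 4-channel input for forest segmentation. Module names, layer widths, and tile sizes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a convolutional generator predicts a
# synthetic NIR band from RGB; the result is stacked with RGB as a 4-channel
# input for a downstream forest segmentation model.
import torch
import torch.nn as nn

class NIRGenerator(nn.Module):
    """Toy encoder-decoder mapping a 3-channel RGB tile to a 1-channel NIR tile."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # reflectance scaled to [0, 1]
        )

    def forward(self, rgb):
        return self.net(rgb)

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic that scores (RGB, NIR) pairs as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, rgb, nir):
        return self.net(torch.cat([rgb, nir], dim=1))

generator = NIRGenerator()
rgb = torch.rand(2, 3, 256, 256)            # batch of RGB tiles
fake_nir = generator(rgb)                   # synthetic NIR band
rgbn = torch.cat([rgb, fake_nir], dim=1)    # 4-channel input for segmentation
print(rgbn.shape)                           # torch.Size([2, 4, 256, 256])
```

Concatenating the generated band leaves the downstream segmentation network unchanged except for the number of input channels in its first layer.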
Related papers
- Towards RGB-NIR Cross-modality Image Registration and Beyond [21.475871648254564]
This paper focuses on the area of RGB(visible)-NIR(near-infrared) cross-modality image registration.
We first present the RGB-NIR Image Registration (RGB-NIR-IRegis) benchmark, which, for the first time, enables fair and comprehensive evaluations.
We then design several metrics to reveal the toxic impact of inconsistent local features between visible and infrared images on the model performance.
arXiv Detail & Related papers (2024-05-30T10:25:50Z)
- Near-Infrared and Low-Rank Adaptation of Vision Transformers in Remote Sensing [3.2088888904556123]
Plant health can be monitored dynamically using multispectral sensors that measure Near-Infrared reflectance (NIR).
Despite this potential, obtaining and annotating high-resolution NIR images poses a significant challenge for training deep neural networks.
This study investigates the potential benefits of using vision transformer (ViT) backbones pre-trained in the RGB domain, with low-rank adaptation for downstream tasks in the NIR domain.
arXiv Detail & Related papers (2024-05-28T07:24:07Z)
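As a rough illustration of the low-rank adaptation mentioned above (not the paper's implementation), a frozen linear projection from an RGB-pretrained ViT can be augmented with a small trainable low-rank update for the NIR domain; the layer sizes and names below are assumptions.

```python
# Hedged sketch of low-rank adaptation (LoRA) applied to one frozen linear
# layer of an RGB-pretrained ViT; sizes and names are illustrative only.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():     # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen projection plus a small trainable low-rank correction
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

frozen = nn.Linear(768, 768)                 # e.g. an attention projection of a ViT
adapted = LoRALinear(frozen, rank=8)
tokens = torch.randn(1, 197, 768)            # [batch, tokens, dim]
print(adapted(tokens).shape)                 # torch.Size([1, 197, 768])
```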
- NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset [53.79524776100983]
Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows the potential to address this issue.
Existing works still struggle with taking advantage of NIR information effectively for real-world image denoising.
We propose an efficient Selective Fusion Module (SFM) that can be plugged into advanced denoising networks in a plug-and-play manner.
arXiv Detail & Related papers (2024-04-12T14:54:26Z)
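The summary above only names the Selective Fusion Module; a generic gated-fusion block (a hedged stand-in, not the paper's SFM) conveys the plug-in idea of weighting NIR features before merging them with RGB features.

```python
# Illustrative gated fusion of RGB and NIR feature maps; a generic sketch,
# not the SFM described in the paper.
import torch
import torch.nn as nn

class GatedNIRFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # predict a per-pixel, per-channel gate from the concatenated features
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid()
        )

    def forward(self, rgb_feat, nir_feat):
        g = self.gate(torch.cat([rgb_feat, nir_feat], dim=1))
        return rgb_feat + g * nir_feat   # selectively inject NIR information

fuse = GatedNIRFusion(64)
rgb_feat = torch.randn(1, 64, 128, 128)
nir_feat = torch.randn(1, 64, 128, 128)
print(fuse(rgb_feat, nir_feat).shape)    # torch.Size([1, 64, 128, 128])
```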
- SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets from clutter backgrounds.
With the development of Transformers, the scale of SIRST models is constantly increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z)
- Tensor Factorization for Leveraging Cross-Modal Knowledge in Data-Constrained Infrared Object Detection [22.60228799622782]
A key bottleneck in object detection in IR images is the lack of sufficient labeled training data.
We seek to leverage cues from the RGB modality to scale object detectors to the IR modality, while preserving model performance in the RGB modality.
We first pretrain these factor matrices on the RGB modality, for which plenty of training data are assumed to exist and then augment only a few trainable parameters for training on the IR modality to avoid over-fitting.
arXiv Detail & Related papers (2023-09-28T16:55:52Z)
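To make the factor-matrix idea above concrete, here is a hypothetical sketch in which a layer's weight is assembled from large factors pretrained on RGB and kept frozen, while only a small core matrix is trained on IR data. It illustrates the general principle rather than the paper's exact factorization.

```python
# Hypothetical illustration of shared factorized weights: large factors are
# assumed to be pretrained on the RGB modality and frozen, and only a small
# core matrix is updated when finetuning on IR.
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    def __init__(self, in_dim: int = 512, out_dim: int = 512, rank: int = 16):
        super().__init__()
        # factors assumed to come from RGB pretraining, kept frozen for IR
        self.a = nn.Parameter(torch.randn(rank, in_dim) * 0.02, requires_grad=False)
        self.b = nn.Parameter(torch.randn(out_dim, rank) * 0.02, requires_grad=False)
        # small core: the only parameters trained on the IR modality
        self.core = nn.Parameter(torch.eye(rank))

    def forward(self, x):
        weight = self.b @ self.core @ self.a   # reconstruct the (out_dim, in_dim) weight
        return x @ weight.T

layer = FactorizedLinear()
x = torch.randn(4, 512)
print(layer(x).shape)                                    # torch.Size([4, 512])
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)                                         # 256: only the core is updated
```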
- Visible and infrared self-supervised fusion trained on a single example [1.1188842018827656]
Multispectral imaging is an important task in image processing and computer vision.
The problem of visible (RGB) to near-infrared (NIR) image fusion has become particularly timely.
The proposed approach fuses these two channels by training a convolutional neural network with self-supervised learning (SSL) on a single example.
Experiments demonstrate that the proposed approach achieves similar or better qualitative and quantitative multispectral fusion results.
arXiv Detail & Related papers (2023-07-09T05:25:46Z)
- Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images [89.81919625224103]
Training deep models for RGB-D salient object detection (SOD) often requires a large number of labeled RGB-D images.
We present a Dual-Semi RGB-D Salient Object Detection Network (DS-Net) to leverage unlabeled RGB images for boosting RGB-D saliency detection.
arXiv Detail & Related papers (2022-01-01T03:02:27Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- MobileSal: Extremely Efficient RGB-D Salient Object Detection [62.04876251927581]
This paper introduces a novel network, MobileSal, which focuses on efficient RGB-D salient object detection (SOD).
We propose an implicit depth restoration (IDR) technique to strengthen the feature representation capability of mobile networks for RGB-D SOD.
With IDR and CPR incorporated, MobileSal performs favorably against state-of-the-art methods on seven challenging RGB-D SOD datasets.
arXiv Detail & Related papers (2020-12-24T04:36:42Z)
- Adaptive Weighted Attention Network with Camera Spectral Sensitivity Prior for Spectral Reconstruction from RGB Images [22.26917280683572]
We propose a novel adaptive weighted attention network (AWAN) for spectral reconstruction.
AWCA and PSNL modules are developed to reallocate channel-wise feature responses.
In the NTIRE 2020 Spectral Reconstruction Challenge, our entries obtain the 1st ranking on the Clean track and the 3rd place on the Real World track.
arXiv Detail & Related papers (2020-05-19T09:21:01Z)
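For the spectral reconstruction entry above, a plain squeeze-and-excitation style channel attention block (an illustration, not the paper's AWCA or PSNL modules) shows what reallocating channel-wise feature responses looks like in code.

```python
# Generic channel-attention block illustrating how channel-wise feature
# responses can be reweighted; not the AWCA/PSNL modules from the paper.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                      # reweight channel responses

att = ChannelAttention(64)
feat = torch.randn(1, 64, 32, 32)
print(att(feat).shape)   # torch.Size([1, 64, 32, 32])
```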