SAGE-NDVI: A Stereotype-Breaking Evaluation Metric for Remote Sensing
Image Dehazing Using Satellite-to-Ground NDVI Knowledge
- URL: http://arxiv.org/abs/2306.06288v1
- Date: Fri, 9 Jun 2023 22:29:42 GMT
- Authors: Zepeng Liu, Zhicheng Yang, Mingye Zhu, Andy Wong, Yibing Wei, Mei Han,
Jun Yu, Jui-Hsin Lai
- Abstract summary: In our industrial deployment scenario based on remote sensing (RS) images, the quality of image dehazing directly affects the grade of our crop identification and growth monitoring products.
In this paper, we design a new objective metric for RS image dehazing evaluation.
- Score: 15.389028295437974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image dehazing is a meaningful low-level computer vision task and can be
applied to a variety of contexts. In our industrial deployment scenario based
on remote sensing (RS) images, the quality of image dehazing directly affects
the grade of our crop identification and growth monitoring products. However,
the widely used peak signal-to-noise ratio (PSNR) and structural similarity
index (SSIM) provide only an ambiguous indication of visual quality. In this paper, we design
a new objective metric for RS image dehazing evaluation. Our proposed metric
leverages a ground-based phenology observation resource to calculate the
vegetation index error between RS and ground images on a hazy date. Extensive
experiments validate that our metric appropriately evaluates different dehazing
models and is in line with human visual perception.
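The metric is built around the Normalized Difference Vegetation Index, NDVI = (NIR - Red) / (NIR + Red), compared between the dehazed satellite image and a ground-based phenology observation taken on the same hazy date. The paper summary above does not include reference code, so the following is only a minimal sketch of that idea: the function names, the scalar `ground_ndvi` input, and the mean-absolute-error aggregation are assumptions standing in for whatever procedure SAGE-NDVI actually uses.

```python
import numpy as np


def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)


def vegetation_index_error(rs_nir: np.ndarray,
                           rs_red: np.ndarray,
                           ground_ndvi: float) -> float:
    """Illustrative satellite-to-ground NDVI discrepancy on a hazy date.

    `ground_ndvi` is assumed to come from a ground-based phenology
    observation resource; the mean absolute difference used here is a
    placeholder for the paper's actual aggregation scheme.
    """
    rs_ndvi = ndvi(rs_nir, rs_red)
    return float(np.mean(np.abs(rs_ndvi - ground_ndvi)))


if __name__ == "__main__":
    # Toy usage: a dehazed image patch whose NDVI is compared to a ground reading.
    rng = np.random.default_rng(0)
    nir_band = rng.uniform(0.3, 0.6, size=(64, 64))   # hypothetical NIR reflectance
    red_band = rng.uniform(0.05, 0.2, size=(64, 64))  # hypothetical red reflectance
    print(vegetation_index_error(nir_band, red_band, ground_ndvi=0.55))
```

Under this reading, a lower error indicates that the dehazed image preserves vegetation reflectance consistent with the ground observation, which is how the metric can track perceived dehazing quality in the authors' crop-monitoring setting.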
Related papers
- Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image Detectors [62.63467652611788]
We introduce SEMI-TRUTHS, featuring 27,600 real images, 223,400 masks, and 1,472,700 AI-augmented images.
Each augmented image is accompanied by metadata for standardized and targeted evaluation of detector robustness.
Our findings suggest that state-of-the-art detectors exhibit varying sensitivities to the types and degrees of perturbations, data distributions, and augmentation methods used.
arXiv Detail & Related papers (2024-11-12T01:17:27Z)
- Semantic Guided Large Scale Factor Remote Sensing Image Super-resolution with Generative Diffusion Prior [13.148815217684277]
Large scale factor super-resolution (SR) algorithms are vital for maximizing the utilization of low-resolution (LR) satellite data captured from orbit.
Existing methods confront challenges in recovering SR images with clear textures and correct ground objects.
We introduce a novel framework, the Semantic Guided Diffusion Model (SGDM), designed for large scale factor remote sensing image super-resolution.
arXiv Detail & Related papers (2024-05-11T16:06:16Z)
- Estimating Physical Information Consistency of Channel Data Augmentation for Remote Sensing Images [3.063197102484114]
We propose an approach to estimate whether a channel augmentation technique affects the physical information of RS images.
We compare the scores associated with original and augmented pixel signatures to evaluate the physical consistency.
arXiv Detail & Related papers (2024-03-21T16:48:45Z)
- Dehazed Image Quality Evaluation: From Partial Discrepancy to Blind Perception [35.257798506356814]
Image dehazing aims to restore spatial details from hazy images.
We propose a Reduced-Reference dehazed image quality evaluation approach based on Partial Discrepancy.
We extend it to a No-Reference quality assessment metric with Blind Perception.
arXiv Detail & Related papers (2022-11-22T23:49:14Z)
- Remote Sensing Image Classification using Transfer Learning and Attention Based Deep Neural Network [59.86658316440461]
We propose a deep learning based framework for RSISC, which makes use of the transfer learning technique and multihead attention scheme.
The proposed deep learning framework is evaluated on the benchmark NWPU-RESISC45 dataset and achieves the best classification accuracy of 94.7%.
arXiv Detail & Related papers (2022-06-20T10:05:38Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual stream network to jointly explore the textural and structural information for quality prediction, dubbed TSNet.
By mimicking the human vision system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism to make visually sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z)
- Remote Sensing Image Classification with the SEN12MS Dataset [1.7894377200944511]
We present a classification-oriented conversion of the SEN12MS dataset.
Using that, we provide results for several baseline models based on two standard CNN architectures and different input data configurations.
Our results support the benchmarking of remote sensing image classification and provide insights into the benefits of multi-spectral data and multi-sensor data fusion over conventional RGB imagery.
arXiv Detail & Related papers (2021-04-01T18:15:16Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Beyond Photometric Consistency: Gradient-based Dissimilarity for Improving Visual Odometry and Stereo Matching [46.27086269084186]
In this paper, we investigate a new metric for registering images that builds upon the idea of the photometric error.
We integrate both measures into stereo estimation as well as visual odometry systems and show clear benefits for typical disparity and direct image registration tasks.
arXiv Detail & Related papers (2020-04-08T16:13:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.