Automatic Detection of Natural Disaster Effect on Paddy Field from
Satellite Images using Deep Learning Techniques
- URL: http://arxiv.org/abs/2304.00622v1
- Date: Sun, 2 Apr 2023 20:37:22 GMT
- Title: Automatic Detection of Natural Disaster Effect on Paddy Field from
Satellite Images using Deep Learning Techniques
- Authors: Tahmid Alavi Ishmam, Amin Ahsan Ali, Md Ahsraful Amin, A K M Mahbubur
Rahman
- Abstract summary: This paper aims to detect rice field damage from natural disasters in Bangladesh using high-resolution satellite imagery.
Authors developed ground truth data for rice field damage from the field level.
- Score: 2.142991584970654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper aims to detect rice field damage from natural disasters in
Bangladesh using high-resolution satellite imagery. The authors developed
ground truth data for rice field damage from the field level. At first, NDVI
differences before and after the disaster are calculated to identify possible
crop loss. The areas equal to and above the 0.33 threshold are marked as crop
loss areas, as significant changes are observed there. The authors also
verified crop loss areas by collecting data from local farmers. Later, two
band combinations of satellite data, RGB (Red, Green, Blue) and FCI (False
Color Infrared), are used to detect crop loss areas. We used the NDVI
difference images as ground truth to train the DeepLabV3plus model. With RGB
we got an IoU of 0.41, and with FCI we got an IoU of 0.51. As FCI uses the
NIR, Red, and Blue bands, and NDVI is the normalized difference between the
NIR and Red bands, FCI's higher IoU score relative to RGB is expected. Still,
RGB does not perform very badly here, so where other bands are not available,
RGB can be used to identify crop loss areas to some extent. The ground truth
developed in this paper can be used for segmentation models with very high
resolution RGB-only images such as Bing, Google, etc.
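The NDVI-difference thresholding described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the epsilon guard, and the toy reflectance values are all assumptions; only the NDVI formula and the 0.33 threshold come from the abstract.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # Small epsilon avoids division by zero on dark pixels (an assumption,
    # not from the paper).
    return (nir - red) / (nir + red + 1e-9)

def crop_loss_mask(nir_before, red_before, nir_after, red_after,
                   threshold=0.33):
    """Mark pixels whose NDVI drop from before to after the disaster is
    equal to or above the threshold as crop loss (threshold per the paper)."""
    diff = ndvi(nir_before, red_before) - ndvi(nir_after, red_after)
    return diff >= threshold

# Toy example: one pixel that loses vegetation, one that stays healthy.
nir_b = np.array([0.8, 0.8]); red_b = np.array([0.1, 0.1])  # NDVI ~0.78
nir_a = np.array([0.3, 0.7]); red_a = np.array([0.3, 0.1])  # NDVI 0.0, ~0.75
print(crop_loss_mask(nir_b, red_b, nir_a, red_a))  # [ True False]
```

The resulting boolean mask plays the role of the ground-truth segmentation labels the authors used to train DeepLabV3plus.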
Related papers
- A UAV-Based Multispectral and RGB Dataset for Multi-Stage Paddy Crop Monitoring in Indian Agricultural Fields [5.329135985749616]
We present a large-scale unmanned aerial vehicle (UAV)-based RGB and multispectral image dataset collected over paddy fields in the Andhra Pradesh region of India.
We used a 20-megapixel RGB camera and a 5-megapixel four-band multispectral camera capturing red, green, red-edge, and near-infrared bands.
Our dataset comprises 42,430 raw images (415 GB) captured over 5 acres with 1 cm/pixel ground sampling distance.
arXiv Detail & Related papers (2026-01-03T06:19:18Z) - SAGA: Semantic-Aware Gray color Augmentation for Visible-to-Thermal Domain Adaptation across Multi-View Drone and Ground-Based Vision Systems [1.891522135443594]
Domain-adaptive thermal object detection plays a key role in facilitating visible (RGB)-to-thermal (IR) adaptation.
The inherent limitations of IR images, such as the lack of color and texture cues, pose challenges for RGB-trained models.
We propose Semantic-Aware Gray color Augmentation (SAGA), a novel strategy for mitigating color bias and bridging the domain gap.
arXiv Detail & Related papers (2025-04-22T09:22:11Z) - HVI: A New Color Space for Low-light Image Enhancement [58.8280819306909]
We propose a new color space for Low-Light Image Enhancement (LLIE) based on Horizontal/Vertical-Intensity (HVI)
HVI is defined by polarized HS maps and learnable intensity, while the latter compresses the low-light regions to remove the black artifacts.
To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is introduced.
arXiv Detail & Related papers (2025-02-27T16:59:51Z) - Towards RAW Object Detection in Diverse Conditions [65.30190654593842]
We introduce the AODRaw dataset, which offers 7,785 high-resolution real RAW images with 135,601 annotated instances spanning 62 categories.
We find that sRGB pre-training constrains the potential of RAW object detection due to the domain gap between sRGB and RAW.
We distill the knowledge from an off-the-shelf model pre-trained on the sRGB domain to assist RAW pre-training.
arXiv Detail & Related papers (2024-11-24T01:23:04Z) - BSRAW: Improving Blind RAW Image Super-Resolution [63.408484584265985]
We tackle blind image super-resolution in the RAW domain.
We design a realistic degradation pipeline tailored specifically for training models with raw sensor data.
Our BSRAW models trained with our pipeline can upscale real-scene RAW images and improve their quality.
arXiv Detail & Related papers (2023-12-24T14:17:28Z) - HalluciDet: Hallucinating RGB Modality for Person Detection Through Privileged Information [12.376615603048279]
HalluciDet is an IR-RGB image translation model for object detection.
We empirically compare our approach against state-of-the-art methods for image translation and for fine-tuning on IR.
arXiv Detail & Related papers (2023-10-07T03:00:33Z) - Edge-guided Multi-domain RGB-to-TIR image Translation for Training
Vision Tasks with Challenging Labels [12.701191873813583]
The insufficient number of annotated thermal infrared (TIR) image datasets hinders TIR image-based deep learning networks from achieving performance comparable to that of RGB.
We propose a modified multi-domain RGB-to-TIR image translation model focused on edge preservation to employ annotated RGB images with challenging labels.
We have enabled the supervised learning of deep TIR image-based optical flow estimation and object detection, improving end-point error by 56.5% on average and achieving a best object detection mAP of 23.9%, respectively.
arXiv Detail & Related papers (2023-01-30T06:44:38Z) - Reversed Image Signal Processing and RAW Reconstruction. AIM 2022
Challenge Report [109.2135194765743]
This paper introduces the AIM 2022 Challenge on Reversed Image Signal Processing and RAW Reconstruction.
We aim to recover raw sensor images from the corresponding RGBs without metadata and, by doing this, "reverse" the ISP transformation.
arXiv Detail & Related papers (2022-10-20T10:43:53Z) - Translation, Scale and Rotation: Cross-Modal Alignment Meets
RGB-Infrared Vehicle Detection [10.460296317901662]
We find that detection in aerial RGB-IR images suffers from cross-modal weak misalignment problems.
We propose a Translation-Scale-Rotation Alignment (TSRA) module to address the problem.
A two-stream feature alignment detector (TSFADet) based on the TSRA module is constructed for RGB-IR object detection in aerial images.
arXiv Detail & Related papers (2022-09-28T03:06:18Z) - Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images [89.81919625224103]
Training deep models for RGB-D salient object detection (SOD) often requires a large number of labeled RGB-D images.
We present a Dual-Semi RGB-D Salient Object Detection Network (DS-Net) to leverage unlabeled RGB images for boosting RGB-D saliency detection.
arXiv Detail & Related papers (2022-01-01T03:02:27Z) - Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision [76.41657124981549]
This paper presents a joint learning model for image alignment and RAW-to-sRGB mapping.
Experiments show that our method performs favorably against state-of-the-arts on ZRR and SR-RAW datasets.
arXiv Detail & Related papers (2021-08-18T12:41:36Z) - Generation of the NIR spectral Band for Satellite Images with
Convolutional Neural Networks [0.0]
Deep neural networks allow generating artificial spectral information, such as for the image colorization problem.
We study the generative adversarial network (GAN) approach in the task of the NIR band generation using just RGB channels of high-resolution satellite imagery.
arXiv Detail & Related papers (2021-06-13T15:14:57Z) - Synergistic saliency and depth prediction for RGB-D saliency detection [76.27406945671379]
Existing RGB-D saliency datasets are small, which may lead to overfitting and limited generalization for diverse scenarios.
We propose a semi-supervised system for RGB-D saliency detection that can be trained on smaller RGB-D saliency datasets without saliency ground truth.
arXiv Detail & Related papers (2020-07-03T14:24:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.