Improved Flood Insights: Diffusion-Based SAR to EO Image Translation
- URL: http://arxiv.org/abs/2307.07123v1
- Date: Fri, 14 Jul 2023 02:19:23 GMT
- Title: Improved Flood Insights: Diffusion-Based SAR to EO Image Translation
- Authors: Minseok Seo, Youngtack Oh, Doyi Kim, Dongmin Kang, Yeji Choi
- Abstract summary: This paper introduces Diffusion-Based SAR to EO Image Translation (DSE), a novel framework that converts SAR images into EO images, thereby making flood insights easier for human analysts to interpret.
Experimental results on the Sen1Floods11 and SEN12-FLOOD datasets confirm that the DSE framework not only delivers enhanced visual information but also improves performance.
- Score: 4.994315051443544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Driven by rapid climate change, the frequency and intensity of flood events
are increasing. Electro-Optical (EO) satellite imagery is commonly utilized for
rapid response. However, its utility in flood situations is hampered by
issues such as cloud cover and limitations during nighttime, making accurate
assessment of damage challenging. Several alternative flood detection
techniques utilizing Synthetic Aperture Radar (SAR) data have been proposed.
Despite the advantages of SAR over EO in the aforementioned situations, SAR
presents a distinct drawback: human analysts often struggle with data
interpretation. To tackle this issue, this paper introduces a novel framework,
Diffusion-Based SAR to EO Image Translation (DSE). The DSE framework converts
SAR images into EO images, thereby enhancing the interpretability of flood
insights for humans. Experimental results on the Sen1Floods11 and SEN12-FLOOD
datasets confirm that the DSE framework not only delivers enhanced visual
information but also improves performance across all tested flood segmentation
baselines.
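To make the pipeline concrete, here is a minimal sketch (not the authors' code; `eps_model`, `segmenter`, the channel-concatenation conditioning, and the noise schedule are assumptions): a conditional diffusion model denoises Gaussian noise into an EO-like image while conditioned on the SAR input, and the translated image is then passed to an ordinary flood-segmentation baseline.

```python
import torch

@torch.no_grad()
def sar_to_eo(eps_model, sar, T=1000, img_channels=3):
    """DDPM-style ancestral sampling, conditioned on SAR by channel concatenation."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(sar.size(0), img_channels, *sar.shape[-2:])  # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(torch.cat([x, sar], dim=1), t)           # predict the added noise
        coef = (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt()
        x = (x - coef * eps) / alphas[t].sqrt()                  # posterior mean
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)        # re-noise except at t = 0
    return x                                                     # EO-like translation

@torch.no_grad()
def flood_mask(eps_model, segmenter, sar):
    eo_hat = sar_to_eo(eps_model, sar)          # SAR -> pseudo-EO image
    return segmenter(eo_hat).argmax(dim=1)      # per-pixel flood / non-flood prediction
```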
Related papers
- C-DiffSET: Leveraging Latent Diffusion for SAR-to-EO Image Translation with Confidence-Guided Reliable Object Generation [23.63992950769041]
C-DiffSET is a framework leveraging pretrained Latent Diffusion Model (LDM) extensively trained on natural images.
Remarkably, we find that the pretrained VAE encoder aligns SAR and EO images in the same latent space, even with varying noise levels in SAR inputs.
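A minimal sketch of that latent-alignment observation, assuming a Stable-Diffusion-style `AutoencoderKL` from the `diffusers` library (the checkpoint name, the 3-channel tiling of single-band SAR, and the paired input tensors are illustrative assumptions, not the C-DiffSET setup):

```python
import torch
from diffusers import AutoencoderKL

# Pretrained LDM autoencoder; the checkpoint name is illustrative only.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

@torch.no_grad()
def latent_similarity(sar, eo):
    """sar: (B, 1, H, W) backscatter, eo: (B, 3, H, W) optical, both scaled to [-1, 1]."""
    z_sar = vae.encode(sar.repeat(1, 3, 1, 1)).latent_dist.mean   # tile SAR to 3 channels
    z_eo = vae.encode(eo).latent_dist.mean
    # High cosine similarity between paired latents suggests a shared latent space.
    return torch.cosine_similarity(z_sar.flatten(1), z_eo.flatten(1))
```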
arXiv Detail & Related papers (2024-11-16T12:28:40Z)
- Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks [0.0]
This research contributes to remote sensing by bridging the gap between SAR and EO imagery, offering a novel tool for enhanced data interpretation.
The results show significant improvements in interpretability, making SAR data more accessible for analysts familiar with EO imagery.
arXiv Detail & Related papers (2024-09-07T14:31:46Z)
- Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
The visual imaging quality inevitably suffers from several kinds of degradations under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z)
- Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation [5.578820789388206]
This paper introduces a conditional image-to-image translation approach based on the Brownian Bridge Diffusion Model (BBDM).
Comprehensive experiments were conducted on the MSAW dataset, a collection of paired SAR and optical images at 0.5 m very-high resolution (VHR).
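For reference, the forward (noising) step of a Brownian bridge diffusion between a SAR image x0 and its optical counterpart y can be sketched as below; the schedule follows the commonly cited BBDM formulation and is an assumption rather than this paper's exact implementation.

```python
import torch

def brownian_bridge_sample(x0, y, t, T, s=1.0):
    """Sample x_t ~ N((1 - m_t) x0 + m_t y, delta_t I) on the bridge from x0 to y."""
    m_t = t / T                                   # interpolation coefficient in [0, 1]
    delta_t = 2.0 * s * (m_t - m_t ** 2)          # variance vanishes at both endpoints
    mean = (1.0 - m_t) * x0 + m_t * y
    return mean + delta_t ** 0.5 * torch.randn_like(x0)
```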
arXiv Detail & Related papers (2024-08-15T05:43:46Z)
- Causality-informed Rapid Post-hurricane Building Damage Detection in Large Scale from InSAR Imagery [6.331801334141028]
Timely and accurate assessment of hurricane-induced building damage is crucial for effective post-hurricane response and recovery efforts.
Remote sensing technologies now provide large-scale optical or Interferometric Synthetic Aperture Radar (InSAR) imagery immediately after a disastrous event.
However, this InSAR imagery often contains highly noisy, mixed signals induced by co-occurring or co-located building damage, flooding, flood- or wind-induced vegetation changes, and anthropogenic activities.
This paper introduces an approach for rapid post-hurricane building damage detection from InSAR imagery.
arXiv Detail & Related papers (2023-10-02T18:56:05Z)
- SAR Despeckling using a Denoising Diffusion Probabilistic Model [52.25981472415249]
The presence of speckle degrades the image quality and adversely affects the performance of SAR image understanding applications.
We introduce SAR-DDPM, a denoising diffusion probabilistic model for SAR despeckling.
The proposed method achieves significant improvements in both quantitative and qualitative results over the state-of-the-art despeckling methods.
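A hedged sketch of one conditional-DDPM training step for despeckling follows; the function and conditioning-by-concatenation are assumptions, and the published SAR-DDPM details differ.

```python
import torch
import torch.nn.functional as F

def ddpm_despeckle_loss(eps_model, clean, speckled, alpha_bars):
    """Standard noise-prediction objective, conditioned on the speckled observation."""
    t = torch.randint(0, alpha_bars.numel(), (clean.size(0),))
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(clean)
    x_t = a_bar.sqrt() * clean + (1 - a_bar).sqrt() * noise     # noised clean image
    pred = eps_model(torch.cat([x_t, speckled], dim=1), t)      # condition on speckle
    return F.mse_loss(pred, noise)                              # simple DDPM loss
```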
arXiv Detail & Related papers (2022-06-09T14:00:26Z)
- Visualization of Deep Transfer Learning In SAR Imagery [0.0]
We consider transfer learning to leverage deep features from a network trained on an EO ships dataset.
By exploring the network activations in the form of class-activation maps, we gain insight on how a deep network interprets a new modality.
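Class-activation maps of this kind can be reproduced with a standard Grad-CAM computation; the sketch below uses generic PyTorch hooks with placeholder `model` and `layer` names rather than the authors' setup.

```python
import torch

def grad_cam(model, layer, x, class_idx):
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, class_idx]                          # logit of the class of interest
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)       # global-average the gradients
    cam = torch.relu((weights * feats[0]).sum(dim=1))       # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                         # normalize to [0, 1]
```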
arXiv Detail & Related papers (2021-03-20T00:16:15Z)
- Progressive Depth Learning for Single Image Dehazing [56.71963910162241]
Existing dehazing methods often ignore the depth cues and fail in distant areas where heavier haze disturbs the visibility.
We propose a deep end-to-end model that iteratively estimates image depths and transmission maps.
Our approach benefits from explicitly modeling the inner relationship of image depth and transmission map, which is especially effective for distant hazy areas.
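The underlying physics is the standard atmospheric-scattering model I = J·t + A·(1 − t); once the transmission t and airlight A are estimated (here simply assumed to come from the network), the clear scene J is recovered as in this sketch.

```python
import torch

def recover_scene(hazy, transmission, airlight, t_min=0.1):
    """Invert I = J * t + A * (1 - t) to obtain the haze-free scene J."""
    t = transmission.clamp(min=t_min)              # avoid amplifying noise where t -> 0
    return (hazy - airlight) / t + airlight        # J = (I - A) / t + A
```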
arXiv Detail & Related papers (2021-02-21T05:24:18Z)
- Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easy-implemented Convolutional Neural Network (CNN) SR models.
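The degradation step implied by this pipeline can be sketched as follows: blur an HR image with an estimated kernel and downsample it to synthesize LR counterparts whose statistics match the real data. Kernel estimation itself (the FCA contribution) is omitted, and `kernel` is assumed to be given.

```python
import torch
import torch.nn.functional as F

def synthesize_lr(hr, kernel, scale=4):
    """hr: (B, C, H, W); kernel: (k, k) estimated blur kernel."""
    c = hr.size(1)
    k = kernel.expand(c, 1, *kernel.shape)                      # one kernel per channel
    blurred = F.conv2d(hr, k, padding=kernel.size(-1) // 2, groups=c)
    return blurred[..., ::scale, ::scale]                       # direct downsampling
```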
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
- Dense Attention Fluid Network for Salient Object Detection in Optical Remote Sensing Images [193.77450545067967]
We propose an end-to-end Dense Attention Fluid Network (DAFNet) for salient object detection in optical remote sensing images (RSIs).
A Global Context-aware Attention (GCA) module is proposed to adaptively capture long-range semantic context relationships.
We construct a new and challenging optical RSI dataset for SOD that contains 2,000 images with pixel-wise saliency annotations.
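A small global-context attention block in the spirit of the GCA module is sketched below; the real DAFNet design differs, and all layer choices here are illustrative.

```python
import torch
import torch.nn as nn

class GlobalContextAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)      # per-pixel attention logits
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, x):                                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        attn = self.score(x).view(b, 1, h * w).softmax(dim=-1)  # global spatial weights
        context = torch.bmm(x.view(b, c, h * w), attn.transpose(1, 2))  # (B, C, 1)
        context = self.transform(context.view(b, c, 1, 1))
        return x + context                                      # inject long-range context
```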
arXiv Detail & Related papers (2020-11-26T06:14:10Z)
- Fusion of Deep and Non-Deep Methods for Fast Super-Resolution of Satellite Images [54.44842669325082]
This work proposes to bridge the gap between image quality and cost by improving image quality via super-resolution (SR).
We design an SR framework that analyzes the regional information content on each patch of the low-resolution image.
We show substantial decrease in inference time while achieving similar performance to that of existing deep SR methods.
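The routing idea can be sketched as follows: low-detail patches are upscaled with cheap interpolation, while a deep SR model is invoked only where the local information content (approximated here by variance) is high. The threshold, patch size, and `deep_sr` callable are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fused_sr(lr, deep_sr, scale=4, patch=32, var_thresh=5e-3):
    """Upscale smooth patches with bicubic interpolation, detailed patches with deep_sr."""
    out = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
    for i in range(0, lr.size(-2), patch):
        for j in range(0, lr.size(-1), patch):
            tile = lr[..., i:i + patch, j:j + patch]
            if tile.var() > var_thresh:                         # detailed region -> deep model
                out[..., i * scale:(i + tile.size(-2)) * scale,
                         j * scale:(j + tile.size(-1)) * scale] = deep_sr(tile)
    return out
```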
arXiv Detail & Related papers (2020-08-03T13:55:39Z)