Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation
- URL: http://arxiv.org/abs/2408.07947v3
- Date: Wed, 11 Sep 2024 04:02:50 GMT
- Title: Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation
- Authors: Seon-Hoon Kim, Dae-Won Chung
- Abstract summary: This paper introduces a conditional image-to-image translation approach based on the Brownian Bridge Diffusion Model (BBDM).
We conducted comprehensive experiments on the MSAW dataset, a collection of paired 0.5 m Very-High-Resolution (VHR) SAR and optical images.
- Score: 5.578820789388206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic Aperture Radar (SAR) imaging technology provides the unique advantage of being able to collect data regardless of weather conditions and time. However, SAR images exhibit complex backscatter patterns and speckle noise, which necessitate expertise for interpretation. Research on translating SAR images into optical-like representations has been conducted to aid the interpretation of SAR data. Nevertheless, existing studies have predominantly utilized low-resolution satellite imagery datasets and have largely been based on Generative Adversarial Networks (GANs), which are known for their training instability and low fidelity. To overcome these limitations of low-resolution data usage and GAN-based approaches, this paper introduces a conditional image-to-image translation approach based on the Brownian Bridge Diffusion Model (BBDM). We conducted comprehensive experiments on the MSAW dataset, a collection of paired 0.5 m Very-High-Resolution (VHR) SAR and optical images. The experimental results indicate that our method surpasses both the Conditional Diffusion Models (CDMs) and the GAN-based models in diverse perceptual quality metrics.
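The Brownian bridge process that BBDM builds on interpolates between the target image x_0 and the condition image y rather than diffusing toward pure noise. A minimal NumPy sketch of the forward sampling step, assuming the standard bridge schedule m_t = t/T with variance 2(m_t - m_t^2); the function name and shapes are illustrative, not the authors' code:

```python
import numpy as np

def bbdm_forward(x0, y, t, T, rng=None):
    """Sample x_t from a Brownian-bridge forward process q(x_t | x_0, y).

    x0 : target (optical) image, y : condition (SAR) image.
    Uses m_t = t / T and variance delta_t = 2 * (m_t - m_t**2),
    which pins the process to x0 at t=0 and to y at t=T.
    """
    rng = np.random.default_rng() if rng is None else rng
    m_t = t / T
    delta_t = 2.0 * (m_t - m_t**2)
    eps = rng.standard_normal(x0.shape)
    return (1.0 - m_t) * x0 + m_t * y + np.sqrt(delta_t) * eps
```

Note that the variance vanishes at both endpoints, so the chain starts exactly at the optical image and ends exactly at the SAR condition; this endpoint pinning is what distinguishes a bridge from a standard diffusion that terminates in Gaussian noise.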
Related papers
- C-DiffSET: Leveraging Latent Diffusion for SAR-to-EO Image Translation with Confidence-Guided Reliable Object Generation [23.63992950769041]
C-DiffSET is a framework leveraging pretrained Latent Diffusion Model (LDM) extensively trained on natural images.
Remarkably, we find that the pretrained VAE encoder aligns SAR and EO images in the same latent space, even with varying noise levels in SAR inputs.
arXiv Detail & Related papers (2024-11-16T12:28:40Z) - Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks [0.0]
This research contributes to the field of remote sensing by bridging the gap between SAR and EO imagery.
The results show significant improvements in interpretability, making SAR data more accessible for analysts familiar with EO imagery.
Our research contributes to the field of remote sensing by bridging the gap between SAR and EO imagery, offering a novel tool for enhanced data interpretation.
arXiv Detail & Related papers (2024-09-07T14:31:46Z) - SAR to Optical Image Translation with Color Supervised Diffusion Model [5.234109158596138]
This paper introduces an innovative generative model designed to transform SAR images into more intelligible optical images.
We employ SAR images as conditional guides in the sampling process and integrate color supervision to counteract color shift issues.
arXiv Detail & Related papers (2024-07-24T01:11:28Z) - SAR Image Synthesis with Diffusion Models [0.0]
Diffusion models (DMs) have become a popular method for generating synthetic data.
In this work, a specific type of DM, namely the denoising diffusion probabilistic model (DDPM), is adapted to the SAR domain.
We show that DDPM qualitatively and quantitatively outperforms state-of-the-art GAN-based methods for SAR image generation.
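For context, the closed-form forward step of a DDPM, the model class adapted to SAR here, can be sketched in NumPy (schedule values and the function name are illustrative assumptions):

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng=None):
    """Closed-form DDPM forward sample:
    q(x_t | x_0) = N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)."""
    rng = np.random.default_rng() if rng is None else rng
    alpha_bar = np.cumprod(1.0 - betas)[t]  # cumulative product of (1 - beta)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
```

Unlike the bridge formulation, this process ends near pure Gaussian noise as alpha_bar_t approaches zero, and generation reverses it with a learned denoiser.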
arXiv Detail & Related papers (2024-05-13T14:21:18Z) - Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We propose a novel state-of-the-art (SOTA) generative model, which exhibits the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z) - SAR Despeckling via Regional Denoising Diffusion Probabilistic Model [6.154796320245652]
This paper introduces a novel despeckling approach termed Region Denoising Diffusion Probabilistic Model (R-DDPM) based on generative models.
arXiv Detail & Related papers (2024-01-06T04:34:46Z) - Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z) - ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets, generated with different types of experimental setups and associated processing methods, are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Perception Consistency Ultrasound Image Super-resolution via Self-supervised CycleGAN [63.49373689654419]
We propose a new perception consistency ultrasound image super-resolution (SR) method based on self-supervision and the cycle generative adversarial network (CycleGAN).
We first generate the HR fathers and the LR sons of the test ultrasound LR image through image enhancement.
We then make full use of the cycle loss of LR-SR-LR and HR-LR-SR and the adversarial characteristics of the discriminator to promote the generator to produce better perceptually consistent SR results.
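The cycle losses mentioned above penalize the difference between an image and its round-trip reconstruction (e.g. LR -> SR -> LR). A minimal sketch, assuming an L1 formulation; the helper name is ours:

```python
import numpy as np

def l1_cycle_loss(x, x_roundtrip):
    """Mean absolute error between an image and its round-trip
    reconstruction, e.g. the LR input vs. the re-degraded SR output."""
    return float(np.mean(np.abs(x - x_roundtrip)))
```

Minimizing this loss pushes the generator toward SR outputs whose re-degraded versions match the original input, which is the consistency signal the discriminator's adversarial loss alone does not provide.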
arXiv Detail & Related papers (2020-12-28T08:24:04Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easy-implemented Convolutional Neural Network (CNN) SR models.
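The LR-generation step described above (blur the HR image with an estimated degradation kernel, then subsample) can be sketched as follows; the box-blur kernel in the test is a placeholder for an estimated kernel, and the function name is ours:

```python
import numpy as np

def degrade(hr, kernel, scale):
    """Synthesize an LR image from an HR one: convolve with a
    degradation kernel (valid convolution), then subsample by `scale`."""
    kh, kw = kernel.shape
    h, w = hr.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(hr[i:i + kh, j:j + kw] * kernel)
    return out[::scale, ::scale]
```

Training on LR-HR pairs produced this way keeps the simulated degradation consistent with the real scene's frequency characteristics, which is the premise behind estimating the kernel from unsupervised images.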
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.