SatFlow: Generative model based framework for producing High Resolution Gap Free Remote Sensing Imagery
- URL: http://arxiv.org/abs/2502.01098v1
- Date: Mon, 03 Feb 2025 06:40:13 GMT
- Title: SatFlow: Generative model based framework for producing High Resolution Gap Free Remote Sensing Imagery
- Authors: Bharath Irigireddy, Varaprasad Bandaru
- Abstract summary: We present SatFlow, a generative model-based framework that fuses low-resolution MODIS imagery and Landsat observations to produce frequent, high-resolution, gap-free surface reflectance imagery.
Our model, trained via Conditional Flow Matching, generates imagery with well-preserved structural and spectral integrity.
This capability is crucial for downstream applications such as crop phenology tracking and environmental change detection.
- Abstract: Frequent, high-resolution remote sensing imagery is crucial for agricultural and environmental monitoring. Satellites from the Landsat collection offer detailed imagery at 30m resolution but with lower temporal frequency, whereas missions like MODIS and VIIRS provide daily coverage at coarser resolutions. Clouds and cloud shadows contaminate about 55% of optical remote sensing observations, posing additional challenges. To address these challenges, we present SatFlow, a generative model-based framework that fuses low-resolution MODIS imagery and Landsat observations to produce frequent, high-resolution, gap-free surface reflectance imagery. Our model, trained via Conditional Flow Matching, generates imagery with well-preserved structural and spectral integrity. Cloud imputation is treated as an image inpainting task: during inference, the model reconstructs cloud-contaminated pixels and fills gaps caused by scan lines by leveraging the learned generative process. Experimental results demonstrate the capability of our approach to reliably impute cloud-covered regions, which is crucial for downstream applications such as crop phenology tracking and environmental change detection.
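The abstract describes two mechanisms: training the generator with Conditional Flow Matching, and treating cloud imputation as inpainting at inference by leveraging the learned generative process. The sketch below illustrates both ideas in PyTorch under stated assumptions: a straight-line (rectified-flow style) interpolation for the flow-matching target, and an inpainting step that re-imposes observed pixels during ODE integration. The `FlowNet` architecture, band counts, and masking details are hypothetical illustrations, not the authors' released implementation.

```python
# Minimal sketch of Conditional Flow Matching training and mask-guided
# imputation at inference. Everything below (FlowNet, band counts, the
# RePaint-style pixel re-imposition) is an illustrative assumption,
# not the SatFlow authors' code.
import torch
import torch.nn as nn

class FlowNet(nn.Module):
    """Predicts a velocity field v(x_t, t | conditioning imagery)."""
    def __init__(self, bands=6, cond_bands=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands + cond_bands + 1, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, bands, 3, padding=1),
        )

    def forward(self, x_t, t, cond):
        # Broadcast the scalar time t to a channel map and concatenate it
        # with the noisy state and the conditioning (e.g. coarse MODIS) bands.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, cond, t_map], dim=1))

def cfm_loss(model, x1, cond):
    """Conditional Flow Matching: regress the straight-line velocity x1 - x0."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.size(0), device=x1.device)   # uniform time in [0, 1)
    tb = t.view(-1, 1, 1, 1)
    xt = (1 - tb) * x0 + tb * x1                   # point on the straight path
    v_target = x1 - x0                             # its constant velocity
    return ((model(xt, t, cond) - v_target) ** 2).mean()

@torch.no_grad()
def impute(model, obs, mask, cond, steps=50):
    """Euler-integrate the learned ODE from noise toward a clean image,
    pinning cloud-free pixels (mask == 1) to the observation at every step."""
    x = torch.randn_like(obs)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((obs.size(0),), i * dt, device=obs.device)
        x = x + dt * model(x, t, cond)
        # Re-impose observed pixels on the partially integrated state.
        t_next = (i + 1) * dt
        x_known = (1 - t_next) * torch.randn_like(obs) + t_next * obs
        x = mask * x_known + (1 - mask) * x
    return x
```

In this reading, `cond` would stack the coarse-resolution (e.g. MODIS) bands for the target date, `obs` is the partially observed Landsat image, and `mask` marks its cloud-free pixels; the actual SatFlow conditioning, network, and sampler may differ.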
Related papers
- SatDiffMoE: A Mixture of Estimation Method for Satellite Image Super-resolution with Latent Diffusion Models [3.839322642354617]
We propose a novel diffusion-based fusion algorithm called SatDiffMoE.
Our algorithm is highly flexible and allows training and inference on an arbitrary number of low-resolution images.
Experimental results show that our proposed SatDiffMoE method achieves superior performance for the satellite image super-resolution tasks.
arXiv Detail & Related papers (2024-06-14T17:58:28Z)
- IDF-CR: Iterative Diffusion Process for Divide-and-Conquer Cloud Removal in Remote-sensing Images [55.40601468843028]
We present an iterative diffusion process for cloud removal (IDF-CR).
IDF-CR is divided into two-stage models that address pixel space and latent space.
In the latent space stage, the diffusion model transforms low-quality cloud removal into high-quality clean output.
arXiv Detail & Related papers (2024-03-18T15:23:48Z)
- Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote Sensing Imagery [48.14610248492785]
Cloud layers severely compromise the quality and effectiveness of optical remote sensing (RS) images.
Existing deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties in accurately reconstructing the original visual authenticity and detailed semantic content of the images.
This work proposes enhancements on both the data and methodology fronts to tackle this challenge.
arXiv Detail & Related papers (2024-01-25T13:14:17Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal from Optical Satellite Images [27.02507384522271]
This paper presents a novel framework called DiffCR, which leverages conditional guided diffusion with deep convolutional networks for high-performance cloud removal for optical satellite imagery.
We introduce a decoupled encoder for conditional image feature extraction, providing a robust color representation to ensure the close similarity of appearance information between the conditional input and the synthesized output.
arXiv Detail & Related papers (2023-08-08T17:34:28Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Boosting Point Clouds Rendering via Radiance Mapping [49.24193509772339]
We focus on boosting the image quality of point cloud rendering with a compact model design.
We simplify the NeRF representation to a spatial mapping function that requires only a single evaluation per pixel.
Our method achieves state-of-the-art rendering on point clouds, outperforming prior works by notable margins.
arXiv Detail & Related papers (2022-10-27T01:25:57Z)
- Cloud removal Using Atmosphere Model [7.259230333873744]
Cloud removal is an essential task in remote sensing data analysis.
We propose to use a scattering model for temporal sequences of images of any scene within the framework of low-rank and sparse models.
We develop a semi-realistic simulation method to produce cloud cover so that various methods can be quantitatively analysed.
arXiv Detail & Related papers (2022-10-05T01:29:19Z)
- Spatial-Temporal Super-Resolution of Satellite Imagery via Conditional Pixel Synthesis [66.50914391487747]
We propose a new conditional pixel synthesis model that uses abundant, low-cost, low-resolution imagery to generate accurate high-resolution imagery.
We show that our model attains photo-realistic sample quality and outperforms competing baselines on a key downstream task -- object counting.
arXiv Detail & Related papers (2021-06-22T02:16:24Z)
- Seeing Through Clouds in Satellite Images [14.84582204034532]
This paper presents a neural-network-based solution to recover pixels occluded by clouds in satellite images.
We leverage radio frequency (RF) signals in the ultra/super-high frequency band that penetrate clouds to help reconstruct the occluded regions in multispectral images.
arXiv Detail & Related papers (2021-06-15T20:01:27Z)
- Predicting Landsat Reflectance with Deep Generative Fusion [2.867517731896504]
Public satellite missions are commonly bound to a trade-off between spatial and temporal resolution.
This hinders their potential to assist vegetation monitoring or humanitarian actions.
We probe the potential of deep generative models to produce high-resolution optical imagery by fusing products with different spatial and temporal characteristics.
arXiv Detail & Related papers (2020-11-09T21:06:04Z)