SAR-W-MixMAE: SAR Foundation Model Training Using Backscatter Power Weighting
- URL: http://arxiv.org/abs/2503.01181v2
- Date: Tue, 04 Mar 2025 05:20:53 GMT
- Title: SAR-W-MixMAE: SAR Foundation Model Training Using Backscatter Power Weighting
- Authors: Ali Caglayan, Nevrez Imamoglu, Toru Kouyama
- Abstract summary: Foundation model approaches such as masked auto-encoders (MAE) and their variations are now being successfully applied to satellite imagery. Because semantic labeling for dataset creation is difficult and SAR images carry more noise than optical images, Synthetic Aperture Radar (SAR) data has seen little exploration for foundation models. In this work, we explore a masked auto-encoder, specifically MixMAE, on Sentinel-1 SAR images and its impact on SAR image classification tasks.
- Score: 3.618534280726541
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Foundation model approaches such as masked auto-encoders (MAE) and their variations are now being successfully applied to satellite imagery. Most ongoing technical validation of foundation models has targeted optical images such as RGB or multi-spectral data. Because semantic labeling for dataset creation is difficult and SAR images contain more noise than optical images, Synthetic Aperture Radar (SAR) data has seen little exploration for foundation models. Therefore, in this work we explore a masked auto-encoder, specifically MixMAE, as a pre-training approach on Sentinel-1 SAR images and study its impact on SAR image classification tasks. Moreover, we propose to exploit the physical characteristics of SAR data by applying a weighting parameter to the auto-encoder training loss (MSE), reducing the effect of speckle noise and of very high backscatter values in the SAR images. The proposed SAR intensity-based weighting of the reconstruction loss demonstrates promising results both in SAR pre-training and in downstream tasks, particularly flood detection, compared with the baseline model.
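The core technical idea is a backscatter-power weighting of the MixMAE reconstruction loss. The snippet below is a minimal PyTorch sketch of an intensity-weighted masked MSE under assumed tensor shapes; the function name `weighted_masked_mse`, the `1 / (intensity + 1)` weighting, and the dummy data are illustrative assumptions, not the paper's exact formulation, which may normalize backscatter power differently.

```python
import torch

def weighted_masked_mse(pred, target, mask, eps=1e-6):
    """Intensity-weighted MSE over masked patches (illustrative sketch).

    pred, target: (B, N, D) reconstructed and original per-patch pixel values
                  (e.g. Sentinel-1 backscatter intensities).
    mask:         (B, N) binary mask, 1 for masked (to-be-reconstructed) patches.
    The weight down-weights very bright pixels (strong scatterers, speckle
    spikes) so they do not dominate the loss; the exact weighting used in
    SAR-W-MixMAE may differ -- this is one plausible inverse-intensity scheme.
    """
    weight = 1.0 / (target.abs() + 1.0)        # high backscatter -> smaller weight
    per_pixel = weight * (pred - target) ** 2  # weighted squared error per pixel
    per_patch = per_pixel.mean(dim=-1)         # average over pixels within a patch
    # Average only over masked patches, as in standard MAE-style training.
    return (per_patch * mask).sum() / (mask.sum() + eps)


if __name__ == "__main__":
    # Toy example: batch of 2 images, 196 patches, 256 values per patch.
    pred = torch.randn(2, 196, 256)
    target = torch.randn(2, 196, 256).abs()    # intensities are non-negative
    mask = (torch.rand(2, 196) > 0.5).float()
    print(weighted_masked_mse(pred, target, mask))
```

Down-weighting very bright pixels keeps strong point scatterers and speckle spikes from dominating the gradient, while the mask restricts the loss to reconstructed patches, as in standard MAE training.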
Related papers
- Enhancing SAR Object Detection with Self-Supervised Pre-training on Masked Auto-Encoders [5.234109158596138]
Self-supervised learning (SSL) is proposed to learn feature representations of SAR images during the pre-training process. The proposed method captures proper latent representations of SAR images and improves model generalization in downstream tasks.
arXiv Detail & Related papers (2025-01-20T03:28:34Z)
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation [5.578820789388206]
This paper introduces a conditional image-to-image translation approach based on the Brownian Bridge Diffusion Model (BBDM).
We conducted comprehensive experiments on the MSAW dataset, a collection of paired SAR and optical images at 0.5 m Very-High-Resolution (VHR).
arXiv Detail & Related papers (2024-08-15T05:43:46Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Non-Visible Light Data Synthesis and Application: A Case Study for Synthetic Aperture Radar Imagery [30.590315753622132]
We explore the "hidden" ability of large-scale pre-trained image generation models, such as Stable Diffusion and Imagen, in non-visible light domains.
We propose a 2-stage low-rank adaptation method, which we call 2LoRA (see the sketch after this list).
In the first stage, the model is adapted using aerial-view regular image data (whose structure matches SAR); in the second stage, the base model from the first stage is further adapted using SAR modality data.
arXiv Detail & Related papers (2023-11-29T09:48:01Z)
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- SAR Despeckling using a Denoising Diffusion Probabilistic Model [52.25981472415249]
The presence of speckle degrades the image quality and adversely affects the performance of SAR image understanding applications.
We introduce SAR-DDPM, a denoising diffusion probabilistic model for SAR despeckling.
The proposed method achieves significant improvements in both quantitative and qualitative results over the state-of-the-art despeckling methods.
arXiv Detail & Related papers (2022-06-09T14:00:26Z)
- Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder, which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z)
- Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot Convolutional Neural Networks [30.410981386006394]
Despeckling is a crucial preliminary step in scene analysis algorithms.
The recent success of deep learning points toward a new generation of despeckling techniques.
We propose a self-supervised Bayesian despeckling method.
arXiv Detail & Related papers (2020-07-04T11:38:48Z)
- SAR2SAR: a semi-supervised despeckling algorithm for SAR images [3.9490074068698]
A self-supervised deep learning algorithm, SAR2SAR, is proposed in this paper.
The strategy to adapt it to SAR despeckling is presented, based on a compensation of temporal changes and a loss function adapted to the statistics of speckle.
Results on real images are discussed, to show the potential of the proposed algorithm.
arXiv Detail & Related papers (2020-06-26T15:07:28Z)
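The 2LoRA entry above describes a two-stage low-rank adaptation: the model is first adapted on aerial-view optical imagery whose structure resembles SAR, and that adapted model is then further adapted on SAR data. The sketch below illustrates the general two-stage LoRA idea only; it is not the paper's Stable Diffusion pipeline, and the `LoRALinear` layer, rank, learning rate, and dummy loaders are assumptions made for demonstration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # keep pre-trained weights frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

def adapt(model, loader, lr=1e-4):
    """One adaptation stage: train only the trainable (LoRA) parameters on the given data."""
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.AdamW(params, lr=lr)
    loss_fn = nn.MSELoss()
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Two-stage adaptation in the spirit of 2LoRA (toy data, illustrative only):
model = LoRALinear(nn.Linear(128, 128))
aerial_loader = [(torch.randn(8, 128), torch.randn(8, 128)) for _ in range(10)]
sar_loader = [(torch.randn(8, 128), torch.randn(8, 128)) for _ in range(10)]
adapt(model, aerial_loader)   # stage 1: aerial-view regular image data
adapt(model, sar_loader)      # stage 2: continue adapting on SAR modality data
```

Keeping the backbone frozen in both stages means only the small low-rank matrices carry the domain shift, which is what makes the sequential optical-to-SAR adaptation inexpensive.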