Enhancing SAR Object Detection with Self-Supervised Pre-training on Masked Auto-Encoders
- URL: http://arxiv.org/abs/2501.11249v1
- Date: Mon, 20 Jan 2025 03:28:34 GMT
- Title: Enhancing SAR Object Detection with Self-Supervised Pre-training on Masked Auto-Encoders
- Authors: Xinyang Pu, Feng Xu
- Abstract summary: Self-supervised learning (SSL) is proposed to learn feature representations of SAR images during the pre-training process. The proposed method captures proper latent representations of SAR images and improves model generalization in downstream tasks.
- Score: 5.234109158596138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised fine-tuning (SFT) methods achieve efficient artificial intelligence interpretation of SAR images by leveraging the representation knowledge of pre-trained models. Because domain-specific pre-trained backbones for SAR imagery are lacking, the conventional strategy is to load foundation models pre-trained on natural-scene datasets such as ImageNet, whose image characteristics differ greatly from those of SAR images. This can hinder performance on downstream tasks when SFT is applied to small-scale annotated SAR data. In this paper, a self-supervised learning (SSL) method of masked image modeling based on Masked Auto-Encoders (MAE) is proposed to learn feature representations of SAR images during pre-training and to benefit the subsequent supervised fine-tuning for object detection in SAR images. Evaluation experiments on the large-scale SAR object detection benchmark SARDet-100k verify that the proposed method captures proper latent representations of SAR images and improves model generalization on downstream tasks by shifting the pre-training domain from natural scenes to SAR images through SSL. The proposed method achieves an improvement of 1.3 mAP on the SARDet-100k benchmark compared to SFT alone.
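A minimal sketch of the masked-image-modeling idea described in the abstract, written in PyTorch: patches of a single-channel SAR image are randomly masked, only the visible patches are encoded, and a light decoder reconstructs the masked patches under an MSE loss. The sizes, the 75% mask ratio, and the tiny encoder/decoder are illustrative placeholders, not the authors' configuration; in the paper's pipeline the pre-trained encoder would then serve as the detection backbone for supervised fine-tuning on SARDet-100k.

```python
# Illustrative MAE-style masked image modeling for single-channel SAR patches.
# Hyperparameters (patch size, mask ratio, dims) are placeholders, not the paper's.
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, mask_ratio=0.75):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        self.patch_dim = patch * patch * 1            # 1 channel for SAR amplitude
        self.patch, self.mask_ratio = patch, mask_ratio
        self.embed = nn.Linear(self.patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                     nn.Linear(dim, self.patch_dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def patchify(self, x):                            # (B,1,H,W) -> (B,N,patch_dim)
        B = x.shape[0]
        x = x.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        return x.reshape(B, -1, self.patch_dim)

    def forward(self, imgs):
        patches = self.patchify(imgs)                            # (B, N, D_p)
        tokens = self.embed(patches) + self.pos                  # (B, N, D)
        B, N, D = tokens.shape
        keep = int(N * (1 - self.mask_ratio))
        perm = torch.rand(B, N, device=imgs.device).argsort(dim=1)
        keep_idx = perm[:, :keep]                                # visible patch indices
        visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).repeat(1, 1, D))
        encoded = self.encoder(visible)                          # encode visible patches only
        # Scatter encoded tokens back; masked positions get the learned mask token.
        full = self.mask_token.expand(B, N, D).clone()
        full.scatter_(1, keep_idx.unsqueeze(-1).repeat(1, 1, D), encoded)
        pred = self.decoder(full + self.pos)                     # reconstruct all patches
        mask = torch.ones(B, N, device=imgs.device)
        mask.scatter_(1, keep_idx, 0.0)                          # 1 = masked (loss only here)
        loss = (((pred - patches) ** 2).mean(-1) * mask).sum() / mask.sum()
        return loss

# One illustrative pre-training step on random tensors standing in for SAR patches.
model = TinyMAE()
opt = torch.optim.AdamW(model.parameters(), lr=1.5e-4)
loss = model(torch.rand(4, 1, 224, 224))
loss.backward(); opt.step()
```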
Related papers
- SAR-W-MixMAE: SAR Foundation Model Training Using Backscatter Power Weighting [3.618534280726541]
Foundation model approaches such as masked auto-encoders (MAE) and their variants are now being successfully applied to satellite imagery.
Due to the difficulty of semantic labeling for dataset creation and the higher noise content relative to optical images, Synthetic Aperture Radar (SAR) data has not been explored much for foundation models.
In this work, we explore the masked auto-encoder, specifically MixMAE, on Sentinel-1 SAR images and its impact on SAR image classification tasks.
arXiv Detail & Related papers (2025-03-03T05:09:44Z)
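The SAR-W-MixMAE entry above names backscatter power weighting but does not spell it out; one plausible reading, sketched below as an assumption rather than the paper's implementation, is to weight the per-pixel MAE reconstruction error by normalized backscatter power so that strong scatterers contribute more to the objective.

```python
# Hypothetical per-pixel weighted reconstruction loss: pixels with higher
# (normalized) backscatter power contribute more to the MAE objective.
# This is an illustrative reading of the title, not the paper's formulation.
import torch

def weighted_masked_mse(pred, target, mask, eps=1e-6):
    """pred/target: (B, N, D) patch pixels; mask: (B, N) with 1 = masked patch."""
    power = target.abs()                                    # proxy for backscatter power
    w = power / (power.mean(dim=(1, 2), keepdim=True) + eps)
    per_patch = ((pred - target) ** 2 * w).mean(-1)         # (B, N)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

# Toy usage with random tensors standing in for Sentinel-1 patches.
pred, target = torch.rand(2, 196, 256), torch.rand(2, 196, 256)
mask = (torch.rand(2, 196) > 0.25).float()                  # ~75% of patches masked
print(weighted_masked_mse(pred, target, mask))
```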
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- One-step Generative Diffusion for Realistic Extreme Image Rescaling [47.89362819768323]
We propose a novel framework called One-Step Image Rescaling Diffusion (OSIRDiff) for extreme image rescaling.
OSIRDiff performs rescaling operations in the latent space of a pre-trained autoencoder.
It effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- X-Fake: Juggling Utility Evaluation and Explanation of Simulated SAR Images [49.546627070454456]
The distribution inconsistency between real and simulated data is the main obstacle that influences the utility of simulated SAR images.
For the first time, we propose a trustworthy utility evaluation framework with counterfactual explanation for simulated SAR images, denoted X-Fake.
The proposed framework is validated on four simulated SAR image datasets obtained from electromagnetic models and generative artificial intelligence approaches.
arXiv Detail & Related papers (2024-07-28T09:27:53Z)
- SAFE: a SAR Feature Extractor based on self-supervised learning and masked Siamese ViTs [5.961207817077044]
We propose a novel self-supervised learning framework based on masked Siamese Vision Transformers to create a General SAR Feature Extractor coined SAFE.
Our method leverages contrastive learning principles to train a model on unlabeled SAR data, extracting robust and generalizable features.
We introduce tailored data augmentation techniques specific to SAR imagery, such as sub-aperture decomposition and despeckling.
Our network competes with or surpasses other state-of-the-art methods in few-shot classification and segmentation tasks, even without being trained on the sensors used for the evaluation.
arXiv Detail & Related papers (2024-06-30T23:11:20Z)
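As a rough illustration of the contrastive principle mentioned in the SAFE entry above (not the paper's actual masked-Siamese objective or its sub-aperture/despeckling augmentations), the sketch below computes a standard NT-Xent loss between embeddings of two views of the same SAR batch produced by a shared encoder.

```python
# Generic two-view contrastive (NT-Xent) loss between an intact view and a
# masked/augmented view of the same SAR images; illustrative only, not SAFE's objective.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two views of the same batch of images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2B, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # ignore self-similarity
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings from a shared (Siamese) encoder applied to two views.
z_clean, z_masked = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z_clean, z_masked))
```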
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- On quantifying and improving realism of images generated with diffusion [50.37578424163951]
We propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image.
IRS is easily usable as a measure to classify a given image as real or fake.
We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN.
Our efforts have also led to the Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models.
arXiv Detail & Related papers (2023-09-26T08:32:55Z)
- Improved Difference Images for Change Detection Classifiers in SAR Imagery Using Deep Learning [0.0]
This paper proposes a new method of improving SAR image processing to produce higher-quality difference images for classification algorithms.
The method is built on a neural network-based mapping transformation function that produces artificial SAR images of a location under the requested acquisition conditions.
arXiv Detail & Related papers (2023-03-31T06:57:34Z)
- Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder, which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z)
- Contrastive Multiview Coding with Electro-optics for SAR Semantic Segmentation [0.6445605125467573]
We propose multi-modal representation learning for SAR semantic segmentation.
Unlike previous studies, our method jointly uses EO imagery, SAR imagery, and a label mask.
Several experiments show that our approach is superior to the existing methods in model performance, sample efficiency, and convergence speed.
arXiv Detail & Related papers (2021-08-31T23:55:41Z)
- A New Mask R-CNN Based Method for Improved Landslide Detection [54.7905160534631]
This paper presents a novel method of landslide detection that exploits Mask R-CNN's capability to identify an object's layout.
A data set of 160 elements containing landslide and non-landslide images is created.
The proposed algorithm can be potentially useful for land use planners and policy makers of hilly areas.
arXiv Detail & Related papers (2020-10-04T07:46:37Z)
- Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot Convolutional Neural Networks [30.410981386006394]
Despeckling is a crucial preliminary step in scene analysis algorithms.
The recent success of deep learning points toward a new generation of despeckling techniques.
We propose a self-supervised Bayesian despeckling method.
arXiv Detail & Related papers (2020-07-04T11:38:48Z)
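To illustrate the blind-spot idea named in the Speckle2Void entry above (a toy stand-in, not the paper's Bayesian formulation or architecture), the sketch below zeroes the centre weight of a convolution kernel so that each pixel is predicted only from its neighbours, which is what makes self-supervised training on the noisy image itself possible.

```python
# Toy blind-spot layer: the kernel's centre weight is zeroed, so each output pixel
# is predicted only from its neighbours. Illustrative only; not Speckle2Void itself.
import torch
import torch.nn as nn

class BlindSpotConv(nn.Conv2d):
    def forward(self, x):
        k = self.kernel_size[0]
        masked = self.weight.clone()
        masked[:, :, k // 2, k // 2] = 0.0                # blind spot at the centre
        return nn.functional.conv2d(x, masked, self.bias, padding=k // 2)

# Toy self-supervised step: predict each noisy pixel from its neighbours only.
# The trailing 1x1 conv preserves the blind-spot receptive field.
net = nn.Sequential(BlindSpotConv(1, 16, 3), nn.ReLU(), nn.Conv2d(16, 1, 1))
noisy = torch.rand(2, 1, 64, 64)                          # stand-in for a speckled SAR patch
loss = ((net(noisy) - noisy) ** 2).mean()
loss.backward()
```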