Recognition of polar lows in Sentinel-1 SAR images with deep learning
- URL: http://arxiv.org/abs/2203.16401v1
- Date: Wed, 30 Mar 2022 15:32:39 GMT
- Title: Recognition of polar lows in Sentinel-1 SAR images with deep learning
- Authors: Jakob Grahn, Filippo Maria Bianchi
- Abstract summary: We introduce a novel dataset consisting of Sentinel-1 images labeled as positive, representing a maritime mesocyclone, or negative, representing a normal sea state.
The dataset is used to train a deep learning model to classify the labeled images.
The model yields an F-1 score of 0.95, indicating that polar lows can be consistently detected from SAR images.
- Score: 5.571369922847262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore the possibility of detecting polar lows in C-band
SAR images by means of deep learning. Specifically, we introduce a novel
dataset consisting of Sentinel-1 images labeled as positive, representing a
maritime mesocyclone, or negative, representing a normal sea state. The dataset
is constructed using the ERA5 dataset as baseline and it consists of 2004
annotated images. To our knowledge, this is the first dataset of its kind to be
publicly released. The dataset is used to train a deep learning model to
classify the labeled images. Evaluated on an independent test set, the model
yields an F-1 score of 0.95, indicating that polar lows can be consistently
detected from SAR images. Interpretability techniques applied to the deep
learning model reveal that atmospheric fronts and cyclonic eyes are key
features in the classification. Moreover, experimental results show that the
model is accurate even if: (i) such features are significantly cropped due to
the limited swath width of the SAR, (ii) the features are partly covered by sea
ice and (iii) land is covering significant parts of the images. By evaluating
the model performance on multiple input image resolutions (pixel sizes of 500m,
1km and 2km), it is found that higher resolutions yield the best performance.
This emphasises the potential of using high resolution sensors like SAR for
detecting polar lows, as compared to conventionally used sensors such as
scatterometers.
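
The task described above is a standard binary image-classification problem, so a small worked example may help make the setup concrete. The following is a minimal, illustrative sketch only, not the architecture or training procedure used in the paper: it assumes single-channel SAR patches already resampled to a fixed grid (128x128 is an arbitrary choice), uses a toy CNN in PyTorch, and shows how the reported F-1 metric would be computed on thresholded predictions.

```python
# Minimal illustrative sketch only -- not the model or training setup from
# the paper. Assumes single-channel SAR patches with binary labels
# (1 = polar low / maritime mesocyclone, 0 = normal sea state).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.metrics import f1_score

class SmallSarClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # one logit: polar low vs. normal sea

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Random tensors stand in for labeled Sentinel-1 patches; 128x128 is an
# assumed patch size, not a value taken from the paper.
x = torch.randn(64, 1, 128, 128)
y = torch.randint(0, 2, (64,)).float()
loader = DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

model = SmallSarClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

model.train()
for _ in range(2):  # toy number of epochs
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb).squeeze(1), yb)
        loss.backward()
        optimizer.step()

# The paper reports an F-1 score of 0.95 on an independent test set; here
# we only show how such a score is computed from thresholded predictions.
model.eval()
with torch.no_grad():
    preds = (torch.sigmoid(model(x).squeeze(1)) > 0.5).int()
print("F-1 score:", f1_score(y.int().numpy(), preds.numpy()))
```

In practice the choice of pixel spacing (500m, 1km or 2km, as studied in the paper) determines the patch size and, per the abstract, the achievable performance.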
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z)
- SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification [1.2349871196144497]
Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics.
In this study, a novel three-branch fusion of complex-valued CNN, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification.
The results indicate that the proposed approach demonstrates improvements in overall accuracy, with 1.3% and 0.8% enhancements on the AIRSAR datasets and a 0.5% improvement on the ESAR dataset.
arXiv Detail & Related papers (2024-02-27T16:46:21Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on statistical observations that the heavily degraded regions of detector-friendly (DFUI) images and underwater images have evident feature distribution gaps.
Our method with higher speeds and less parameters still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- A Semi-supervised Object Detection Algorithm for Underwater Imagery [10.017195276758455]
We propose to treat artificial objects as anomalies and detect them through a semi-supervised framework based on Variational Autoencoders (VAEs).
We develop a method which clusters image data in a learned low-dimensional latent space and extracts images that are likely to contain anomalous features (a sketch of this latent-space clustering idea appears at the end of this list).
We demonstrate that by applying both methods on large image datasets, human operators can be shown candidate anomalous samples with a low false positive rate to identify objects of interest.
arXiv Detail & Related papers (2023-06-07T23:40:04Z)
- A Large Scale Homography Benchmark [52.55694707744518]
We present a large-scale dataset of Planes in 3D, Pi3D, of roughly 1000 planes observed in 10 000 images from the 1DSfM dataset.
We also present HEB, a large-scale homography estimation benchmark leveraging Pi3D.
arXiv Detail & Related papers (2023-02-20T14:18:09Z)
- A Dataset with Multibeam Forward-Looking Sonar for Underwater Object Detection [0.0]
Multibeam forward-looking sonar (MFLS) plays an important role in underwater detection.
There are several challenges to the research on underwater object detection with MFLS.
We present a novel dataset, consisting of over 9000 MFLS images captured using Tritech Gemini 1200ik sonar.
arXiv Detail & Related papers (2022-12-01T08:26:03Z)
- Image-to-Height Domain Translation for Synthetic Aperture Sonar [3.2662392450935416]
In this work, we focus on collection geometry with respect to isotropic and anisotropic textures.
The low grazing angle of the collection geometry, combined with orientation of the sonar path relative to anisotropic texture, poses a significant challenge for image-alignment and other multi-view scene understanding frameworks.
arXiv Detail & Related papers (2021-12-12T19:53:14Z)
- Deep-Learning-Based Single-Image Height Reconstruction from Very-High-Resolution SAR Intensity Data [1.7894377200944511]
We present the first-ever demonstration of deep learning-based single image height prediction for the other important sensor modality in remote sensing: synthetic aperture radar (SAR) data.
Besides the adaptation of a convolutional neural network (CNN) architecture for SAR intensity images, we present a workflow for the generation of training data.
Since we put a particular emphasis on transferability, we are able to confirm that deep learning-based single-image height estimation is not only possible, but also transfers quite well to unseen data.
arXiv Detail & Related papers (2021-11-03T08:20:03Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the main reasons that prevents existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, which is one of the first datasets which features Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
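
As referenced in the semi-supervised underwater detection entry above, the core idea there is to embed images in a low-dimensional latent space, cluster the embeddings, and surface members of sparse clusters as candidate anomalies for a human operator. The sketch below only illustrates that general idea and is not the authors' VAE pipeline: a PCA projection stands in for the learned encoder, and the cluster count and the small-cluster rule are assumptions made for the example.

```python
# Illustrative sketch of latent-space clustering for anomaly candidates --
# not the authors' VAE-based method. PCA stands in for a learned encoder;
# the cluster count and "small cluster = anomalous" rule are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Dummy stand-in for flattened image patches: 500 "normal" samples plus
# 10 injected outliers drawn from a shifted distribution.
normal = rng.normal(0.0, 1.0, size=(500, 32 * 32))
outliers = rng.normal(4.0, 1.0, size=(10, 32 * 32))
images = np.vstack([normal, outliers])

# 1) Embed the images in a low-dimensional latent space.
latents = PCA(n_components=8).fit_transform(images)

# 2) Cluster the embeddings.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(latents)

# 3) Flag members of unusually small clusters as candidate anomalies
#    to be shown to a human operator.
sizes = np.bincount(labels, minlength=5)
anomalous_clusters = np.where(sizes < 0.05 * len(images))[0]
candidates = np.where(np.isin(labels, anomalous_clusters))[0]
print("candidate anomalous samples:", candidates)
```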