Sea ice detection using concurrent multispectral and synthetic aperture radar imagery
- URL: http://arxiv.org/abs/2401.06009v1
- Date: Thu, 11 Jan 2024 16:14:30 GMT
- Title: Sea ice detection using concurrent multispectral and synthetic aperture radar imagery
- Authors: Martin S J Rogers, Maria Fox, Andrew Fleming, Louisa van Zeeland,
Jeremy Wilkinson, and J. Scott Hosking
- Abstract summary: This paper proposes a new tool trained using concurrent multispectral Visible and SAR imagery for sea Ice Detection (ViSual_IceD).
ViSual_IceD is a convolutional neural network (CNN) that builds on the classic U-Net architecture by containing two parallel encoders.
As the spatial-temporal coverage of MSI and SAR imagery continues to increase, ViSual_IceD provides a new opportunity for accurate sea ice coverage detection in polar regions.
- Score: 1.0400484498567675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthetic Aperture Radar (SAR) imagery is the primary data type used for sea
ice mapping due to its spatio-temporal coverage and the ability to detect sea
ice independent of cloud and lighting conditions. Automatic sea ice detection
using SAR imagery remains problematic due to the presence of ambiguous signal
and noise within the image. Conversely, ice and water are easily
distinguishable using multispectral imagery (MSI), but in the polar regions the
ocean's surface is often occluded by cloud or the sun may not appear above the
horizon for many months. To address some of these limitations, this paper
proposes a new tool trained using concurrent multispectral Visible and SAR
imagery for sea Ice Detection (ViSual_IceD). ViSual_IceD is a convolutional
neural network (CNN) that builds on the classic U-Net architecture by
containing two parallel encoder stages, enabling the fusion and concatenation
of MSI and SAR imagery of different spatial resolutions. The
performance of ViSual_IceD is compared with U-Net models trained using
concatenated MSI and SAR imagery, as well as models trained exclusively on MSI
or SAR imagery. ViSual_IceD outperforms the other networks, with an F1 score
1.60 percentage points higher than the next best network, and results indicate that
ViSual_IceD is selective in the image type it uses during image segmentation.
Outputs from ViSual_IceD are compared to sea ice concentration products
derived from the AMSR2 Passive Microwave (PMW) sensor. Results highlight how
ViSual_IceD is a useful tool to use in conjunction with PMW data, particularly
in coastal regions. As the spatio-temporal coverage of MSI and SAR imagery
continues to increase, ViSual_IceD provides a new opportunity for robust,
accurate sea ice coverage detection in polar regions.
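The core idea of the dual-encoder design, fusing feature maps from two modalities once they are brought to a common spatial grid, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: average pooling stands in for the convolutional encoder stages, and the patch sizes, band counts, and 10 m / 40 m resolutions are assumptions chosen for clarity.

```python
import numpy as np

def encode(img, pool):
    """Toy encoder: average-pool by `pool` to mimic downsampling conv stages."""
    c, h, w = img.shape
    return img.reshape(c, h // pool, pool, w // pool, pool).mean(axis=(2, 4))

# MSI patch: 4 bands at an assumed 10 m resolution -> 256x256 pixels
msi = np.random.rand(4, 256, 256)
# SAR patch of the same area: 2 polarisations at an assumed 40 m -> 64x64 pixels
sar = np.random.rand(2, 64, 64)

msi_feat = encode(msi, pool=4)   # (4, 64, 64): downsampled to the SAR grid
sar_feat = encode(sar, pool=1)   # (2, 64, 64): already at the coarser grid

# Fuse the two branches by channel-wise concatenation at matched resolution;
# a decoder would then segment ice vs. water from the fused features
fused = np.concatenate([msi_feat, sar_feat], axis=0)
print(fused.shape)  # (6, 64, 64)
```

The point of the two parallel branches is that each modality is encoded at its native resolution before fusion, rather than being resampled and stacked at the input as in a single-encoder U-Net.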
Related papers
- Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data [0.08192907805418582]
This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation.
One branch integrates detailed textures from aerial imagery using a UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone.
The other branch captures complex spatio-temporal dynamics from Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE).
arXiv Detail & Related papers (2024-10-01T07:50:37Z)
- Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks [0.0]
This research contributes to the field of remote sensing by bridging the gap between SAR and EO imagery, offering a novel tool for enhanced data interpretation.
The results show significant improvements in interpretability, making SAR data more accessible for analysts familiar with EO imagery.
arXiv Detail & Related papers (2024-09-07T14:31:46Z)
- Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
The visual imaging quality inevitably suffers from several kinds of degradations under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z)
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images have evident feature distribution gaps.
Our method still performs better than transformer-based detectors while running faster and using fewer parameters.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- SAR-ShipNet: SAR-Ship Detection Neural Network via Bidirectional Coordinate Attention and Multi-resolution Feature Fusion [7.323279438948967]
This paper studies a practically meaningful ship detection problem from synthetic aperture radar (SAR) images by the neural network.
We propose a SAR-ship detection neural network (SAR-ShipNet for short) that builds on CenterNet with newly developed Bidirectional Coordinate Attention (BCA) and Multi-resolution Feature Fusion (MRF) modules.
Experimental results on the public SAR-Ship dataset show that our SAR-ShipNet achieves competitive advantages in both speed and accuracy.
arXiv Detail & Related papers (2022-03-29T12:27:04Z)
- Learning a Sensor-invariant Embedding of Satellite Data: A Case Study for Lake Ice Monitoring [19.72060218456938]
We learn a joint, sensor-invariant embedding within a deep neural network.
Our application problem is the monitoring of lake ice on Alpine lakes.
By fusing satellite data, we map lake ice at a temporal resolution of 1.5 days.
arXiv Detail & Related papers (2021-07-19T18:11:55Z)
- Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion [56.85837052421469]
Estimating scene geometry from data obtained with cost-effective sensors is key for robots and self-driving cars.
In this paper, we study the problem of predicting dense depth from a single RGB image with optional sparse measurements from low-cost active depth sensors.
We introduce Sparse Auxiliary Networks (SANs), a new module enabling monodepth networks to perform both depth prediction and completion.
arXiv Detail & Related papers (2021-03-30T21:22:26Z)
- Visualization of Deep Transfer Learning In SAR Imagery [0.0]
We consider transfer learning to leverage deep features from a network trained on an EO ships dataset.
By exploring the network activations in the form of class-activation maps, we gain insight on how a deep network interprets a new modality.
arXiv Detail & Related papers (2021-03-20T00:16:15Z)
- Dense Attention Fluid Network for Salient Object Detection in Optical Remote Sensing Images [193.77450545067967]
We propose an end-to-end Dense Attention Fluid Network (DAFNet) for salient object detection in optical remote sensing images (RSIs).
A Global Context-aware Attention (GCA) module is proposed to adaptively capture long-range semantic context relationships.
We construct a new and challenging optical RSI dataset for SOD that contains 2,000 images with pixel-wise saliency annotations.
arXiv Detail & Related papers (2020-11-26T06:14:10Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the main reasons preventing existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.