The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion
- URL: http://arxiv.org/abs/2103.08259v1
- Date: Mon, 15 Mar 2021 10:22:46 GMT
- Title: The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion
- Authors: Meiyu Huang, Yao Xu, Lixin Qian, Weili Shi, Yaqin Zhang, Wei Bao, Nan
Wang, Xuejiao Liu, Xueshuang Xiang
- Abstract summary: We publish the QXS-SAROPT dataset to foster deep learning research in SAR-optical data fusion.
We show exemplary results for two representative applications, namely SAR-optical image matching and SAR ship detection boosted by cross-modal information from optical images.
- Score: 14.45289690639374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning techniques have made an increasing impact on the field of
remote sensing. However, deep neural network-based fusion of multimodal data
from different remote sensors with heterogeneous characteristics has not been
fully explored, due to the lack of large amounts of perfectly
aligned multi-sensor image data with diverse scenes of high resolution,
especially for synthetic aperture radar (SAR) data and optical imagery. In this
paper, we publish the QXS-SAROPT dataset to foster deep learning research in
SAR-optical data fusion. QXS-SAROPT comprises 20,000 pairs of corresponding
image patches collected from three port cities: San Diego, Shanghai, and
Qingdao, acquired by the SAR satellite GaoFen-3 and optical imagery from
Google Earth. Besides a detailed description of the dataset, we show exemplary results
for two representative applications, namely SAR-optical image matching and SAR
ship detection boosted by cross-modal information from optical images. Since
QXS-SAROPT is a large open dataset with multiple scenes of the highest
resolution of this kind, we believe it will support further developments in the
field of deep learning based SAR-optical data fusion for remote sensing.
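To make the paired structure concrete, here is a minimal sketch of a PyTorch Dataset that yields aligned SAR-optical patch pairs. The folder layout (parallel sar/ and opt/ directories whose matching filenames identify corresponding patches) is a hypothetical convention for illustration, not the dataset's published format.

```python
import os
from glob import glob

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class PairedSarOptDataset(Dataset):
    """Yields aligned (SAR, optical) patch pairs from parallel folders."""

    def __init__(self, root, transform=None):
        # Assumed layout: <root>/sar/xxx.png and <root>/opt/xxx.png share
        # filenames for corresponding patches (illustrative convention only).
        self.sar_paths = sorted(glob(os.path.join(root, "sar", "*.png")))
        self.opt_paths = sorted(glob(os.path.join(root, "opt", "*.png")))
        assert len(self.sar_paths) == len(self.opt_paths), "unpaired patches"
        self.transform = transform

    def __len__(self):
        return len(self.sar_paths)

    def __getitem__(self, idx):
        # SAR patches are single-channel intensity; optical patches are RGB.
        sar = np.asarray(Image.open(self.sar_paths[idx]).convert("L"), np.float32)
        opt = np.asarray(Image.open(self.opt_paths[idx]).convert("RGB"), np.float32)
        sar = sar[None, ...] / 255.0          # (1, H, W)
        opt = opt.transpose(2, 0, 1) / 255.0  # (3, H, W)
        if self.transform is not None:
            sar, opt = self.transform(sar, opt)
        return sar, opt
```

Wrapped in a torch.utils.data.DataLoader, such a dataset feeds matching and cross-modal detection experiments directly; any registration or radiometric normalization specific to GaoFen-3 intensities would go in the transform.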
Related papers
- Text-Guided Coarse-to-Fine Fusion Network for Robust Remote Sensing Visual Question Answering [26.8129265632403]
Current Remote Sensing Visual Question Answering (RSVQA) methods are limited by the imaging mechanisms of optical sensors.
We propose a Text-guided Coarse-to-Fine Fusion Network (TGFNet) to improve RSVQA performance.
We create the first large-scale benchmark dataset for evaluating optical-SAR RSVQA methods.
arXiv Detail & Related papers (2024-11-24T09:48:03Z)
- Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks [0.0]
The results show significant improvements in interpretability, making SAR data more accessible for analysts familiar with EO imagery.
Our research contributes to the field of remote sensing by bridging the gap between SAR and EO imagery, offering a novel tool for enhanced data interpretation.
arXiv Detail & Related papers (2024-09-07T14:31:46Z)
- 3MOS: Multi-sources, Multi-resolutions, and Multi-scenes dataset for Optical-SAR image matching [6.13702551312774]
We introduce a large-scale Multi-sources, Multi-resolutions, and Multi-scenes dataset for Optical-SAR image matching (3MOS); see the illustrative matching sketch after this list.
It consists of 155K optical-SAR image pairs, including SAR data from six commercial satellites, with resolutions ranging from 1.25m to 12.5m.
The data has been classified into eight scenes including urban, rural, plains, hills, mountains, water, desert, and frozen earth.
arXiv Detail & Related papers (2024-04-01T00:31:11Z)
- SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z)
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation [55.9217962930169]
We present S2ADet, an object detector that harnesses the rich spectral and spatial complementary information inherent in hyperspectral images.
S2ADet surpasses existing state-of-the-art methods, achieving robust and reliable results.
arXiv Detail & Related papers (2023-06-14T09:01:50Z)
- xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture Radar Imagery [52.67592123500567]
Unsustainable fishing practices worldwide pose a major threat to marine resources and ecosystems.
It is now possible to automate detection of dark vessels day or night, under all-weather conditions.
xView3-SAR consists of nearly 1,000 analysis-ready SAR images from the Sentinel-1 mission.
arXiv Detail & Related papers (2022-06-02T06:53:45Z)
- Deep-Learning-Based Single-Image Height Reconstruction from Very-High-Resolution SAR Intensity Data [1.7894377200944511]
We present the first-ever demonstration of deep learning-based single image height prediction for the other important sensor modality in remote sensing: synthetic aperture radar (SAR) data.
Besides the adaptation of a convolutional neural network (CNN) architecture for SAR intensity images, we present a workflow for the generation of training data.
Since we put a particular emphasis on transferability, we are able to confirm that deep learning-based single-image height estimation is not only possible, but also transfers quite well to unseen data.
arXiv Detail & Related papers (2021-11-03T08:20:03Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise existing in Radar measurements is one of the main key reasons that prevents one from applying the existing fusion methods.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data [69.37597254841052]
We propose a novel cross-modal deep-learning framework called X-ModalNet.
X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed by high-level features on the top of the network.
We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.
arXiv Detail & Related papers (2020-06-24T15:29:41Z)
- SpaceNet 6: Multi-Sensor All Weather Mapping Dataset [13.715388432549373]
We present an open Multi-Sensor All Weather Mapping (MSAW) dataset and challenge.
MSAW covers 120 km² over multiple overlapping collects and is annotated with over 48,000 unique building footprint labels.
We present a baseline and benchmark for building footprint extraction with SAR data, finding that state-of-the-art segmentation models pre-trained on optical data and then trained on SAR outperform models trained on SAR data alone.
arXiv Detail & Related papers (2020-04-14T13:43:11Z)
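Several entries above (3MOS, and QXS-SAROPT itself) concern SAR-optical image matching. A common baseline for that task is a pseudo-siamese network: each modality gets its own encoder, since SAR speckle statistics differ too sharply from optical statistics for shared weights, and patch correspondence is scored by embedding similarity. The sketch below is illustrative only, assuming 256x256 single-channel SAR and RGB optical patches; it is not the architecture of any paper listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def encoder(in_ch):
    # Small convolutional branch; each modality gets its own weights
    # (illustrative depth and widths, not taken from any listed paper).
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, 128),
    )


class PseudoSiameseMatcher(nn.Module):
    """Scores whether a SAR patch and an optical patch depict the same scene."""

    def __init__(self):
        super().__init__()
        self.sar_branch = encoder(1)   # single-channel SAR intensity
        self.opt_branch = encoder(3)   # RGB optical

    def forward(self, sar, opt):
        # L2-normalized embeddings; cosine similarity as the match score.
        f_sar = F.normalize(self.sar_branch(sar), dim=1)
        f_opt = F.normalize(self.opt_branch(opt), dim=1)
        return (f_sar * f_opt).sum(dim=1)  # in [-1, 1]


# Toy usage: a batch of 4 patch pairs with random values.
model = PseudoSiameseMatcher()
score = model(torch.randn(4, 1, 256, 256), torch.randn(4, 3, 256, 256))
print(score.shape)  # torch.Size([4])
```

In practice such a matcher would be trained with a contrastive or binary cross-entropy objective over positive (corresponding) and negative (shuffled) pairs drawn from a paired dataset like QXS-SAROPT.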
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.