SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image
Classification
- URL: http://arxiv.org/abs/2402.17672v1
- Date: Tue, 27 Feb 2024 16:46:21 GMT
- Title: SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image
Classification
- Authors: Mohammed Q. Alkhatib, M. Sami Zitouni, Mina Al-Saad, Nour Aburaed, and
Hussain Al-Ahmad
- Abstract summary: Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics.
In this study, a novel three-branch fusion of complex-valued CNN, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification.
The results indicate that the proposed approach demonstrates improvements in overall accuracy, with a 1.3% and 0.8% enhancement for the AIRSAR datasets and a 0.5% improvement for the ESAR dataset.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Polarimetric synthetic aperture radar (PolSAR) images encompass valuable
information that can facilitate extensive land cover interpretation and
generate diverse output products. Extracting meaningful features from PolSAR
data poses challenges distinct from those encountered in optical imagery. Deep
learning (DL) methods offer effective solutions for overcoming these challenges
in PolSAR feature extraction. Convolutional neural networks (CNNs) play a
crucial role in capturing PolSAR image characteristics by leveraging kernel
capabilities to consider local information and the complex-valued nature of
PolSAR data. In this study, a novel three-branch fusion of complex-valued CNN,
named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for
PolSAR image classification. To validate the performance of the proposed
method, classification results are compared against multiple state-of-the-art
approaches using the airborne synthetic aperture radar (AIRSAR) datasets of
Flevoland and San Francisco, as well as the ESAR Oberpfaffenhofen dataset. The
results indicate that the proposed approach demonstrates improvements in
overall accuracy, with a 1.3% and 0.8% enhancement for the AIRSAR datasets and a
0.5% improvement for the ESAR dataset. Analyses conducted on the Flevoland data
underscore the effectiveness of the SDF2Net model, revealing a promising
overall accuracy of 96.01% even with only a 1% sampling ratio.
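The complex-valued convolution underlying such networks can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the single-channel, valid-mode, stride-1 setup is an assumption made for brevity:

```python
import numpy as np

def complex_conv2d(x, w):
    """Valid-mode 2-D sliding-window filtering (CNN cross-correlation
    convention) on complex-valued input and kernel.

    Complex-valued CNNs apply the multiply-accumulate directly in the
    complex domain: (a+ib)(c+id) = (ac - bd) + i(ad + bc), so the real
    and imaginary parts of input and kernel are cross-combined, which
    lets the network exploit both amplitude and phase of PolSAR data.
    """
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.complex128)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # complex multiply-accumulate over the kernel window
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# Toy complex-valued patch, standing in for one PolSAR channel
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
w = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
y = complex_conv2d(x, w)
print(y.shape)  # (3, 3)
```

A three-branch design such as SDF2Net would stack layers like this at different depths and fuse their feature maps before classification; the exact branch depths and fusion operator are specified in the paper itself.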
Related papers
- X-Fake: Juggling Utility Evaluation and Explanation of Simulated SAR Images [49.546627070454456]
The distribution inconsistency between real and simulated data is the main obstacle that influences the utility of simulated SAR images.
We propose, for the first time, a trustworthy utility evaluation framework with counterfactual explanation for simulated SAR images, denoted X-Fake.
The proposed framework is validated on four simulated SAR image datasets obtained from electromagnetic models and generative artificial intelligence approaches.
arXiv Detail & Related papers (2024-07-28T09:27:53Z) - Heterogeneous Network Based Contrastive Learning Method for PolSAR Land Cover Classification [18.37842655634498]
Supervised learning (SL) requires a large amount of labeled PolSAR data with high quality to achieve better performance.
This article proposes a Heterogeneous Network based Contrastive Learning method (HCLNet).
It aims to learn high-level representation from unlabeled PolSAR data for few-shot classification according to multi-features and superpixels.
arXiv Detail & Related papers (2024-03-29T01:05:23Z) - SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z) - Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for
Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual
Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered data are polluted by artifacts or only contain minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View
Representation [7.907504142396784]
This study combines SAR imaging mechanisms with neural networks to propose a novel NeRF model for SAR image generation.
SAR-NeRF is constructed to learn the distribution of attenuation coefficients and scattering intensities of voxels.
It is found that a SAR-NeRF augmented dataset can significantly improve SAR target classification performance under a few-shot learning setup.
arXiv Detail & Related papers (2023-07-11T07:37:56Z) - SAR-ShipNet: SAR-Ship Detection Neural Network via Bidirectional
Coordinate Attention and Multi-resolution Feature Fusion [7.323279438948967]
This paper studies a practically meaningful ship detection problem from synthetic aperture radar (SAR) images by the neural network.
We propose a SAR-ship detection neural network (called SAR-ShipNet for short) by newly developing Bidirectional Coordinate Attention (BCA) and Multi-resolution Feature Fusion (MRF) based on CenterNet.
Experimental results on the public SAR-Ship dataset show that our SAR-ShipNet achieves competitive advantages in both speed and accuracy.
arXiv Detail & Related papers (2022-03-29T12:27:04Z) - Context-Preserving Instance-Level Augmentation and Deformable
Convolution Networks for SAR Ship Detection [50.53262868498824]
Shape deformation of targets in SAR image due to random orientation and partial information loss is an essential challenge in SAR ship detection.
We propose a data augmentation method to train a deep network that is robust to partial information loss within the targets.
arXiv Detail & Related papers (2022-02-14T07:01:01Z) - Fully Polarimetric SAR and Single-Polarization SAR Image Fusion Network [8.227845719405051]
We propose a fusion network for fully polarimetric synthetic aperture radar (PolSAR) images and single-polarization SAR (SinSAR) images.
Experiments on polarimetric decomposition and polarimetric signature show that it maintains polarimetric information well.
arXiv Detail & Related papers (2021-07-18T03:51:04Z) - The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion [14.45289690639374]
We publish the QXS-SAROPT dataset to foster deep learning research in SAR-optical data fusion.
We show exemplary results for two representative applications, namely SAR-optical image matching and SAR ship detection boosted by cross-modal information from optical images.
arXiv Detail & Related papers (2021-03-15T10:22:46Z) - Lightweight Single-Image Super-Resolution Network with Attentive
Auxiliary Feature Learning [73.75457731689858]
We develop a computation-efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR.
Experimental results on a large-scale dataset demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.