Fully Polarimetric SAR and Single-Polarization SAR Image Fusion Network
- URL: http://arxiv.org/abs/2107.08355v1
- Date: Sun, 18 Jul 2021 03:51:04 GMT
- Title: Fully Polarimetric SAR and Single-Polarization SAR Image Fusion Network
- Authors: Liupeng Lin, Jie Li, Huanfeng Shen, Lingli Zhao, Qiangqiang Yuan,
Xinghua Li
- Abstract summary: We propose a fusion network for fully polarimetric synthetic aperture radar (PolSAR) images and single-polarization synthetic aperture radar (SinSAR) images.
Experiments on polarimetric decomposition and polarimetric signature show that it maintains polarimetric information well.
- Score: 8.227845719405051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data fusion technology aims to aggregate the characteristics of different
data and obtain products that combine the advantages of multiple data sources. To solve
the problem of reduced resolution of PolSAR images due to system limitations, we propose
a fusion network for fully polarimetric synthetic aperture radar (PolSAR) images and
single-polarization synthetic aperture radar (SinSAR) images that generates
high-resolution PolSAR (HR-PolSAR) images. To take advantage of the polarimetric
information of the low-resolution PolSAR (LR-PolSAR) image and the spatial information
of the high-resolution single-polarization SAR (HR-SinSAR) image, we propose a fusion
framework that jointly processes the LR-PolSAR and HR-SinSAR images, and design a
cross-attention mechanism to extract features from the joint input data. In addition,
based on the physical imaging mechanism, we design a PolSAR polarimetric loss function
to constrain network training. The experimental results confirm the superiority of the
fusion network over traditional algorithms. The average PSNR is increased by more than
3.6 dB, and the average MAE is reduced to less than 0.07. Experiments on polarimetric
decomposition and polarimetric signature show that the network preserves polarimetric
information well.
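The abstract's cross-attention mechanism can be illustrated with a minimal sketch: queries are drawn from the LR-PolSAR feature tokens, while keys and values come from the HR-SinSAR feature tokens, so each polarimetric feature attends to spatial detail in the single-polarization image. The projection matrices, token counts, and feature dimensions below are illustrative assumptions (random here, learned in the actual network), not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pol_feat, sin_feat, d_k=32, seed=0):
    """Scaled dot-product cross-attention: queries from LR-PolSAR
    features attend to keys/values from HR-SinSAR features.
    Projections are random stand-ins for learned weights."""
    rng = np.random.default_rng(seed)
    d_pol, d_sin = pol_feat.shape[-1], sin_feat.shape[-1]
    Wq = rng.standard_normal((d_pol, d_k)) / np.sqrt(d_pol)
    Wk = rng.standard_normal((d_sin, d_k)) / np.sqrt(d_sin)
    Wv = rng.standard_normal((d_sin, d_k)) / np.sqrt(d_sin)
    Q = pol_feat @ Wq                                   # (N_pol, d_k)
    K = sin_feat @ Wk                                   # (N_sin, d_k)
    V = sin_feat @ Wv                                   # (N_sin, d_k)
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)     # (N_pol, N_sin)
    return attn @ V                                     # (N_pol, d_k)

pol = np.random.default_rng(1).standard_normal((64, 16))   # LR-PolSAR tokens
sin = np.random.default_rng(2).standard_normal((256, 8))   # HR-SinSAR tokens
fused = cross_attention(pol, sin)
print(fused.shape)  # (64, 32)
```

In the paper's setting, the fused features would then be decoded into the HR-PolSAR output; this sketch only shows the attention step.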
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation [5.578820789388206]
This paper introduces a conditional image-to-image translation approach based on Brownian Bridge Diffusion Model (BBDM)
We conducted comprehensive experiments on the MSAW dataset, a collection of paired 0.5 m Very-High-Resolution (VHR) SAR and optical images.
arXiv Detail & Related papers (2024-08-15T05:43:46Z) - SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image
Classification [1.2349871196144497]
Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics.
In this study, a novel three-branch fusion of complex-valued CNN, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification.
The results indicate that the proposed approach demonstrates improvements in overall accuracy, with a 1.3% and 0.8% enhancement for the AIRSAR datasets and a 0.5% improvement for the ESAR dataset.
arXiv Detail & Related papers (2024-02-27T16:46:21Z) - PolMERLIN: Self-Supervised Polarimetric Complex SAR Image Despeckling
with Masked Networks [2.580765958706854]
Despeckling is a crucial noise reduction task in improving the quality of synthetic aperture radar (SAR) images.
Existing methods deal solely with single-polarization images and cannot handle the multi-polarization images captured by modern satellites.
We propose a novel self-supervised despeckling approach called channel masking, which exploits the relationship between polarizations.
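The channel-masking idea can be sketched in a few lines: hide one polarimetric channel of the input and treat it as the prediction target, so a network can learn to reconstruct it from the remaining polarizations without clean labels. The channel names, array shapes, and helper function below are hypothetical illustrations, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 4-channel polarimetric stack (e.g. HH, HV, VH, VV), 32x32 pixels.
polsar = rng.standard_normal((4, 32, 32))

def mask_channel(img, ch):
    """Zero out one polarization channel; the hidden channel becomes
    the self-supervised training target."""
    inp = img.copy()
    target = img[ch].copy()
    inp[ch] = 0.0
    return inp, target

inp, target = mask_channel(polsar, ch=1)
print(inp[1].sum(), target.shape)  # 0.0 (32, 32)
```

A despeckling network trained this way exploits the statistical correlation between polarizations rather than requiring noise-free reference images.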
arXiv Detail & Related papers (2024-01-15T07:06:36Z) - ESSAformer: Efficient Transformer for Hyperspectral Image
Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z) - Super-Resolution by Predicting Offsets: An Ultra-Efficient
Super-Resolution Network for Rasterized Images [47.684307267915024]
We present a new method for real-time SR for computer graphics, namely Super-Resolution by Predicting Offsets (SRPO)
Our algorithm divides the image into two parts for processing, i.e., sharp edges and flatter areas.
Experiments show that the proposed SRPO can achieve superior visual effects at a smaller computational cost than the existing state-of-the-art methods.
arXiv Detail & Related papers (2022-10-09T08:16:36Z) - The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion [14.45289690639374]
We publish the QXS-SAROPT dataset to foster deep learning research in SAR-optical data fusion.
We show exemplary results for two representative applications, namely SAR-optical image matching and SAR ship detection boosted by cross-modal information from optical images.
arXiv Detail & Related papers (2021-03-15T10:22:46Z) - Boosting Image Super-Resolution Via Fusion of Complementary Information
Captured by Multi-Modal Sensors [21.264746234523678]
Image Super-Resolution (SR) provides a promising technique to enhance the image quality of low-resolution optical sensors.
In this paper, we attempt to leverage complementary information from a low-cost channel (visible/depth) to boost image quality of an expensive channel (thermal) using fewer parameters.
arXiv Detail & Related papers (2020-12-07T02:15:28Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the main reasons that existing fusion methods cannot be applied directly.
The experiments are conducted on the nuScenes dataset, which is one of the first datasets which features Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z) - Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral
Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.