Fully Polarimetric SAR and Single-Polarization SAR Image Fusion Network
- URL: http://arxiv.org/abs/2107.08355v1
- Date: Sun, 18 Jul 2021 03:51:04 GMT
- Title: Fully Polarimetric SAR and Single-Polarization SAR Image Fusion Network
- Authors: Liupeng Lin, Jie Li, Huanfeng Shen, Lingli Zhao, Qiangqiang Yuan,
Xinghua Li
- Abstract summary: We propose a fusion network for fully polarimetric synthetic aperture radar (PolSAR) images and single-polarization SAR (SinSAR) images.
Experiments on polarimetric decomposition and polarimetric signatures show that the proposed network preserves polarimetric information well.
- Score: 8.227845719405051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data fusion technology aims to aggregate the characteristics of different
data sources and obtain products that combine their advantages. To solve the problem
of reduced resolution in PolSAR images due to system limitations, we propose a
fusion network for fully polarimetric synthetic aperture radar (PolSAR) images and
single-polarization SAR (SinSAR) images
to generate high-resolution PolSAR (HR-PolSAR) images. To take advantage of the
polarimetric information of the low-resolution PolSAR (LR-PolSAR) image and the
spatial information of the high-resolution single-polarization SAR (HR-SinSAR)
image, we propose a fusion framework for joint LR-PolSAR image and HR-SinSAR
image and design a cross-attention mechanism to extract features from the joint
input data. In addition, based on the physical imaging mechanism, we design a
PolSAR polarimetric loss function to constrain network training. The
experimental results confirm the superiority of the fusion network over traditional
algorithms: the average PSNR is increased by more than 3.6 dB, and the average
MAE is reduced to less than 0.07. Experiments on polarimetric decomposition and
polarimetric signatures show that the proposed network preserves polarimetric information well.
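The abstract describes a cross-attention mechanism that extracts features jointly from the LR-PolSAR and HR-SinSAR inputs. The paper does not spell out its exact form, so the following is a minimal, hypothetical numpy sketch of the general idea: queries come from the polarimetric branch, keys and values from the single-polarization branch, so spatial detail is attended into the polarimetric features. All names, dimensions, and the random projections (standing in for learned weights) are assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def cross_attention(pol_feats, sin_feats, d_k=None):
    """Toy cross-attention: pol_feats (N, d) LR-PolSAR tokens attend over
    sin_feats (M, d) HR-SinSAR tokens. Returns (N, d_k) fused features."""
    d = pol_feats.shape[1]
    d_k = d_k or d
    rng = np.random.default_rng(0)
    # Random projections stand in for learned weight matrices W_q, W_k, W_v.
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = pol_feats @ W_q, sin_feats @ W_k, sin_feats @ W_v
    scores = Q @ K.T / np.sqrt(d_k)                 # (N, M) cross affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V                              # spatial detail injected

pol = np.random.default_rng(1).standard_normal((16, 8))  # toy LR-PolSAR tokens
sin = np.random.default_rng(2).standard_normal((64, 8))  # toy HR-SinSAR tokens
fused = cross_attention(pol, sin)
print(fused.shape)  # -> (16, 8)
```

In the paper's setting the query/key roles would operate on convolutional feature maps rather than flat token matrices, but the asymmetry is the point: each polarimetric location aggregates the HR-SinSAR features most relevant to it.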
Related papers
- Multi-Resolution SAR and Optical Remote Sensing Image Registration Methods: A Review, Datasets, and Future Perspectives [13.749888089968373]
Synthetic Aperture Radar (SAR) and optical image registration is essential for remote sensing data fusion.
As image resolution increases, fine SAR textures become more significant, leading to alignment issues and 3D spatial discrepancies.
The MultiResSAR dataset was created, containing over 10k pairs of multi-source, multi-resolution, and multi-scene SAR and optical images.
arXiv Detail & Related papers (2025-02-03T02:51:30Z)
- Cloud Removal With PolSAR-Optical Data Fusion Using A Two-Flow Residual Network [9.529237717137121]
Reconstructing cloud-free optical images has become a major task in recent years.
This paper presents a two-flow Polarimetric Synthetic Aperture Radar (PolSAR)-Optical data fusion cloud removal algorithm.
arXiv Detail & Related papers (2025-01-14T07:35:14Z)
- PolSAM: Polarimetric Scattering Mechanism Informed Segment Anything Model [76.95536611263356]
PolSAR data presents unique challenges due to its rich and complex characteristics.
Existing data representations, such as complex-valued data, polarimetric features, and amplitude images, are widely used.
Most feature extraction networks for PolSAR are small, limiting their ability to capture features effectively.
We propose the Polarimetric Scattering Mechanism-Informed SAM (PolSAM), an enhanced Segment Anything Model (SAM) that integrates domain-specific scattering characteristics and a novel prompt generation strategy.
arXiv Detail & Related papers (2024-12-17T09:59:53Z)
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation [5.578820789388206]
This paper introduces a conditional image-to-image translation approach based on the Brownian Bridge Diffusion Model (BBDM).
We conducted comprehensive experiments on the MSAW dataset, a collection of paired SAR and optical images at 0.5 m Very-High-Resolution (VHR).
arXiv Detail & Related papers (2024-08-15T05:43:46Z)
- PolMERLIN: Self-Supervised Polarimetric Complex SAR Image Despeckling with Masked Networks [2.580765958706854]
Despeckling is a crucial noise reduction task in improving the quality of synthetic aperture radar (SAR) images.
Existing methods deal solely with single-polarization images and cannot handle the multi-polarization images captured by modern satellites.
We propose a novel self-supervised despeckling approach called channel masking, which exploits the relationship between polarizations.
arXiv Detail & Related papers (2024-01-15T07:06:36Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Boosting Image Super-Resolution Via Fusion of Complementary Information Captured by Multi-Modal Sensors [21.264746234523678]
Image Super-Resolution (SR) provides a promising technique to enhance the image quality of low-resolution optical sensors.
In this paper, we attempt to leverage complementary information from a low-cost channel (visible/depth) to boost image quality of an expensive channel (thermal) using fewer parameters.
arXiv Detail & Related papers (2020-12-07T02:15:28Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the main reasons that prevent existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.