SIPSA-Net: Shift-Invariant Pan Sharpening with Moving Object Alignment for Satellite Imagery
- URL: http://arxiv.org/abs/2105.02400v1
- Date: Thu, 6 May 2021 02:27:50 GMT
- Title: SIPSA-Net: Shift-Invariant Pan Sharpening with Moving Object Alignment for Satellite Imagery
- Authors: Jaehyup Lee, Soomin Seo and Munchurl Kim
- Abstract summary: Pan-sharpening is the process of merging a high-resolution (HR) panchromatic (PAN) image with its corresponding low-resolution (LR) multi-spectral (MS) image to create a high-resolution multi-spectral (HR-MS), pan-sharpened image.
Due to the sensors' different locations, characteristics and acquisition times, PAN and MS image pairs often exhibit varying amounts of misalignment.
We propose shift-invariant pan-sharpening with moving object alignment (SIPSA-Net), the first method to account for such large misalignment of moving object regions in pan-sharpening.
- Score: 36.24121979886052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pan-sharpening is the process of merging a high-resolution (HR) panchromatic
(PAN) image with its corresponding low-resolution (LR) multi-spectral (MS) image
to create a high-resolution multi-spectral (HR-MS), pan-sharpened image. However,
due to the sensors' different locations, characteristics and acquisition times,
PAN and MS image pairs often exhibit varying amounts of misalignment. Conventional
deep-learning-based methods that were trained with such misaligned PAN-MS image
pairs suffer from diverse artifacts such as double-edge and blur artifacts in
the resultant PAN-sharpened images. In this paper, we propose a novel framework
called shift-invariant pan-sharpening with moving object alignment (SIPSA-Net)
which is the first method to take into account such large misalignment of
moving object regions in pan-sharpening. SIPSA-Net has a feature alignment
module (FAM) that can adjust one feature to be aligned to another feature, even
between the two different PAN and MS domains. For better alignment in
pan-sharpened images, a shift-invariant spectral loss is newly designed, which
ignores the inherent misalignment in the original MS input, thereby having the
same effect as optimizing the spectral loss with a well-aligned MS image.
Extensive experimental results show that our SIPSA-Net can generate
pan-sharpened images with remarkable improvements in terms of visual quality
and alignment, compared to the state-of-the-art methods.
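The core idea of the shift-invariant spectral loss is that the network should not be penalized for the inherent PAN-MS misalignment baked into the reference. A minimal sketch of one way such a loss could work is a min-over-shifts formulation: compare the prediction against every small spatial shift of the MS reference and keep the lowest error. This is an illustrative sketch under that assumption, not SIPSA-Net's exact loss; the function name and shift radius are hypothetical.

```python
import numpy as np

def shift_invariant_spectral_loss(pred, ref, max_shift=2):
    """Sketch of a shift-invariant spectral loss (min-over-shifts).

    pred, ref: (H, W, C) arrays; ref is the possibly misaligned MS reference.
    The prediction is compared against every integer shift of the reference
    within +/- max_shift pixels, and the minimum MSE is returned, so a
    well-aligned prediction is not penalized for the reference's misalignment.
    """
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(ref, (dy, dx), axis=(0, 1))  # circular shift of reference
            best = min(best, float(np.mean((pred - shifted) ** 2)))
    return best

# Toy check: a prediction that is a pure 1-pixel shift of the reference
# incurs (near-)zero shift-invariant loss but a large plain MSE.
rng = np.random.default_rng(0)
ref = rng.random((8, 8, 3))
pred = np.roll(ref, (1, 0), axis=(0, 1))
si_loss = shift_invariant_spectral_loss(pred, ref)
plain_mse = float(np.mean((pred - ref) ** 2))
```

In training, this would replace the standard spectral term so that gradients only reflect errors beyond the best small rigid shift; the paper's actual formulation may operate on features or use a soft minimum.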
Related papers
- PAN-Crafter: Learning Modality-Consistent Alignment for PAN-Sharpening [20.43260906326048]
We propose PAN-Crafter, a modality-consistent alignment framework. At its core, Modality-Adaptive Reconstruction (MARs) enables a single network to jointly reconstruct HRMS and PAN images. Experiments on multiple benchmark datasets demonstrate that our PAN-Crafter outperforms the most recent state-of-the-art methods in all metrics.
arXiv Detail & Related papers (2025-05-29T11:46:21Z) - Feature Alignment with Equivariant Convolutions for Burst Image Super-Resolution [52.55429225242423]
We propose a novel framework for Burst Image Super-Resolution (BISR), featuring an equivariant convolution-based alignment.
This enables the alignment transformation to be learned via explicit supervision in the image domain and easily applied in the feature domain.
Experiments on BISR benchmarks show the superior performance of our approach in both quantitative metrics and visual quality.
arXiv Detail & Related papers (2025-03-11T11:13:10Z) - Multi-Head Attention Residual Unfolded Network for Model-Based Pansharpening [2.874893537471256]
Unfolding fusion methods integrate the powerful representation capabilities of deep learning with the robustness of model-based approaches.
In this paper, we propose a model-based deep unfolded method for satellite image fusion.
Experimental results on PRISMA, Quickbird, and WorldView2 datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2024-09-04T13:05:00Z) - CMT: Cross Modulation Transformer with Hybrid Loss for Pansharpening [14.459280238141849]
Pansharpening aims to enhance remote sensing image (RSI) quality by merging high-resolution panchromatic (PAN) with multispectral (MS) images.
Prior techniques struggled to optimally fuse PAN and MS images for enhanced spatial and spectral information.
We present the Cross Modulation Transformer (CMT), a pioneering method that modifies the attention mechanism.
arXiv Detail & Related papers (2024-04-01T13:55:44Z) - Parallax-Tolerant Unsupervised Deep Image Stitching [57.76737888499145]
We propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique.
First, we propose a robust and flexible warp to model the image registration from global homography to local thin-plate spline motion.
To further eliminate the parallax artifacts, we propose to composite the stitched image seamlessly by unsupervised learning for seam-driven composition masks.
arXiv Detail & Related papers (2023-02-16T10:40:55Z) - Panchromatic and Multispectral Image Fusion via Alternating Reverse Filtering Network [23.74842833472348]
Pan-sharpening refers to super-resolving the low-resolution (LR) multi-spectral (MS) images in the spatial domain.
We present a simple yet effective alternating reverse filtering network for pan-sharpening.
arXiv Detail & Related papers (2022-10-15T03:56:05Z) - PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z) - PanFormer: a Transformer Based Model for Pan-sharpening [49.45405879193866]
Pan-sharpening aims at producing a high-resolution (HR) multi-spectral (MS) image from a low-resolution (LR) multi-spectral (MS) image and its corresponding panchromatic (PAN) image acquired by the same satellite.
Inspired by a new fashion in recent deep learning community, we propose a novel Transformer based model for pan-sharpening.
arXiv Detail & Related papers (2022-03-06T09:22:20Z) - HyperTransformer: A Textural and Spectral Feature Fusion Transformer for Pansharpening [60.89777029184023]
Pansharpening aims to fuse a registered high-resolution panchromatic image (PAN) with a low-resolution hyperspectral image (LR-HSI) to generate an enhanced HSI with high spectral and spatial resolution.
Existing pansharpening approaches neglect to use an attention mechanism to transfer HR texture features from PAN to LR-HSI features, resulting in spatial and spectral distortions.
We present a novel attention mechanism for pansharpening called HyperTransformer, in which features of LR-HSI and PAN are formulated as queries and keys in a transformer, respectively.
arXiv Detail & Related papers (2022-03-04T18:59:08Z) - LDP-Net: An Unsupervised Pansharpening Network Based on Learnable
Degradation Processes [18.139096037746672]
We propose a novel unsupervised network based on learnable degradation processes, dubbed as LDP-Net.
A reblurring block and a graying block are designed to learn the corresponding degradation processes, respectively.
Experiments on Worldview2 and Worldview3 images demonstrate that our proposed LDP-Net can fuse PAN and LRMS images effectively without the help of HRMS samples.
arXiv Detail & Related papers (2021-11-24T13:21:22Z) - Image Inpainting with Edge-guided Learnable Bidirectional Attention Maps [85.67745220834718]
We present an edge-guided learnable bidirectional attention map (Edge-LBAM) for improving image inpainting of irregular holes.
Our Edge-LBAM method contains dual procedures, including structure-aware mask-updating guided by predicted edges.
Extensive experiments show that our Edge-LBAM is effective in generating coherent image structures and preventing color discrepancy and blurriness.
arXiv Detail & Related papers (2021-04-25T07:25:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.