High-Resolution Cloud Removal with Multi-Modal and Multi-Resolution Data
Fusion: A New Baseline and Benchmark
- URL: http://arxiv.org/abs/2301.03432v1
- Date: Mon, 9 Jan 2023 15:31:28 GMT
- Authors: Fang Xu, Yilei Shi, Patrick Ebel, Wen Yang and Xiao Xiang Zhu
- Abstract summary: We introduce Planet-CR, a benchmark dataset for high-resolution cloud removal with multi-modal and multi-resolution data fusion.
The proposed Align-CR method gives the best performance in both visual recovery quality and semantic recovery quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce Planet-CR, a benchmark dataset for
high-resolution cloud removal with multi-modal and multi-resolution data
fusion. Planet-CR is the first public dataset for cloud removal to feature
globally sampled high resolution optical observations, in combination with
paired radar measurements as well as pixel-level land cover annotations. It
provides a solid basis for exhaustive evaluation in terms of generating
visually pleasing textures and semantically meaningful structures. With this
dataset, we consider the problem of cloud removal in high-resolution optical
remote sensing imagery by integrating multi-modal and multi-resolution
information. Existing multi-modal data fusion methods, which assume the image
pairs are aligned pixel-to-pixel, are therefore not appropriate for this
problem. To address this, we design a new baseline named Align-CR that performs
cloud removal in high-resolution optical images guided by low-resolution SAR
images. It implicitly aligns the multi-modal and multi-resolution data during
the reconstruction process to improve cloud removal performance. The
experimental results demonstrate that the proposed Align-CR method gives the
best performance in both visual recovery quality and semantic recovery
quality. The project is available at https://github.com/zhu-xlab/Planet-CR,
and we hope it will inspire future research.
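To make the fusion setting concrete, here is a purely illustrative sketch of the underlying idea: cloudy pixels in a high-resolution optical image are filled using an upsampled low-resolution SAR image as guidance. The function names, the nearest-neighbor upsampling, and the global linear fit are hypothetical simplifications chosen for clarity; Align-CR itself instead aligns and fuses the modalities inside a deep network during reconstruction.

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbor upsampling of a 2-D array by an integer factor
    (toy stand-in for resolution matching between SAR and optical data)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def naive_guided_fill(optical, sar_lr, cloud_mask, scale=3):
    """Toy SAR-guided cloud removal: fit optical ~ a * sar + b on the
    cloud-free pixels, then predict the cloudy pixels from the upsampled
    SAR backscatter. Illustrative only; not the Align-CR method."""
    sar_hr = upsample_nearest(sar_lr, scale)
    clear = ~cloud_mask
    # Least-squares fit of a global linear mapping on cloud-free pixels.
    A = np.stack([sar_hr[clear], np.ones(clear.sum())], axis=1)
    coef, *_ = np.linalg.lstsq(A, optical[clear], rcond=None)
    filled = optical.copy()
    filled[cloud_mask] = coef[0] * sar_hr[cloud_mask] + coef[1]
    return filled
```

A global linear fit cannot recover texture or handle misalignment, which is precisely why the paper argues for implicit alignment learned inside the reconstruction network.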
Related papers
- Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote
Sensing Imagery [48.14610248492785]
Cloud layers severely compromise the quality and effectiveness of optical remote sensing (RS) images.
Existing deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties in accurately reconstructing the original visual authenticity and detailed semantic content of the images.
This work proposes enhancements at the data and methodology fronts to tackle this challenge.
arXiv Detail & Related papers (2024-01-25T13:14:17Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
- Multimodal Deep Unfolding for Guided Image Super-Resolution [23.48305854574444]
Deep learning methods rely on training data to learn an end-to-end mapping from a low-resolution input to a high-resolution output.
We propose a multimodal deep learning design that incorporates sparse priors and allows the effective integration of information from another image modality into the network architecture.
Our solution relies on a novel deep unfolding operator, performing steps similar to an iterative algorithm for convolutional sparse coding with side information.
arXiv Detail & Related papers (2020-01-21T14:41:53Z)