High-Resolution Cloud Removal with Multi-Modal and Multi-Resolution Data
Fusion: A New Baseline and Benchmark
- URL: http://arxiv.org/abs/2301.03432v1
- Date: Mon, 9 Jan 2023 15:31:28 GMT
- Title: High-Resolution Cloud Removal with Multi-Modal and Multi-Resolution Data
Fusion: A New Baseline and Benchmark
- Authors: Fang Xu, Yilei Shi, Patrick Ebel, Wen Yang and Xiao Xiang Zhu
- Abstract summary: We introduce Planet-CR, a benchmark dataset for high-resolution cloud removal with multi-modal and multi-resolution data fusion.
The proposed Align-CR method gives the best performance in both visual recovery quality and semantic recovery quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce Planet-CR, a benchmark dataset for
high-resolution cloud removal with multi-modal and multi-resolution data
fusion. Planet-CR is the first public dataset for cloud removal to feature
globally sampled high-resolution optical observations, in combination with
paired radar measurements as well as pixel-level land cover annotations. It
provides a solid basis for exhaustive evaluation in terms of generating visually
pleasing textures and semantically meaningful structures. With this dataset, we
consider the problem of cloud removal in high-resolution optical remote sensing
imagery by integrating multi-modal and multi-resolution information. Existing
multi-modal data-fusion-based methods, which assume the image pairs are aligned
pixel-to-pixel, are therefore not appropriate for this problem. To this end, we
design a new baseline named Align-CR to perform low-resolution SAR image-guided
cloud removal in high-resolution optical images. It implicitly aligns the
multi-modal and multi-resolution data during the reconstruction process to
promote the cloud removal performance. The experimental results demonstrate
that the proposed Align-CR method gives the best performance in both visual
recovery quality and semantic recovery quality. The project is available at
https://github.com/zhu-xlab/Planet-CR, and we hope it will inspire future
research.
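The core idea of SAR-guided cloud removal — using a co-registered low-resolution SAR band to fill in cloud-occluded high-resolution optical pixels — can be illustrated with a deliberately naive NumPy sketch. This is not the Align-CR architecture, which learns the cross-modal and cross-resolution alignment implicitly inside a reconstruction network; the function name, the nearest-neighbour upsampling, and the moment-matching step below are all hypothetical stand-ins for the learned components.

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbour upsampling of a (H, W) array by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def naive_fusion_cloud_fill(optical_hr, cloud_mask, sar_lr, factor):
    """Fill cloud-masked pixels of a high-res optical band with values
    derived from a co-located low-res SAR band (illustrative baseline only).

    optical_hr : (H, W) high-resolution optical band, cloud-contaminated
    cloud_mask : (H, W) boolean, True where clouds occlude the scene
    sar_lr     : (H // factor, W // factor) low-resolution SAR backscatter
    factor     : integer resolution ratio between optical and SAR grids
    """
    # Bring SAR onto the optical grid; a learned method would instead align
    # features implicitly rather than resampling pixels.
    sar_up = upsample_nearest(sar_lr, factor)

    # Crude radiometric mapping: match SAR statistics to the statistics of
    # the visible (cloud-free) optical pixels. This stands in for the learned
    # cross-modal translation in fusion networks.
    clear = ~cloud_mask
    sar_norm = (sar_up - sar_up.mean()) / (sar_up.std() + 1e-8)
    sar_mapped = sar_norm * optical_hr[clear].std() + optical_hr[clear].mean()

    # Composite: keep clear optical pixels, substitute mapped SAR under cloud.
    return np.where(cloud_mask, sar_mapped, optical_hr)
```

A usage sketch: with a 4x resolution gap (as between Planet optical and Sentinel-1-like SAR), `naive_fusion_cloud_fill(optical, mask, sar, 4)` leaves clear pixels untouched and replaces only the masked region, which makes the limitations of pixel-level fusion without alignment easy to see.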
Related papers
- Multi-view Aggregation Network for Dichotomous Image Segmentation [76.75904424539543]
Dichotomous Image Segmentation (DIS) has recently emerged as a task for high-precision object segmentation from high-resolution natural images.
Existing methods rely on tedious multiple encoder-decoder streams and stages to gradually complete the global localization and local refinement.
Inspired by this, we model DIS as a multi-view object perception problem and provide a parsimonious multi-view aggregation network (MVANet).
Experiments on the popular DIS-5K dataset show that our MVANet significantly outperforms state-of-the-art methods in both accuracy and speed.
arXiv Detail & Related papers (2024-04-11T03:00:00Z) - Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote Sensing Imagery [48.14610248492785]
Cloud layers severely compromise the quality and effectiveness of optical remote sensing (RS) images.
Existing deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties in accurately reconstructing the original visual authenticity and detailed semantic content of the images.
This work proposes enhancements at the data and methodology fronts to tackle this challenge.
arXiv Detail & Related papers (2024-01-25T13:14:17Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - A new public Alsat-2B dataset for single-image super-resolution [1.284647943889634]
The paper introduces a novel public remote sensing dataset (Alsat-2B) of low and high spatial resolution images (10m and 2.5m respectively) for the single-image super-resolution task.
The high-resolution images are obtained through pan-sharpening.
The obtained results reveal that the proposed scheme is promising and highlight the challenges in the dataset.
arXiv Detail & Related papers (2021-03-21T10:47:38Z) - Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z) - Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.