Attentive Contextual Attention for Cloud Removal
- URL: http://arxiv.org/abs/2411.13042v1
- Date: Wed, 20 Nov 2024 05:16:31 GMT
- Title: Attentive Contextual Attention for Cloud Removal
- Authors: Wenli Huang, Ye Deng, Yang Wu, Jinjun Wang
- Abstract summary: Cloud cover can significantly hinder the use of remote sensing images for Earth observation.
Deep learning strategies have shown strong potential in restoring cloud-obscured areas.
We introduce a novel approach named Attentive Contextual Attention (AC-Attention).
- Score: 16.273117614996586
- License:
- Abstract: Cloud cover can significantly hinder the use of remote sensing images for Earth observation, prompting urgent advancements in cloud removal technology. Recently, deep learning strategies have shown strong potential in restoring cloud-obscured areas. These methods utilize convolution to extract intricate local features and attention mechanisms to gather long-range information, improving the overall comprehension of the scene. However, a common drawback of these approaches is that the resulting images often suffer from blurriness, artifacts, and inconsistencies. This is partly because attention mechanisms apply weights to all features based on generalized similarity scores, which can inadvertently introduce noise and irrelevant details from cloud-covered areas. To overcome this limitation and better capture relevant distant context, we introduce a novel approach named Attentive Contextual Attention (AC-Attention). This method enhances conventional attention mechanisms by dynamically learning data-driven attentive selection scores, enabling it to filter out noise and irrelevant features effectively. By integrating the AC-Attention module into the DSen2-CR cloud removal framework, we significantly improve the model's ability to capture essential distant information, leading to more effective cloud removal. Our extensive evaluation of various datasets shows that our method outperforms existing ones regarding image reconstruction quality. Additionally, we conducted ablation studies by integrating AC-Attention into multiple existing methods and widely used network architectures. These studies demonstrate the effectiveness and adaptability of AC-Attention and reveal its ability to focus on relevant features, thereby improving the overall performance of the networks. The code is available at \url{https://github.com/huangwenwenlili/ACA-CRNet}.
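The core idea described in the abstract — standard attention weights every feature by a generalized similarity score, while AC-Attention additionally learns a data-driven selection score that suppresses noisy or irrelevant (e.g. cloud-covered) features — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the gate parameters `w` and `b` stand in for the learned selection mechanism, and all names here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_selection_attention(Q, K, V, w=4.0, b=-1.0):
    """Scaled dot-product attention with a learned attentive
    selection gate (simplified sketch of the AC-Attention idea).

    Q, K, V: (n, d) arrays. w, b: scalar gate parameters; in the
    actual model the selection scores are learned from data, here
    they are fixed constants for illustration.
    """
    d = Q.shape[-1]
    sim = Q @ K.T / np.sqrt(d)                    # generalized similarity scores
    gate = 1.0 / (1.0 + np.exp(-(w * sim + b)))   # selection score in (0, 1) per pair
    attn = softmax(sim + np.log(gate + 1e-9))     # low-relevance pairs pushed toward 0
    return attn @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attentive_selection_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Compared with plain softmax attention, the multiplicative gate lets weakly similar key positions be filtered out almost entirely rather than merely down-weighted, which is the behavior the abstract attributes to AC-Attention's attentive selection scores.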
Related papers
- Point Cloud Understanding via Attention-Driven Contrastive Learning [64.65145700121442]
Transformer-based models have advanced point cloud understanding by leveraging self-attention mechanisms.
PointACL is an attention-driven contrastive learning framework designed to address these limitations.
Our method employs an attention-driven dynamic masking strategy that guides the model to focus on under-attended regions.
arXiv Detail & Related papers (2024-11-22T05:41:00Z) - Improving Apple Object Detection with Occlusion-Enhanced Distillation [1.0049237739132246]
Apples growing in natural environments often face severe visual obstructions from leaves and branches.
We introduce a technique called "Occlusion-Enhanced Distillation" (OED) to regularize the learning of semantically aligned features on occluded datasets.
Our method significantly outperforms current state-of-the-art techniques through extensive comparative experiments.
arXiv Detail & Related papers (2024-09-03T03:11:48Z) - Distribution-aware Interactive Attention Network and Large-scale Cloud Recognition Benchmark on FY-4A Satellite Image [24.09239785062109]
We develop a novel dataset for accurate cloud recognition.
We use domain adaptation methods to align 70,419 image-label pairs in terms of projection, temporal resolution, and spatial resolution.
We also introduce a Distribution-aware Interactive-Attention Network (DIAnet), which preserves pixel-level details through a high-resolution branch and a parallel cross-branch.
arXiv Detail & Related papers (2024-01-06T09:58:09Z) - Typhoon Intensity Prediction with Vision Transformer [51.84456610977905]
We introduce "Typhoon Intensity Transformer" (Tint) to predict typhoon intensity accurately across space and time.
Tint uses self-attention mechanisms with global receptive fields per layer.
Experiments on a publicly available typhoon benchmark validate the efficacy of Tint.
arXiv Detail & Related papers (2023-11-28T03:11:33Z) - UnCRtainTS: Uncertainty Quantification for Cloud Removal in Optical Satellite Time Series [19.32220113046804]
We introduce UnCRtainTS, a method for multi-temporal cloud removal combining a novel attention-based architecture.
We show how the well-calibrated predicted uncertainties enable a precise control of the reconstruction quality.
arXiv Detail & Related papers (2023-04-11T19:27:18Z) - Interactive Feature Embedding for Infrared and Visible Image Fusion [94.77188069479155]
General deep learning-based methods for infrared and visible image fusion rely on unsupervised mechanisms to retain vital information.
We propose a novel interactive feature embedding in self-supervised learning framework for infrared and visible image fusion.
arXiv Detail & Related papers (2022-11-09T13:34:42Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - Video Salient Object Detection via Contrastive Features and Attention Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-03T17:40:32Z) - Unlocking Pixels for Reinforcement Learning via Implicit Attention [61.666538764049854]
We make use of new efficient attention algorithms, recently shown to be highly effective for Transformers.
This allows our attention-based controllers to scale to larger visual inputs, and facilitate the use of smaller patches.
In addition, we propose a new efficient algorithm approximating softmax attention with what we call hybrid random features.
arXiv Detail & Related papers (2021-02-08T17:00:26Z) - Hybrid Multiple Attention Network for Semantic Segmentation in Aerial Images [24.35779077001839]
We propose a novel attention-based framework named Hybrid Multiple Attention Network (HMANet) to adaptively capture global correlations.
We introduce a simple yet effective region shuffle attention (RSA) module to reduce feature redundancy and improve the efficiency of the self-attention mechanism.
arXiv Detail & Related papers (2020-01-09T07:47:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.