Lightweight Structure-aware Transformer Network for VHR Remote Sensing
Image Change Detection
- URL: http://arxiv.org/abs/2306.01988v1
- Date: Sat, 3 Jun 2023 03:21:18 GMT
- Title: Lightweight Structure-aware Transformer Network for VHR Remote Sensing
Image Change Detection
- Authors: Tao Lei, Yetong Xu, Hailong Ning, Zhiyong Lv, Chongdan Min, Yaochu Jin
and Asoke K. Nandi
- Abstract summary: This Letter proposes a Lightweight Structure-aware Transformer (LSAT) network for RS image CD.
First, a Cross-dimension Interactive Self-attention (CISA) module with linear complexity is designed to replace the vanilla self-attention in the visual Transformer.
Second, a Structure-aware Enhancement Module (SAEM) is designed to enhance difference features and edge detail information.
- Score: 15.391216316828354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Popular Transformer networks have been successfully applied to
remote sensing (RS) image change detection (CD) and achieve better results
than most convolutional neural networks (CNNs), but they still suffer from
two main problems. First, the computational complexity of the Transformer
grows quadratically with image spatial resolution, which is unfavorable for
very high-resolution (VHR) RS images. Second, these popular Transformer
networks tend to ignore the importance of fine-grained features, which
results in poor edge integrity and internal compactness for large changed
objects and leads to the loss of small changed objects. To address the above
issues, this Letter proposes a Lightweight Structure-aware Transformer (LSAT)
network for RS image CD. The proposed LSAT has two advantages. First, a
Cross-dimension Interactive Self-attention (CISA) module with linear complexity
is designed to replace the vanilla self-attention in the visual Transformer, which
effectively reduces the computational complexity while improving the feature
representation ability of the proposed LSAT. Second, a Structure-aware
Enhancement Module (SAEM) is designed to enhance difference features and edge
detail information, which can achieve double enhancement by difference
refinement and detail aggregation so as to obtain fine-grained features of
bi-temporal RS images. Experimental results show that the proposed LSAT
achieves significant improvement in detection accuracy and offers a better
tradeoff between accuracy and computational costs than most state-of-the-art CD
methods for VHR RS images.
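As a rough illustration of how self-attention cost can be made linear in the number of tokens, the sketch below uses the generic efficient-attention factorization (softmax applied separately to queries and keys) rather than the Letter's actual CISA design, which the abstract does not specify; the module and parameter names are invented for illustration only.

```python
# Minimal PyTorch sketch of linear-complexity self-attention. This is NOT the
# paper's CISA module; it follows the generic "efficient attention"
# factorization, which never materializes the N x N attention map.
import torch
import torch.nn as nn

class LinearSelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0, "dim must be divisible by heads"
        self.heads = heads
        self.to_qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        # Reshape each to (batch, heads, head_dim, n), with n = h * w tokens.
        q, k, v = (t.reshape(b, self.heads, c // self.heads, h * w)
                   for t in (q, k, v))
        # Normalizing q over features and k over tokens lets us build a
        # (head_dim x head_dim) context matrix first, so the overall cost is
        # O(n * d^2): linear in the number of pixels instead of quadratic.
        q = q.softmax(dim=-2)
        k = k.softmax(dim=-1)
        context = torch.einsum("bhdn,bhen->bhde", k, v)
        out = torch.einsum("bhde,bhdn->bhen", context, q)
        return self.proj(out.reshape(b, c, h, w))
```

For a 256x256 VHR feature map (65,536 tokens), vanilla self-attention would form a 65,536 x 65,536 attention map, while this factorization only ever holds a head_dim x head_dim context matrix per head.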
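The SAEM's internals are likewise not given in the abstract; the hypothetical sketch below only illustrates the stated idea of double enhancement, refining the bi-temporal difference and aggregating high-frequency edge detail, with all names invented for illustration.

```python
# Hypothetical sketch of "difference refinement + detail aggregation" on
# bi-temporal features. This is NOT the paper's SAEM, only an illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureAwareEnhance(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        self.gate = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.Sigmoid())
        # Fixed depthwise Laplacian filter as a cheap edge-detail extractor.
        lap = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer("lap", lap.view(1, 1, 3, 3).repeat(dim, 1, 1, 1))
        self.fuse = nn.Conv2d(dim * 2, dim, kernel_size=1)

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        diff = torch.abs(f1 - f2)                          # bi-temporal difference
        diff = diff + self.gate(diff) * self.refine(diff)  # difference refinement
        edges = F.conv2d(diff, self.lap, padding=1, groups=diff.shape[1])
        return self.fuse(torch.cat([diff, edges], dim=1))  # detail aggregation
```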
Related papers
- Task-Aware Dynamic Transformer for Efficient Arbitrary-Scale Image Super-Resolution [8.78015409192613]
Arbitrary-scale super-resolution (ASSR) aims to learn a single model for image super-resolution at arbitrary magnifying scales.
Existing ASSR networks typically comprise an off-the-shelf scale-agnostic feature extractor and an arbitrary scale upsampler.
We propose a Task-Aware Dynamic Transformer (TADT) as an input-adaptive feature extractor for efficient image ASSR.
arXiv Detail & Related papers (2024-08-16T13:35:52Z)
- Adaptive Step-size Perception Unfolding Network with Non-local Hybrid Attention for Hyperspectral Image Reconstruction [0.39134031118910273]
We propose an adaptive step-size perception unfolding network (ASPUN), a deep unfolding network based on the FISTA algorithm.
In addition, we design a Non-local Hybrid Attention Transformer (NHAT) module to fully leverage the receptive-field advantage of the Transformer.
Experimental results show that our ASPUN is superior to the existing SOTA algorithms and achieves the best performance.
arXiv Detail & Related papers (2024-07-04T16:09:52Z)
- Relating CNN-Transformer Fusion Network for Change Detection [23.025190360146635]
RCTNet introduces an early fusion backbone to exploit both spatial and temporal features.
Experiments demonstrate RCTNet's clear superiority over traditional RS image CD methods.
arXiv Detail & Related papers (2024-07-03T14:58:40Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- SRTransGAN: Image Super-Resolution using Transformer based Generative Adversarial Network [16.243363392717434]
We propose a transformer-based encoder-decoder network as a generator to generate 2x and 4x super-resolved images.
The proposed SRTransGAN outperforms existing methods by 4.38% on average in terms of PSNR and SSIM scores.
arXiv Detail & Related papers (2023-12-04T16:22:39Z)
- PTSR: Patch Translator for Image Super-Resolution [16.243363392717434]
We propose a patch translator for image super-resolution (PTSR) to address this problem.
The proposed PTSR is a transformer-based GAN network with no convolution operation.
We introduce a novel patch translator module for regenerating the improved patches utilising multi-head attention.
arXiv Detail & Related papers (2023-10-20T01:45:00Z)
- Contextual Learning in Fourier Complex Field for VHR Remote Sensing Images [64.84260544255477]
Transformer-based models have demonstrated outstanding potential for learning high-order contextual relationships from natural images of typical resolution (224x224 pixels).
We propose a complex self-attention (CSA) mechanism to model high-order contextual information with less than half the computation of naive SA.
By stacking various layers of CSA blocks, we propose the Fourier Complex Transformer (FCT) model to learn global contextual information from VHR aerial images.
arXiv Detail & Related papers (2022-10-28T08:13:33Z)
- Vision Transformer with Convolutions Architecture Search [72.70461709267497]
We propose an architecture search method, Vision Transformer with Convolutions Architecture Search (VTCAS).
The high-performance backbone network searched by VTCAS introduces the desirable features of convolutional neural networks into the Transformer architecture.
It enhances the robustness of the neural network for object recognition, especially in low-illumination indoor scenes.
arXiv Detail & Related papers (2022-03-20T02:59:51Z)
- Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution [50.10987776141901]
Recent vision transformers along with self-attention have achieved promising results on various computer vision tasks.
We introduce an effective hybrid architecture for super-resolution (SR) tasks, which leverages local features from CNNs and long-range dependencies captured by transformers.
Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
arXiv Detail & Related papers (2022-03-15T06:52:25Z)
- HRFormer: High-Resolution Transformer for Dense Prediction [99.6060997466614]
We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks.
We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet).
We demonstrate the effectiveness of the High-Resolution Transformer on both human pose estimation and semantic segmentation tasks.
arXiv Detail & Related papers (2021-10-18T15:37:58Z)
- Robust Reference-based Super-Resolution via C2-Matching [77.51610726936657]
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image by introducing an additional high-resolution (HR) reference image.
Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images.
We propose C2-Matching, which produces explicit, robust matching across transformations and resolutions.
arXiv Detail & Related papers (2021-06-03T16:40:36Z)