Efficient Transformer based Method for Remote Sensing Image Change Detection
- URL: http://arxiv.org/abs/2103.00208v1
- Date: Sat, 27 Feb 2021 13:08:46 GMT
- Title: Efficient Transformer based Method for Remote Sensing Image Change Detection
- Authors: Hao Chen, Zipeng Qi and Zhenwei Shi
- Abstract summary: High-resolution remote sensing CD remains challenging due to the complexity of objects in the scene.
We propose a bitemporal image transformer (BiT) to efficiently and effectively model contexts within the spatial-temporal domain.
The BiT-based model significantly outperforms the purely convolutional baseline while requiring roughly a third of its computational cost and model parameters.
- Score: 17.553240434628087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern change detection (CD) has achieved remarkable success by the powerful
discriminative ability of deep convolutions. However, high-resolution remote
sensing CD remains challenging due to the complexity of objects in the scene.
The objects with the same semantic concept show distinct spectral behaviors at
different times and different spatial locations. Modeling interactions between
global semantic concepts is critical for change recognition. Most recent change
detection pipelines using pure convolutions are still struggling to relate
long-range concepts in space-time. Non-local self-attention approaches show
promising performance by modeling dense relations among pixels, yet they are
computationally inefficient. In this paper, we propose a bitemporal image
transformer (BiT) to efficiently and effectively model contexts within the
spatial-temporal domain. Our intuition is that the high-level concepts of the
change of interest can be represented by a few visual words, i.e., semantic
tokens. To achieve this, we express the bitemporal image as a small set of
tokens and use a transformer encoder to model contexts in this compact
token-based space-time. The learned context-rich tokens are then fed back into
the pixel space to refine the original features via a transformer decoder. We
incorporate BiT in a deep feature differencing-based CD framework. Extensive
experiments on three public CD datasets demonstrate the effectiveness and
efficiency of the proposed method. Notably, our BiT-based model significantly
outperforms the purely convolutional baseline while requiring roughly a third
of its computational cost and parameter count. Built on a plain backbone
(ResNet18) without sophisticated structures (e.g., FPN, UNet), our model
surpasses several state-of-the-art CD methods, outperforming two recent
attention-based methods in both efficiency and accuracy. Our code will be made
public.
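A minimal PyTorch sketch may make the token-based design the abstract describes concrete: each temporal feature map is pooled into a few semantic tokens, a transformer encoder models the joint bitemporal token set, and a transformer decoder projects the context-rich tokens back onto pixel features before differencing. This is an illustration under assumptions, not the authors' released implementation; the class names, the spatial-attention tokenizer, the single-layer depths, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class Tokenizer(nn.Module):
    """Pools a feature map into a few semantic tokens via spatial attention."""
    def __init__(self, channels: int, num_tokens: int = 4):
        super().__init__()
        self.attn = nn.Conv2d(channels, num_tokens, kernel_size=1)

    def forward(self, x):                            # x: (B, C, H, W)
        a = self.attn(x).flatten(2).softmax(dim=-1)  # (B, L, H*W) attention maps
        x = x.flatten(2)                             # (B, C, H*W)
        return torch.einsum('bln,bcn->blc', a, x)    # (B, L, C) semantic tokens

class BiTSketch(nn.Module):
    """Sketch of the BiT idea: model contexts in a compact token space-time,
    then refine pixel features through a transformer decoder."""
    def __init__(self, channels: int = 256, num_tokens: int = 4, heads: int = 8):
        super().__init__()
        self.tokenizer = Tokenizer(channels, num_tokens)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(channels, heads, batch_first=True), 1)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(channels, heads, batch_first=True), 1)

    def forward(self, f1, f2):               # backbone features, each (B, C, H, W)
        b, c, h, w = f1.shape
        tokens = torch.cat([self.tokenizer(f1), self.tokenizer(f2)], dim=1)
        tokens = self.encoder(tokens)        # joint bitemporal (space-time) context
        t1, t2 = tokens.chunk(2, dim=1)
        q1 = f1.flatten(2).transpose(1, 2)   # pixels act as decoder queries
        q2 = f2.flatten(2).transpose(1, 2)
        r1 = self.decoder(q1, t1).transpose(1, 2).reshape(b, c, h, w)
        r2 = self.decoder(q2, t2).transpose(1, 2).reshape(b, c, h, w)
        return torch.abs(r1 - r2)            # feature differencing for CD
```

In the full framework, a lightweight classifier head would turn the differenced features into a change mask; the abstract describes the pipeline only at a high level, so every detail above should be read as a placeholder.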
Related papers
- Efficient Point Transformer with Dynamic Token Aggregating for Point Cloud Processing [19.73918716354272]
We propose an efficient point TransFormer with Dynamic Token Aggregating (DTA-Former) for point cloud representation and processing.
It achieves SOTA performance while running up to 30$\times$ faster than prior point Transformers on the ModelNet40, ShapeNet, and airborne MultiSpectral LiDAR (MS-LiDAR) datasets.
arXiv Detail & Related papers (2024-05-23T20:50:50Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
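The entry above names the mechanism without detailing it, so the following is only our reading of a distance-based weighting, not the paper's formulation: content attention logits are penalized by the spatial distance between query and key positions, so nearer image components weigh more. The function name, the coordinate input, and the distance-to-logit mapping are all assumptions.

```python
import torch

def distance_weighted_attention(q, k, v, coords, temperature=1.0):
    """Hypothetical distance-weighted attention: content logits are offset
    by a penalty proportional to the distance between token positions."""
    d = q.shape[-1]
    logits = q @ k.transpose(-2, -1) / d ** 0.5  # (B, N, N) content term
    dist = torch.cdist(coords, coords)           # (B, N, N) pairwise distances
    logits = logits - dist / temperature         # nearer keys score higher
    return logits.softmax(dim=-1) @ v            # (B, N, D)
```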
- Efficient Transformer-based 3D Object Detection with Dynamic Token Halting [19.88560740238657]
We propose an effective approach for accelerating transformer-based 3D object detectors by dynamically halting tokens at different layers.
Although halting a token is a non-differentiable operation, our method allows for differentiable end-to-end learning.
Our framework allows halted tokens to be reused to inform the model's predictions through a straightforward token recycling mechanism.
arXiv Detail & Related papers (2023-03-09T07:26:49Z)
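A minimal sketch of the token-halting idea in the entry above, with a straight-through estimator standing in for the paper's differentiable end-to-end scheme; the scorer, the threshold, and the zero-masking are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class TokenHaltingGate(nn.Module):
    """Hypothetical halting gate: a scorer keeps or halts each token; the
    hard keep/halt decision is non-differentiable, so gradients flow through
    the soft keep-probabilities via a straight-through trick."""
    def __init__(self, dim: int, threshold: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)
        self.threshold = threshold

    def forward(self, tokens):                  # tokens: (B, N, D)
        p = torch.sigmoid(self.scorer(tokens))  # keep probability per token
        hard = (p > self.threshold).float()
        mask = hard + p - p.detach()            # hard forward, soft backward
        return tokens * mask, mask              # halted tokens are zeroed; the
                                                # mask could drive token recycling
```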
- ClusTR: Exploring Efficient Self-attention via Clustering for Vision Transformers [70.76313507550684]
We propose a content-based sparse attention method, as an alternative to dense self-attention.
Specifically, we cluster and then aggregate key and value tokens, as a content-based method of reducing the total token count.
The resulting clustered-token sequence retains the semantic diversity of the original signal, but can be processed at a lower computational cost.
arXiv Detail & Related papers (2022-08-28T04:18:27Z)
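A sketch of the clustered-attention idea in the ClusTR entry above: keys and values are aggregated into a small number of centroids (plain k-means here, purely for illustration; the paper's clustering scheme may differ), and queries attend over the centroids, cutting cost from O(N^2) to O(N*M).

```python
import torch
import torch.nn.functional as F

def clustered_attention(q, k, v, num_clusters=64, iters=3):
    """Illustrative content-based sparse attention: pool keys/values into
    cluster centroids, then attend over the M centroids instead of all N
    tokens. Not the paper's exact formulation."""
    b, n, d = k.shape
    centroids = k[:, torch.randperm(n)[:num_clusters]]        # (B, M, D) init
    for _ in range(iters):                                    # k-means on keys
        assign = torch.cdist(k, centroids).argmin(dim=-1)     # (B, N)
        onehot = F.one_hot(assign, num_clusters).float()      # (B, N, M)
        counts = onehot.sum(dim=1).clamp(min=1).unsqueeze(-1) # (B, M, 1)
        centroids = torch.einsum('bnm,bnd->bmd', onehot, k) / counts
    pooled_v = torch.einsum('bnm,bnd->bmd', onehot, v) / counts
    attn = (q @ centroids.transpose(1, 2) / d ** 0.5).softmax(dim=-1)
    return attn @ pooled_v                                    # (B, N, D)
```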
- Modeling Image Composition for Complex Scene Generation [77.10533862854706]
We present a method that achieves state-of-the-art results on layout-to-image generation tasks.
After compressing RGB images into patch tokens, we propose the Transformer with Focal Attention (TwFA) for exploring dependencies of object-to-object, object-to-patch and patch-to-patch.
arXiv Detail & Related papers (2022-06-02T08:34:25Z)
- Unifying Voxel-based Representation with Transformer for 3D Object Detection [143.91910747605107]
We present a unified framework for multi-modality 3D object detection, named UVTR.
The proposed method aims to unify multi-modality representations in the voxel space for accurate and robust single- or cross-modality 3D detection.
UVTR achieves leading performance in the nuScenes test set with 69.7%, 55.1%, and 71.1% NDS for LiDAR, camera, and multi-modality inputs, respectively.
arXiv Detail & Related papers (2022-06-01T17:02:40Z)
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- PnP-DETR: Towards Efficient Visual Analysis with Transformers [146.55679348493587]
Recently, DETR pioneered solving vision tasks with transformers; it directly translates the image feature map into the object detection result.
The design also generalizes to the recent transformer-based image recognition model ViT, showing consistent efficiency gains.
arXiv Detail & Related papers (2021-09-15T01:10:30Z)
- CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification [17.709880544501758]
We propose a dual-branch transformer to combine image patches of different sizes to produce stronger image features.
Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity.
Our proposed cross-attention requires only linear computational and memory complexity, rather than the quadratic cost of full attention; a sketch follows this entry.
arXiv Detail & Related papers (2021-03-27T13:03:17Z)
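The linear complexity claimed in the CrossViT entry above comes from using the CLS token of one branch as the sole query into the other branch's patch tokens, so cost grows with N rather than N^2. A hedged sketch; the module name, head count, and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class CrossBranchAttention(nn.Module):
    """Sketch of CrossViT-style fusion: the CLS token of branch A queries
    the patch tokens of branch B, giving cost linear in the token count."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cls_a, tokens_b):        # (B, 1, D), (B, N, D)
        fused, _ = self.attn(cls_a, tokens_b, tokens_b)  # single query token
        return fused                           # (B, 1, D) fused CLS token
```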
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.