SRC-Net: Bi-Temporal Spatial Relationship Concerned Network for Change Detection
- URL: http://arxiv.org/abs/2406.05668v2
- Date: Thu, 27 Jun 2024 14:55:41 GMT
- Title: SRC-Net: Bi-Temporal Spatial Relationship Concerned Network for Change Detection
- Authors: Hongjia Chen, Xin Xu, Fangling Pu
- Abstract summary: Change detection (CD) in remote sensing imagery is a crucial task with applications in environmental monitoring, urban development, and disaster management.
We propose SRC-Net: a bi-temporal spatial relationship concerned network for CD.
- Score: 9.682463974799893
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Change detection (CD) in remote sensing imagery is a crucial task with applications in environmental monitoring, urban development, and disaster management. CD involves utilizing bi-temporal images to identify changes over time. The bi-temporal spatial relationships between features at the same location at different times play a key role in this process. However, existing change detection networks often do not fully leverage these spatial relationships during bi-temporal feature extraction and fusion. In this work, we propose SRC-Net: a bi-temporal spatial relationship concerned network for CD. The proposed SRC-Net includes a Perception and Interaction Module that incorporates spatial relationships and establishes a cross-branch perception mechanism to enhance the precision and robustness of feature extraction. Additionally, a Patch-Mode joint Feature Fusion Module is introduced to address information loss in current methods. It considers different change modes and concerns about spatial relationships, resulting in more expressive fusion features. Furthermore, we construct a novel network using these two relationship concerned modules and conducted experiments on the LEVIR-CD and WHU Building datasets. The experimental results demonstrate that our network outperforms state-of-the-art (SOTA) methods while maintaining a modest parameter count. We believe our approach sets a new paradigm for change detection and will inspire further advancements in the field. The code and models are publicly available at https://github.com/Chnja/SRCNet.
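The abstract describes the two modules only at a high level. As an illustrative sketch, and not the authors' implementation (the internals, class names, and channel sizes below are assumptions; only the module roles come from the abstract), a cross-branch perception step and a fusion step that combines difference and joint cues might look like this in PyTorch:

```python
import torch
import torch.nn as nn


class CrossBranchPerception(nn.Module):
    """Hypothetical sketch of cross-branch perception: each temporal
    branch is refined using a gate computed from the other branch, so
    spatially corresponding features "see" their counterpart."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor):
        # Residual gating: each branch is modulated by the other branch
        # at the same spatial location, then added back to itself.
        out_t1 = feat_t1 + feat_t1 * self.gate(feat_t2)
        out_t2 = feat_t2 + feat_t2 * self.gate(feat_t1)
        return out_t1, out_t2


class BiTemporalFusion(nn.Module):
    """Hypothetical sketch of a fusion step that keeps both a 'change'
    cue (absolute difference) and a joint cue (projected concatenation),
    rather than collapsing the pair into a single difference map."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor):
        diff = torch.abs(feat_t1 - feat_t2)                      # change cue
        joint = self.proj(torch.cat([feat_t1, feat_t2], dim=1))  # joint cue
        return diff + joint


if __name__ == "__main__":
    # Toy bi-temporal feature maps: batch 2, 16 channels, 32x32.
    f1 = torch.randn(2, 16, 32, 32)
    f2 = torch.randn(2, 16, 32, 32)
    perception = CrossBranchPerception(16)
    fusion = BiTemporalFusion(16)
    r1, r2 = perception(f1, f2)
    fused = fusion(r1, r2)
    print(fused.shape)  # torch.Size([2, 16, 32, 32])
```

The sketch only mirrors the abstract's two ideas, that each branch should perceive the other during extraction, and that fusion should retain more than a single difference signal; consult the released code at the repository above for the actual modules.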
Related papers
- DDLNet: Boosting Remote Sensing Change Detection with Dual-Domain Learning [5.932234366793244]
Remote sensing change detection (RSCD) aims to identify the changes of interest in a region by analyzing multi-temporal remote sensing images.
Existing RSCD methods are devoted to contextual modeling in the spatial domain to enhance the changes of interest.
We propose DDLNet, an RSCD network based on dual-domain learning (i.e., frequency and spatial domains).
arXiv Detail & Related papers (2024-06-19T14:54:09Z) - ELGC-Net: Efficient Local-Global Context Aggregation for Remote Sensing Change Detection [65.59969454655996]
We propose an efficient change detection framework, ELGC-Net, which leverages rich contextual information to precisely estimate change regions.
Our proposed ELGC-Net sets a new state-of-the-art performance in remote sensing change detection benchmarks.
We also introduce ELGC-Net-LW, a lighter variant with significantly reduced computational complexity, suitable for resource-constrained settings.
arXiv Detail & Related papers (2024-03-26T17:46:25Z) - Spatial-Temporal Graph Enhanced DETR Towards Multi-Frame 3D Object Detection [54.041049052843604]
We present STEMD, a novel end-to-end framework that enhances the DETR-like paradigm for multi-frame 3D object detection.
First, to model the inter-object spatial interaction and complex temporal dependencies, we introduce the spatial-temporal graph attention network.
Finally, it poses a challenge for the network to distinguish between the positive query and other highly similar queries that are not the best match.
arXiv Detail & Related papers (2023-07-01T13:53:14Z) - STNet: Spatial and Temporal feature fusion network for change detection in remote sensing images [5.258365841490956]
We propose STNet, a remote sensing change detection network based on spatial and temporal feature fusions.
Experimental results on three benchmark datasets for RSCD demonstrate that the proposed method achieves the state-of-the-art performance.
arXiv Detail & Related papers (2023-04-22T14:40:41Z) - Dsfer-Net: A Deep Supervision and Feature Retrieval Network for Bitemporal Change Detection Using Modern Hopfield Networks [35.415260892693745]
We propose a Deep Supervision and FEature Retrieval network (Dsfer-Net) for bitemporal change detection.
Specifically, the highly representative deep features of bitemporal images are jointly extracted through a fully convolutional Siamese network.
Our end-to-end network establishes a novel framework by aggregating retrieved features and feature pairs from different layers.
arXiv Detail & Related papers (2023-04-03T16:01:03Z) - Joint Spatio-Temporal Modeling for the Semantic Change Detection in Remote Sensing Images [22.72105435238235]
We propose a semantic change Transformer (SCanFormer) to explicitly model the 'from-to' semantic transitions between the bi-temporal RSIs.
Then, we introduce a semantic learning scheme to leverage the spatio-temporal constraints, which are coherent with the SCD task, to guide the learning of semantic changes.
The resulting network (SCanNet) outperforms the baseline method in terms of both detection of critical semantic changes and semantic consistency in the obtained bi-temporal results.
arXiv Detail & Related papers (2022-12-10T08:49:19Z) - PS-ARM: An End-to-End Attention-aware Relation Mixer Network for Person Search [56.02761592710612]
We propose a novel attention-aware relation mixer (ARM) module for person search.
Our ARM module is native and does not rely on fine-grained supervision or topological assumptions.
Our PS-ARM achieves state-of-the-art performance on both datasets.
arXiv Detail & Related papers (2022-10-07T10:04:12Z) - Bi-Temporal Semantic Reasoning for the Semantic Change Detection of HR Remote Sensing Images [17.53683781109742]
We propose a novel CNN architecture for semantic change detection (SCD).
We elaborate on this architecture to model the bi-temporal semantic correlations.
The resulting Bi-temporal Semantic Reasoning Network (Bi-SRNet) contains two types of semantic reasoning blocks to reason both single-temporal and cross-temporal semantic correlations.
arXiv Detail & Related papers (2021-08-13T07:28:09Z) - Cross-modal Consensus Network for Weakly Supervised Temporal Action Localization [74.34699679568818]
Weakly supervised temporal action localization (WS-TAL) is a challenging task that aims to localize action instances in the given video with video-level categorical supervision.
We propose a cross-modal consensus network (CO2-Net) to tackle this problem.
arXiv Detail & Related papers (2021-07-27T04:21:01Z) - Co-Saliency Spatio-Temporal Interaction Network for Person Re-Identification in Videos [85.6430597108455]
We propose a novel Co-Saliency Spatio-Temporal Interaction Network (CSTNet) for person re-identification in videos.
It captures the common salient foreground regions among video frames and explores the spatial-temporal long-range context interdependency from such regions.
Multiple spatial-temporal interaction modules within CSTNet are proposed, which exploit the spatial and temporal long-range context interdependencies and spatial-temporal information correlations on such features.
arXiv Detail & Related papers (2020-04-10T10:23:58Z) - A Spatial-Temporal Attentive Network with Spatial Continuity for Trajectory Prediction [74.00750936752418]
We propose a novel model named spatial-temporal attentive network with spatial continuity (STAN-SC).
First, a spatial-temporal attention mechanism is presented to explore the most useful and important information.
Second, we construct a joint feature sequence from the sequence and instant state information so that the generated trajectories maintain spatial continuity.
arXiv Detail & Related papers (2020-03-13T04:35:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.