Be the Change You Want to See: Revisiting Remote Sensing Change Detection Practices
- URL: http://arxiv.org/abs/2507.03367v1
- Date: Fri, 04 Jul 2025 08:01:28 GMT
- Title: Be the Change You Want to See: Revisiting Remote Sensing Change Detection Practices
- Authors: Blaž Rolih, Matic Fučka, Filip Wolf, Luka Čehovin Zajc
- Abstract summary: Remote sensing change detection aims to localize semantic changes between images of the same location captured at different times. Most methods fail to measure the performance contribution of fundamental design choices such as backbone selection, pre-training strategies, and training configurations. We claim that such fundamental design choices often improve performance even more significantly than the addition of new architectural components.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Remote sensing change detection aims to localize semantic changes between images of the same location captured at different times. In the past few years, newer methods have attributed enhanced performance to the additions of new and complex components to existing architectures. Most fail to measure the performance contribution of fundamental design choices such as backbone selection, pre-training strategies, and training configurations. We claim that such fundamental design choices often improve performance even more significantly than the addition of new architectural components. Due to that, we systematically revisit the design space of change detection models and analyse the full potential of a well-optimised baseline. We identify a set of fundamental design choices that benefit both new and existing architectures. Leveraging this insight, we demonstrate that when carefully designed, even an architecturally simple model can match or surpass state-of-the-art performance on six challenging change detection datasets. Our best practices generalise beyond our architecture and also offer performance improvements when applied to related methods, indicating that the space of fundamental design choices has been underexplored. Our guidelines and architecture provide a strong foundation for future methods, emphasizing that optimizing core components is just as important as architectural novelty in advancing change detection performance. Code: https://github.com/blaz-r/BTC-change-detection
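Below is a minimal sketch of the kind of architecturally simple change-detection model the abstract argues for: a shared pre-trained backbone applied to both acquisition times, simple feature differencing, and a lightweight decoder. The backbone choice, layer names, and hyperparameters here are illustrative assumptions, not the authors' exact BTC architecture (see the linked repository for that).

```python
# Hedged sketch of a simple Siamese change-detection baseline.
# Assumes torchvision >= 0.13 for the weights enum API.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights


class SimpleChangeDetector(nn.Module):
    def __init__(self, num_classes: int = 1):
        super().__init__()
        # Fundamental design choice emphasized in the abstract: start from a
        # well-chosen, pre-trained backbone (ImageNet weights used here).
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        # Keep everything up to the last residual stage (512 x H/32 x W/32).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Lightweight decoder: no attention blocks or custom fusion modules.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )

    def forward(self, img_t1: torch.Tensor, img_t2: torch.Tensor) -> torch.Tensor:
        # Shared (Siamese) encoder for both timestamps.
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        # Simple temporal fusion via absolute feature difference.
        change_logits = self.decoder(torch.abs(f1 - f2))
        # Upsample back to input resolution for a per-pixel change map.
        return nn.functional.interpolate(
            change_logits, size=img_t1.shape[-2:],
            mode="bilinear", align_corners=False,
        )


if __name__ == "__main__":
    model = SimpleChangeDetector()
    t1 = torch.randn(2, 3, 256, 256)
    t2 = torch.randn(2, 3, 256, 256)
    print(model(t1, t2).shape)  # torch.Size([2, 1, 256, 256])
```

Per the paper's thesis, most of the remaining headroom for a model like this would come from the pre-training strategy and training configuration rather than from adding architectural components.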
Related papers
- How Does Sequence Modeling Architecture Influence Base Capabilities of Pre-trained Language Models? Exploring Key Architecture Design Principles to Avoid Base Capabilities Degradation [37.57021686999279]
This work focuses on the impact of sequence modeling architectures on base capabilities. We first point out that the mixed domain pre-training setting fails to adequately reveal the differences in base capabilities among various architectures. Next, we analyze the base capabilities of stateful sequence modeling architectures, and find that they exhibit significant degradation in base capabilities compared to the Transformer.
arXiv Detail & Related papers (2025-05-24T05:40:03Z) - An Architectural Approach to Enhance Deep Long-Tailed Learning [5.214135587370722]
We introduce Long-Tailed Differential Architecture Search (LTDAS). We conduct extensive experiments to explore architectural components that demonstrate better performance on long-tailed data. This ensures that the architecture obtained through our search process incorporates superior components.
arXiv Detail & Related papers (2024-11-09T07:19:56Z) - Exploring the design space of deep-learning-based weather forecasting systems [56.129148006412855]
This paper systematically analyzes the impact of different design choices on deep-learning-based weather forecasting systems.
We study fixed-grid architectures such as UNet, fully convolutional architectures, and transformer-based models.
We propose a hybrid system that combines the strong performance of fixed-grid models with the flexibility of grid-invariant architectures.
arXiv Detail & Related papers (2024-10-09T22:25:50Z) - Towards Assessing Spread in Sets of Software Architecture Designs [2.2120851074630177]
We propose a quality indicator for the spread that assesses the diversity of alternatives by taking into account architectural features.
We demonstrate how our architectural quality indicator can be applied to a dataset from the literature.
arXiv Detail & Related papers (2024-02-29T13:52:39Z) - A Change Detection Reality Check [4.122463133837605]
In recent years, there has been an explosion of proposed change detection deep learning architectures in the remote sensing literature.
In this paper we perform experiments showing that a simple U-Net segmentation baseline, without training tricks or complicated architectural changes, is still a top performer for the task of change detection.
arXiv Detail & Related papers (2024-02-10T17:02:53Z) - The Impact of Different Backbone Architecture on Autonomous Vehicle Dataset [120.08736654413637]
The quality of the features extracted by the backbone architecture can have a significant impact on the overall detection performance.
Our study evaluates three well-known autonomous vehicle datasets, namely KITTI, NuScenes, and BDD, to compare the performance of different backbone architectures on object detection tasks.
arXiv Detail & Related papers (2023-09-15T17:32:15Z) - Heterogeneous Continual Learning [88.53038822561197]
We propose a novel framework to tackle the continual learning (CL) problem with changing network architectures.
We build on top of the distillation family of techniques and modify it to a new setting where a weaker model takes the role of a teacher.
We also propose Quick Deep Inversion (QDI) to recover prior task visual features to support knowledge transfer.
arXiv Detail & Related papers (2023-06-14T15:54:42Z) - Demystify Transformers & Convolutions in Modern Image Deep Networks [80.16624587948368]
This paper aims to identify the real gains of popular convolution and attention operators through a detailed study. We find that the key difference among these feature transformation modules, such as attention or convolution, lies in their spatial feature aggregation approach. Various STMs are integrated into a unified framework for comprehensive comparative analysis.
arXiv Detail & Related papers (2022-11-10T18:59:43Z) - On the interplay of adversarial robustness and architecture components: patches, convolution and attention [65.20660287833537]
We study the effect of adversarial training on the interpretability of the learnt features and robustness to unseen threat models.
An ablation from ResNet to ConvNeXt reveals key architectural changes leading to almost 10% higher $\ell_\infty$-robustness.
arXiv Detail & Related papers (2022-09-14T22:02:32Z) - Slimmable Domain Adaptation [112.19652651687402]
We introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank.
Our framework surpasses competing approaches by a very large margin on multiple benchmarks.
arXiv Detail & Related papers (2022-06-14T06:28:04Z)