Detecting Building Changes with Off-Nadir Aerial Images
- URL: http://arxiv.org/abs/2301.10922v1
- Date: Thu, 26 Jan 2023 04:04:14 GMT
- Title: Detecting Building Changes with Off-Nadir Aerial Images
- Authors: Chao Pang, Jiang Wu, Jian Ding, Can Song, Gui-Song Xia
- Abstract summary: The tilted viewing of off-nadir aerial images brings severe challenges to the building change detection problem.
We present a multi-task guided change detection network model, named MTGCD-Net.
Our model achieves superior performance over the previous state-of-the-art competitors.
- Score: 37.58581646229069
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The tilted viewing nature of the off-nadir aerial images brings severe
challenges to the building change detection (BCD) problem: the mismatch of the
nearby buildings and the semantic ambiguity of the building facades. To tackle
these challenges, we present a multi-task guided change detection network
model, named MTGCD-Net. The proposed model approaches the specific BCD
problem by designing three auxiliary tasks, including: (1) a pixel-wise
classification task to predict the roofs and facades of buildings; (2) an
auxiliary task for learning the roof-to-footprint offsets of each building to
account for the misalignment between building roof instances; and (3) an
auxiliary task for learning the identical roof matching flow between
bi-temporal aerial images to tackle the building roof mismatch problem. These
auxiliary tasks provide indispensable and complementary building parsing and
matching information. The predictions of the auxiliary tasks are finally fused
into the main building change detection branch via a multi-modal distillation
module. To train and test models for the BCD problem with off-nadir aerial
images, we create a new benchmark dataset, named BANDON. Extensive experiments
demonstrate that our model achieves superior performance over the previous
state-of-the-art competitors.
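The abstract's multi-task design (three auxiliary heads whose predictions are fused back into the change branch) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' code: all tensor shapes, the 1x1-convolution heads, and the simple concatenation-style fusion are assumptions standing in for the paper's actual backbone and multi-modal distillation module.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """1x1 convolution as a per-pixel linear map: (C_in, H, W) -> (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', w, x)

# Shared bi-temporal features (channels, height, width); shapes are assumed
feat_t1 = rng.standard_normal((16, 8, 8))
feat_t2 = rng.standard_normal((16, 8, 8))
shared = np.concatenate([feat_t1, feat_t2], axis=0)   # (32, 8, 8)

# Three auxiliary heads mirroring the paper's tasks:
w_seg  = rng.standard_normal((3, 32))   # (1) roof / facade / background logits
w_off  = rng.standard_normal((2, 32))   # (2) roof-to-footprint offset (dx, dy)
w_flow = rng.standard_normal((2, 32))   # (3) roof matching flow between epochs

seg  = conv1x1(shared, w_seg)    # (3, 8, 8)
off  = conv1x1(shared, w_off)    # (2, 8, 8)
flow = conv1x1(shared, w_flow)   # (2, 8, 8)

# Fuse auxiliary predictions with the shared features for the change head
# (plain concatenation standing in for the multi-modal distillation module)
fused = np.concatenate([shared, seg, off, flow], axis=0)  # (39, 8, 8)
w_cd = rng.standard_normal((1, 39))
change_logits = conv1x1(fused, w_cd)  # (1, 8, 8) per-pixel change score

print(change_logits.shape)
```

In a real implementation the heads would be trained with task-specific losses (segmentation, offset regression, flow), so the fused inputs carry complementary parsing and matching signals rather than random projections.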
Related papers
- RSBuilding: Towards General Remote Sensing Image Building Extraction and Change Detection with Foundation Model [22.56227565913003]
We propose a comprehensive remote sensing image building model, termed RSBuilding, developed from the perspective of the foundation model.
RSBuilding is designed to enhance cross-scene generalization and task understanding.
Our model was trained on a dataset comprising up to 245,000 images and validated on multiple building extraction and change detection datasets.
arXiv Detail & Related papers (2024-03-12T11:51:59Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Learning Efficient Unsupervised Satellite Image-based Building Damage Detection [43.06758527206676]
Building Damage Detection (BDD) methods always require labour-intensive pixel-level annotations of buildings and their conditions.
In this paper, we investigate a challenging yet practical scenario of unsupervised BDD (U-BDD), where only unlabelled pre- and post-disaster satellite image pairs are provided.
We present a novel self-supervised framework, U-BDD++, which improves upon the U-BDD baseline by addressing domain-specific issues associated with satellite imagery.
arXiv Detail & Related papers (2023-12-04T02:20:35Z)
- Fine-grained building roof instance segmentation based on domain adapted pretraining and composite dual-backbone [13.09940764764909]
We propose a framework to fulfill semantic interpretation of individual buildings with high-resolution optical satellite imagery.
Specifically, the leveraged domain-adapted pretraining strategy and composite dual-backbone greatly facilitate discriminative feature learning.
Experiment results show that our approach ranks first in the 2023 IEEE GRSS Data Fusion Contest.
arXiv Detail & Related papers (2023-08-10T05:54:57Z)
- BCE-Net: Reliable Building Footprints Change Extraction based on Historical Map and Up-to-Date Images using Contrastive Learning [13.543968710641746]
We develop a contrastive learning approach by validating historical building footprints against single up-to-date remotely sensed images.
We employ a deformable convolutional neural network to learn offsets intuitively.
Our method achieved an F1 score of 94.63%, which surpasses that of the state-of-the-art method.
arXiv Detail & Related papers (2023-04-14T12:00:47Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Robust Self-Supervised LiDAR Odometry via Representative Structure Discovery and 3D Inherent Error Modeling [67.75095378830694]
We develop a two-stage odometry estimation network, where we obtain the ego-motion by estimating a set of sub-region transformations.
In this paper, we aim to alleviate the influence of unreliable structures in training, inference and mapping phases.
Our two-frame odometry outperforms the previous state of the art by 16%/12% in terms of translational/rotational errors.
arXiv Detail & Related papers (2022-02-27T12:52:27Z)
- Unpaired Referring Expression Grounding via Bidirectional Cross-Modal Matching [53.27673119360868]
Referring expression grounding is an important and challenging task in computer vision.
We propose a novel bidirectional cross-modal matching (BiCM) framework to address these challenges.
Our framework outperforms previous works by 6.55% and 9.94% on two popular grounding datasets.
arXiv Detail & Related papers (2022-01-18T01:13:19Z)
- A Multi-Task Deep Learning Framework for Building Footprint Segmentation [0.0]
We propose a joint optimization scheme for the task of building footprint delineation.
We also introduce two auxiliary tasks; image reconstruction and building footprint boundary segmentation.
In particular, we propose a deep multi-task learning (MTL) based unified fully convolutional framework.
arXiv Detail & Related papers (2021-04-19T15:07:27Z)
- RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that simultaneously segments buildings and assesses the damage level of individual buildings, and can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
arXiv Detail & Related papers (2020-04-15T19:52:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.