Scalable Multi-Temporal Remote Sensing Change Data Generation via
Simulating Stochastic Change Process
- URL: http://arxiv.org/abs/2309.17031v1
- Date: Fri, 29 Sep 2023 07:37:26 GMT
- Authors: Zhuo Zheng, Shiqi Tian, Ailong Ma, Liangpei Zhang, Yanfei Zhong
- Abstract summary: We present a scalable multi-temporal remote sensing change data generator via generative modeling.
The main idea is to simulate a stochastic change process over time, decoupled into change event simulation and semantic change synthesis.
To solve these two sub-problems, we present the change generator (Changen), a GAN-based GPCM, enabling controllable object change data generation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the temporal dynamics of Earth's surface is a central
mission of multi-temporal remote sensing image analysis, significantly advanced
by deep vision models and their fuel: labeled multi-temporal images. However,
collecting, preprocessing, and annotating multi-temporal remote sensing images
at scale is non-trivial since it is expensive and knowledge-intensive. In this
paper, we present a scalable multi-temporal remote sensing change data
generator via generative modeling, which is cheap and automatic, alleviating
these problems. Our main idea is to simulate a stochastic change process over
time. We consider the stochastic change process as a probabilistic semantic
state transition, namely generative probabilistic change model (GPCM), which
decouples the complex simulation problem into two more tractable sub-problems,
i.e., change event simulation and semantic change synthesis. To solve these two
problems, we present the change generator (Changen), a GAN-based GPCM, enabling
controllable object change data generation with customizable object properties
and change events. Extensive experiments suggest that Changen
has superior generation capability, and the change detectors with Changen
pre-training exhibit excellent transferability to real-world change datasets.
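To make the GPCM idea concrete: the stochastic change process can be viewed as a probabilistic semantic state transition, where a pre-event semantic map transitions to a post-event map as objects appear or disappear with some probability, yielding both a post-event map and a change mask. Below is a minimal NumPy sketch of that state-transition view; the class names, transition probabilities, and pixel-wise flipping are illustrative assumptions, not the paper's GAN-based implementation.

```python
import numpy as np

# Hypothetical semantic classes for illustration
BACKGROUND, BUILDING = 0, 1

def simulate_change_event(semantic_map, p_appear=0.1, p_disappear=0.05, seed=None):
    """One step of a probabilistic semantic state transition:
    background pixels may gain a building, building pixels may lose one.
    Returns the post-event semantic map and the binary change mask."""
    rng = np.random.default_rng(seed)
    post = semantic_map.copy()
    appear = (semantic_map == BACKGROUND) & (rng.random(semantic_map.shape) < p_appear)
    disappear = (semantic_map == BUILDING) & (rng.random(semantic_map.shape) < p_disappear)
    post[appear] = BUILDING
    post[disappear] = BACKGROUND
    return post, post != semantic_map

# Pre-event semantic map with one building; simulate a single change event
t0 = np.zeros((64, 64), dtype=np.int64)
t0[10:20, 10:20] = BUILDING
t1, change_mask = simulate_change_event(t0, seed=0)
```

In the paper's formulation this transition is learned generatively rather than sampled pixel-wise, but the decomposition is the same: first decide *where* change events occur, then synthesize *what* the scene looks like afterwards.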
Related papers
- Novel Change Detection Framework in Remote Sensing Imagery Using Diffusion Models and Structural Similarity Index (SSIM) (2024-08-20)
  Change detection is a crucial task in remote sensing, enabling the monitoring of environmental changes, urban growth, and disaster impact. Recent advances in machine learning, particularly generative models such as diffusion models, offer new opportunities for enhancing change detection accuracy. The authors propose a change detection framework that combines the strengths of Stable Diffusion models with the Structural Similarity Index (SSIM) to create robust and interpretable change maps.
- Transformer for Multitemporal Hyperspectral Image Unmixing (2024-07-15)
  The authors propose the Multitemporal Hyperspectral Image Unmixing Transformer (MUFormer), an end-to-end unsupervised deep learning model. It introduces two key modules, the Global Awareness Module (GAM) and the Change Enhancement Module (CEM), whose synergy captures semantic information about endmember and abundance changes.
- Changen2: Multi-Temporal Remote Sensing Generative Change Foundation Model (2024-06-26)
  The authors present change data generators based on generative models that are cheap and automatic. Changen2 is a generative change foundation model that can be trained at scale via self-supervision. The resulting model possesses inherent zero-shot change detection capabilities and excellent transferability.
- ChangeBind: A Hybrid Change Encoder for Remote Sensing Change Detection (2024-04-26)
  Change detection (CD) is a fundamental task in remote sensing (RS) that aims to detect semantic changes between the same geographical region at different timestamps. The authors propose an effective Siamese-based framework to encode the semantic changes occurring in bi-temporal RS images.
- Time Travelling Pixels: Bitemporal Features Integration with Foundation Model for Remote Sensing Image Change Detection (2023-12-23)
  Time Travelling Pixels (TTP) is an approach that integrates the latent knowledge of a foundation model into change detection. State-of-the-art results on LEVIR-CD underscore the efficacy of TTP.
- iTransformer: Inverted Transformers Are Effective for Time Series Forecasting (2023-10-10)
  The authors propose iTransformer, which simply applies the attention and feed-forward network on the inverted dimensions. The model achieves state-of-the-art results on challenging real-world datasets.
- Learning Modulated Transformation in GANs (2023-08-29)
  The authors equip the generator in generative adversarial networks (GANs) with a plug-and-play module, termed the modulated transformation module (MTM). MTM predicts spatial offsets under the control of latent codes, based on which the convolution operation can be applied at variable locations. Notably, for human generation on the challenging TaiChi dataset, it improves the FID of StyleGAN3 from 21.36 to 13.60, demonstrating the efficacy of learning modulated geometric transformation.
- Gait Recognition in the Wild with Multi-hop Temporal Switch (2022-09-01)
  Gait recognition in the wild is a practical problem that has attracted the attention of the multimedia and computer vision communities. This paper presents a multi-hop temporal switch method for effective temporal modeling of gait patterns in real-world scenes.
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs (2022-03-29)
  The authors propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real time. It achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
- Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model (2021-05-31)
  The authors explain the rationality of the Vision Transformer by analogy with the proven, practical Evolutionary Algorithm (EA), propose a more efficient EAT model, and design task-related heads to handle different tasks more flexibly. The approach achieves state-of-the-art results on ImageNet classification compared with recent vision transformer works.
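The SSIM-based framework among the related papers above flags change where local structural similarity between two co-registered acquisitions is low. The core step can be sketched in plain NumPy as a block-wise SSIM map followed by a threshold; the block size and threshold here are illustrative assumptions, and the actual paper pairs SSIM with Stable Diffusion outputs rather than raw images.

```python
import numpy as np

def blockwise_ssim_change(img_a, img_b, block=8, threshold=0.5, data_range=1.0):
    """Per-block SSIM between two co-registered grayscale images with values
    in [0, data_range]; blocks with low structural similarity are flagged
    as changed. Illustrative sketch, not the paper's full pipeline."""
    h, w = img_a.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of block
    a = img_a[:h, :w].reshape(h // block, block, w // block, block)
    b = img_b[:h, :w].reshape(h // block, block, w // block, block)
    mu_a, mu_b = a.mean(axis=(1, 3)), b.mean(axis=(1, 3))
    var_a, var_b = a.var(axis=(1, 3)), b.var(axis=(1, 3))
    cov = (a * b).mean(axis=(1, 3)) - mu_a * mu_b
    # Standard SSIM stabilization constants
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    return ssim, ssim < threshold

# Identical images: SSIM is 1 everywhere and nothing is flagged
img = np.random.default_rng(0).random((32, 32))
ssim_map, changed = blockwise_ssim_change(img, img)
```

A production implementation would typically use a sliding Gaussian window (as in `skimage.metrics.structural_similarity`) rather than non-overlapping blocks, but the thresholding logic is the same.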
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.