Cross-attention Spatio-temporal Context Transformer for Semantic
Segmentation of Historical Maps
- URL: http://arxiv.org/abs/2310.12616v1
- Date: Thu, 19 Oct 2023 09:49:58 GMT
- Title: Cross-attention Spatio-temporal Context Transformer for Semantic
Segmentation of Historical Maps
- Authors: Sidi Wu, Yizi Chen, Konrad Schindler, Lorenz Hurni
- Abstract summary: Historical maps provide useful spatio-temporal information on the Earth's surface before modern earth observation techniques came into being.
Aleatoric uncertainty, i.e., data-dependent uncertainty, is inherent in the drawing/scanning/fading defects of the original map sheets.
We propose a U-Net-based network that fuses spatio-temporal features, aggregating information over a larger spatial range as well as through a temporal sequence of images.
- Score: 18.016789471815855
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Historical maps provide useful spatio-temporal information on the Earth's
surface before modern earth observation techniques came into being. To extract
information from maps, neural networks, which have gained wide popularity in recent
years, have replaced hand-crafted map processing methods and tedious manual
labor. However, aleatoric uncertainty, known as data-dependent uncertainty,
inherent in the drawing/scanning/fading defects of the original map sheets and
inadequate contexts when cropping maps into small tiles considering the memory
limits of the training process, challenges the model to make correct
predictions. As aleatoric uncertainty cannot be reduced even with more training
data collected, we argue that complementary spatio-temporal contexts can be
helpful. To achieve this, we propose a U-Net-based network that fuses
spatio-temporal features with cross-attention transformers (U-SpaTem),
aggregating information at a larger spatial range as well as through a temporal
sequence of images. Our model achieves a better performance than other
state-of-the-art models that use either temporal or spatial contexts. Compared with
pure vision transformers, our model is more lightweight and effective. To the
best of our knowledge, leveraging both spatial and temporal contexts has rarely
been explored before in the segmentation task. Even though our application is
on segmenting historical maps, we believe that the method can be transferred
to other fields with similar problems, such as temporal sequences of satellite
images. Our code is freely accessible at
https://github.com/chenyizi086/wu.2023.sigspatial.git.
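As a rough illustration of the fusion idea described in the abstract, the following is a minimal PyTorch sketch of cross-attention between a target tile's bottleneck features and a set of spatio-temporal context features. It is not the authors' released implementation (see the repository above for that); the module name, tensor shapes, and the single multi-head attention layer with a residual connection are simplifying assumptions.

```python
# Minimal sketch of cross-attention fusion between a target tile's features and
# spatio-temporal context features (NOT the authors' released code; names, shapes,
# and the single attention layer are simplifying assumptions).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuses target-tile features with context features via cross-attention.

    Queries come from the target tile; keys/values come from the stacked
    spatial-neighbourhood and/or temporal-sequence features.
    """
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, target: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # target:  (B, C, H, W)    bottleneck features of the tile to segment
        # context: (B, T, C, H, W) features of T context tiles (spatial and/or temporal)
        b, c, h, w = target.shape
        q = target.flatten(2).transpose(1, 2)                  # (B, H*W, C)
        kv = context.permute(0, 1, 3, 4, 2).reshape(b, -1, c)  # (B, T*H*W, C)
        fused, _ = self.attn(query=q, key=kv, value=kv)        # cross-attention
        fused = self.norm(q + fused)                           # residual + layer norm
        return fused.transpose(1, 2).reshape(b, c, h, w)       # back to (B, C, H, W)

# Usage: plug between a U-Net encoder and decoder at the bottleneck.
if __name__ == "__main__":
    fuse = CrossAttentionFusion(channels=256)
    tile_feat = torch.randn(2, 256, 16, 16)      # target tile features
    ctx_feat = torch.randn(2, 3, 256, 16, 16)    # e.g. 3 spatial/temporal context tiles
    print(fuse(tile_feat, ctx_feat).shape)       # torch.Size([2, 256, 16, 16])
```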
Related papers
- TASeg: Temporal Aggregation Network for LiDAR Semantic Segmentation [80.13343299606146]
We propose a Temporal LiDAR Aggregation and Distillation (TLAD) algorithm, which leverages historical priors to assign different aggregation steps for different classes.
To make full use of temporal images, we design a Temporal Image Aggregation and Fusion (TIAF) module, which can greatly expand the camera FOV.
We also develop a Static-Moving Switch Augmentation (SMSA) algorithm, which utilizes sufficient temporal information to enable objects to switch their motion states freely.
arXiv Detail & Related papers (2024-07-13T03:00:16Z)
- REPLAY: Modeling Time-Varying Temporal Regularities of Human Mobility for Location Prediction over Sparse Trajectories [7.493786214342181]
We propose REPLAY, a general RNN architecture learning to capture the time-varying temporal regularities for location prediction.
Specifically, REPLAY not only resorts to distances in sparse trajectories to search for the informative hidden past states, but also accommodates the time-varying temporal regularities.
Results show that REPLAY consistently and significantly outperforms state-of-the-art methods by 7.7%-10.9% in the location prediction task.
arXiv Detail & Related papers (2024-02-26T05:28:36Z)
- PASTA: PArallel Spatio-Temporal Attention with spatial auto-correlation gating for fine-grained crowd flow prediction [33.08230699138568]
We introduce a neural network named PArallel Spatio-Temporal Attention with spatial auto-correlation gating (PASTA).
The components in our approach include spatial auto-correlation gating, multi-scale residual block, and temporal attention gating module.
arXiv Detail & Related papers (2023-10-02T14:10:42Z)
- Temporal Smoothness Regularisers for Neural Link Predictors [8.975480841443272]
We show that a simple method like TNTComplEx can produce significantly more accurate results than state-of-the-art methods.
We also evaluate the impact of a wide range of temporal smoothing regularisers on two state-of-the-art temporal link prediction models; a minimal sketch of such a regulariser follows this entry.
arXiv Detail & Related papers (2023-09-16T16:52:49Z)
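Temporal smoothing regularisers of the kind evaluated above typically penalise large changes between the embeddings of consecutive timestamps. The sketch below is an illustrative assumption, not the paper's exact formulation; the function name, L2 penalty, and embedding layout are invented for the example.

```python
# Minimal sketch of a temporal smoothness penalty on per-timestamp embeddings
# (illustrative assumption, not the paper's exact regulariser).
import torch

def temporal_smoothness_penalty(time_emb: torch.Tensor, p: int = 2) -> torch.Tensor:
    # time_emb: (num_timestamps, dim) embedding matrix, one row per timestamp
    diffs = time_emb[1:] - time_emb[:-1]          # consecutive-timestamp differences
    return diffs.norm(p=p, dim=-1).pow(p).mean()  # average p-norm penalty

# total_loss = link_prediction_loss + lambda_t * temporal_smoothness_penalty(time_emb)
```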
- TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z)
- Spatio-temporal predictive tasks for abnormal event detection in videos [60.02503434201552]
We propose new constrained pretext tasks to learn object level normality patterns.
Our approach consists in learning a mapping between down-scaled visual queries and their corresponding normal appearance and motion characteristics.
Experiments on several benchmark datasets demonstrate the effectiveness of our approach to localize and track anomalies.
arXiv Detail & Related papers (2022-10-27T19:45:12Z)
- Detection of Deepfake Videos Using Long Distance Attention [73.6659488380372]
Most existing detection methods treat the problem as a vanilla binary classification problem.
In this paper, the problem is treated as a special fine-grained classification problem since the differences between fake and real faces are very subtle.
A spatial-temporal model is proposed which has two components for capturing spatial and temporal forgery traces from a global perspective.
arXiv Detail & Related papers (2021-06-24T08:33:32Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm generalizes to detecting various types of real-world defects; a sketch of the CutPaste-style augmentation follows this entry.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
- Combining Deep Learning and Mathematical Morphology for Historical Map Segmentation [22.050293193182238]
Main map features can be retrieved and tracked through time for subsequent thematic analysis.
The goal of this work is the vectorization step, i.e., the extraction of vector shapes of the objects of interest from images of maps.
We are particularly interested in closed shape detection, e.g., buildings, building blocks, gardens and rivers, in order to monitor their temporal evolution; a sketch of one possible closed-shape extraction step follows this entry.
arXiv Detail & Related papers (2021-01-06T17:24:57Z)
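As a hypothetical illustration of such a closed-shape extraction step, one could close small gaps in a binarised line drawing, fill the enclosed regions, and trace each region boundary as a polygon. The scikit-image sketch below is an assumption for illustration only, not the paper's actual deep learning and morphology pipeline.

```python
# Minimal sketch of closed-shape extraction from a binarised map image with
# mathematical morphology (illustrative only, not the paper's pipeline).
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.measure import find_contours, label
from skimage.morphology import binary_closing, disk

def extract_closed_shapes(line_mask: np.ndarray, min_area: int = 50):
    """line_mask: boolean array, True on map line work (e.g. building outlines)."""
    closed = binary_closing(line_mask, disk(2))   # bridge small gaps in the lines
    filled = binary_fill_holes(closed)            # fill enclosed regions
    regions = filled & ~closed                    # keep interiors, drop the strokes
    labels = label(regions)
    polygons = []
    for region_id in range(1, labels.max() + 1):
        region = labels == region_id
        if region.sum() < min_area:               # discard tiny speckles
            continue
        contours = find_contours(region, 0.5)     # boundary as (row, col) vertices
        if contours:
            polygons.append(max(contours, key=len))
    return polygons
```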
- Geography-Aware Self-Supervised Learning [79.4009241781968]
We show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks.
We propose novel training methods that exploit the spatially aligned structure of remote sensing data.
Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing.
arXiv Detail & Related papers (2020-11-19T17:29:13Z)